2309.06863
Lavender Autonomous Navigation with Semantic Segmentation at the Edge
Achieving success in agricultural activities heavily relies on precise navigation in row crop fields. Recently, segmentation-based navigation has emerged as a reliable technique when GPS-based localization is unavailable or higher accuracy is needed due to vegetation or unfavorable weather conditions. It also comes in handy when plants are growing rapidly and require an online adaptation of the navigation algorithm. This work applies a segmentation-based visual agnostic navigation algorithm to lavender fields, considering both simulation and real-world scenarios. The effectiveness of this approach is validated through a wide set of experimental tests, which show the capability of the proposed solution to generalize over different scenarios and provide highly-reliable results.
Alessandro Navone, Fabrizio Romanelli, Marco Ambrosio, Mauro Martini, Simone Angarano, Marcello Chiaberge
2023-09-13T10:20:13Z
http://arxiv.org/abs/2309.06863v1
# Lavender Autonomous Navigation with Semantic Segmentation at the Edge ###### Abstract Achieving success in agricultural activities heavily relies on precise navigation in row crop fields. Recently, segmentation-based navigation has emerged as a reliable technique when GPS-based localization is unavailable or higher accuracy is needed due to vegetation or unfavorable weather conditions. It also comes in handy when plants are growing rapidly and require an online adaptation of the navigation algorithm. This work applies a segmentation-based visual agnostic navigation algorithm to lavender fields, considering both simulation and real-world scenarios. The effectiveness of this approach is validated through a wide set of experimental tests, which show the capability of the proposed solution to generalize over different scenarios and provide highly-reliable results. Autonomous Navigation Semantic Segmentation Precision Agriculture ## 1 Introduction In recent times, the food and farming industries have been facing a significant rise in global demand for food production, resulting in an increased need for resources [1]. As a result, there is a growing requirement for new methods and technologies to enhance efficiency and productivity while also ensuring sustainability throughout the process. As a consequence, farming industries focused their resources on developing new techniques to boost productivity and lower production costs, thereby reducing the need for labor-intensive human work [2]. The use of Deep Learning (DL) plays a significant role in the shift towards autonomous agents carrying out tasks in Agriculture 3.0 and 4.0 scenarios. Its ability to analyze data from various sources and reduce the likelihood of human errors makes it a valuable tool that can generalize and reduce tedious work [3]. DL has been proven to be beneficial for autonomous guidance systems, which include automating tasks such as navigation along the fields [4], including localization [5], global path planning [6] and local motion planning [7], [8]. Moreover, among the different kinds of farming techniques, it emerged how row-organized crops are the most widely adopted farming arrangement, representing around 75% of the USA farmland [9]. The aim of this research is to address issues that arise when traditional localization methods, like GPS, are inaccurate due to factors such as weather conditions, obstructions, or signal interference caused by tall vegetation. Robust navigation in fields can be considered the starting point to carry out more complex and structural tasks such as crop monitoring [10], diseases detection [11][12], spraying pesticides in a more localized and efficient way [13], fruit counting [14], harvesting [15] and monitoring the crop status and eventual diseases [11][12]. Within this context, autonomous guidance systems and artificial intelligence have a primary role in achieving the objectives above. The aim of this project is to create a motion planner that can navigate through plant rows of medium to high height. Many existing examples face challenges such as high costs and limited scalability. Typically, autonomous systems that navigate through row crops use high-precision GPS receivers and accuracy enhancement techniques [16], or a combination of sensors including lasers and GPS [17]. However, vegetation can interfere with the GPS signal reducing its accuracy and reliability [18]. 
Recent solutions involve the employment of multiple sensors such as GPS, inertial navigation systems (INS), wheel encoders, and LIDARs to improve localization accuracy [19]; however, equipment composed of multiple sensors can lead to an increase in the costs of the platform. A visual odometry system that uses a downward-looking camera to reduce costs was proposed in [20]. Nonetheless, the authors noted that accuracy tends to decrease when the path is longer due to the accumulation of odometric error, and therefore, to keep the error bounded, integration with an absolute reference is necessary. The main contribution of this work can be identified as the extensive experimentation carried out to validate the proposed method, which proves its generalization capabilities from a simulation to a real-world environment. The rest of this work can be summarized as follows: Section 2 introduces the adopted solution for visual-based navigation in row crops, starting from the segmentation model to the adopted control algorithm. Section 3 presents the simulation environment and the experimental setup and, later, reports the obtained results for both simulation and real-world tests. Finally, Section 4 summarizes and comments on the main points of this work.

## 2 Methodology

This study presents a new method for navigating without relying on position, using RGB-D data. The control algorithm created for this approach utilizes only real-time visual information to move through the rows of medium-vegetation crops, such as lavender. Additionally, this solution incorporates advancements in AI that enable "edge inferencing," where the robot's computer, often resource-limited, is able to process the input data in real time. The basic idea behind this work consists of the generalization of the visual control algorithms proposed for high-vegetation crops in [8] to lower-vegetation fields, obtaining a reliable and continuous control using a cost-effective set of sensors. In fact, the proposed system only exploits RGB and depth images, overcoming the problems experienced with GPS-based solutions. Furthermore, the system can seamlessly integrate into a comprehensive navigation configuration, which includes a waypoint generator, a global path planner, and a classic navigation system based on a Dynamic Window Approach (DWA) that controls the robot outside of the rows [4].

Figure 1: Overall scheme of the pipeline of the proposed approach.

At instant \(t\), a camera located at the front of the UGV platform captures the RGB frame, denoted as \(\mathbf{X}_{rgb}^{t}\in\mathbb{R}^{h\times w\times c}\), and the depth frame, denoted as \(\mathbf{X}_{depth}^{t}\in\mathbb{R}^{h\times w}\). Their dimensions are determined by the number of rows, represented by \(h\), the number of columns, represented by \(w\), and the number of channels, represented by \(c\). The RGB frame is later segmented semantically through a segmentation neural network, indicated as \(H(\cdot)\). Its output, namely \(\hat{\mathbf{X}}_{seg}^{t}\), is a binary segmentation mask that marks the crop vegetation with ones, while the remaining areas of the frame are represented by zeroes.

\[\hat{\mathbf{X}}_{seg}^{t}=H\left(\mathbf{X}_{rgb}^{t}\right) \tag{1}\]

To achieve a more accurate determination of the end of the row's position, we eliminate noise from the segmented frame.
We assess each column of the frame, denoted as \(\hat{\mathbf{X}}_{seg}^{t}(:,j)\), and if over 97% of the column is recognized as background, we set the remaining portion to zero. Moreover, to further increase the algorithm's robustness, the segmentation masks of the last \(N\) time instants, i.e. \(\{t-N,\dots,t\}\), are superimposed, obtaining a cumulative segmentation mask.

\[\hat{\mathbf{X}}_{CumSeg}^{t}=\bigcup_{k=t-N}^{t}\hat{\mathbf{X}}_{seg}^{k} \tag{2}\]

where \(\hat{\mathbf{X}}_{CumSeg}^{t}\) is the cumulative segmentation mask, and \(\bigcup\) represents the bitwise OR operation between the several segmentation masks. Later on, the depth frame \(\mathbf{X}_{d}^{t}\) is used to cut the cumulative segmented frame at a threshold distance \(d_{th}\) in order to ignore the farther segmentation data and better identify the continuation of the row.

\[\hat{\mathbf{X}}_{SegDepth}^{t}(i,j)=\begin{cases}0,\;\text{if}\;\hat{\mathbf{X}}_{CumSeg}^{t}(i,j)\cdot\mathbf{X}_{d}^{t}(i,j)>d_{th}\\ 1,\;\text{if}\;\hat{\mathbf{X}}_{CumSeg}^{t}(i,j)\cdot\mathbf{X}_{d}^{t}(i,j)\leq d_{th}\end{cases} \tag{3}\]

where \(i=0,\dots,h\), \(j=0,\dots,w\), and \(\hat{\mathbf{X}}_{SegDepth}^{t}\) is the segmentation frame cut with the depth information. In order to determine the center of the row and generate velocity commands, the columns of the depth-cut segmented image are summed, obtaining a histogram \(\mathbf{h}^{t}\), as in Equation 4. The practical idea behind this is to estimate the amount of vegetation per column.

\[\mathbf{h}_{j}^{t}=\sum_{i=1}^{h}\hat{\mathbf{X}}_{SegDepth}^{t}(i,j) \tag{4}\]

After obtaining the histogram \(\mathbf{h}_{j}^{t}\), it is evident how empty regions, namely clusters of zeros, represent regions in the field of view of the camera where no vegetation is present. Thus, identifying the widest cluster of zeros equals finding the continuation of the row. Therefore, the desired cluster is identified with the following steps:

1. The zeroes in the histogram are grouped into clusters if they are in contiguous positions.
2. If clusters are smaller than a threshold (i.e., smaller than 3 elements), they are discarded.
3. The largest cluster is considered.
4. If the largest cluster occupies more than 80% of the space, it is considered an end-of-row condition.

Once the desired cluster is identified, the distance \(d\) between the center of the frame and the cluster center is considered and employed to calculate velocity commands.

### Segmentation Network

Our real-time crop segmentation approach is based on previous works [8], [21], which utilize a network consisting of a MobileNetV3 backbone for feature extraction and an efficient LR-ASPP segmentation head, as represented in Figure 2. The LR-ASPP utilizes depth-wise convolutions, channel-wise attention, and residual skip connections to ensure a balance between accuracy and inference speed.

### Velocity Command Generation

Once the center of the largest cluster is identified and the distance \(d\) from the center of the frame is evaluated, the velocity control commands can be generated. Parabolic functions are used to generate the velocity commands.

\[v_{x}=v_{x,max}\cdot\left(1-\frac{d^{2}}{\left(\frac{w}{2}\right)^{2}}\right) \tag{5}\]

\[\omega_{z}=-\omega_{z,max}\cdot sign(d)\cdot\frac{d^{2}}{\left(\frac{w}{2}\right)^{2}} \tag{6}\]

In Equations 5 and 6, \(v_{x,max}\) and \(\omega_{z,max}\) indicate the maximum linear and angular velocities, respectively.
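For clarity, the control step described above can be summarized in a short NumPy sketch. This is a minimal illustration, not the authors' implementation: the function and variable names are ours, the sign convention for \(d\) is assumed, and stopping the robot on the end-of-row condition is an illustrative choice.

```python
import numpy as np

def control_step(seg_masks, depth, d_th, v_max, w_max, min_cluster=3, eor_ratio=0.8):
    """One control iteration: seg_masks are the last N HxW binary masks,
    depth is the HxW depth frame (same units as d_th)."""
    h, w = depth.shape
    # Eq. (2): cumulative mask as bitwise OR of the last N segmentation masks
    cum = np.zeros((h, w), dtype=bool)
    for m in seg_masks:
        cum |= m.astype(bool)
    # Eq. (3): keep only vegetation closer than the depth threshold
    seg_depth = cum & (depth <= d_th)
    # Eq. (4): per-column vegetation histogram
    hist = seg_depth.sum(axis=0)
    # Group contiguous zero columns into clusters and keep the widest one
    clusters, start = [], None
    for j, v in enumerate(np.append(hist, 1)):       # sentinel closes the last run
        if v == 0 and start is None:
            start = j
        elif v != 0 and start is not None:
            if j - start >= min_cluster:              # drop clusters below threshold
                clusters.append((start, j))
            start = None
    if not clusters:
        return 0.0, 0.0
    c0, c1 = max(clusters, key=lambda c: c[1] - c[0])
    if (c1 - c0) > eor_ratio * w:                     # end-of-row condition
        return 0.0, 0.0
    d = (c0 + c1) / 2.0 - w / 2.0                     # offset of cluster center from frame center
    # Eqs. (5)-(6): parabolic velocity laws
    v_x = v_max * (1.0 - d**2 / (w / 2.0) ** 2)
    w_z = -w_max * np.sign(d) * d**2 / (w / 2.0) ** 2
    return v_x, w_z
```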
The function \(sign(\cdot)\) is used to determine the sign of a value, resulting in a value of 1 if the argument is greater than or equal to zero and -1 if it is less than zero. ## 3 Tests and Results In this section, we present the results of experimental tests conducted both in simulation and in a real lavender field using a robot equipped with semantic segmentation capabilities at the edge. The objective of these tests was to evaluate the performance of the lavender autonomous navigation system and assess its effectiveness in identifying and avoiding obstacles. Specifically, the system has been tested on the field in order to assess its capability to be centered with respect to the rows of lavender as it travels along the row autonomously. Figure 2: Sample frames and relative segmentation masks from AgriSeg dataset including synthetic data (a) and real data (b). ### Evaluation Metrics To quantitatively assess the performance of the lavender autonomous navigation system, we employed the following evaluation metrics: * Navigation Success Rate: This metric measures the success rate of the autonomous robot in navigating through the lavender field without colliding with any obstacles. It is calculated as the percentage of successful navigation trials out of the total number of trials conducted. * Root Mean Square Error (RMSE) between the actual trajectory and the center of the rows: This metric measures the autonomous system's capability to maintain an equal distance from the rows while navigating toward the end of the field. Regarding the navigation success rate metrics, we will report the percentage of successful navigation trials out of the total number of trials for the experiments in the next section. While for the simulation test, we have an absolute ground truth provided by the simulator, in the real-world experiments, we considered the effective ground truth of the robot path to be the data acquired from the GPS RTK sensor during the experimental runs. Furthermore, in order to get a more accurate ground truth, we resorted to a Visual Odometry system fusing the trajectory estimated by an optical flow algorithm and the one estimated by the ORB-SLAM2 algorithm (a modified version of the one presented in [22]). The GPS RTK and Visual Odometry paths are finally fused and used as the trajectory ground truth for all the experiments. In order to assess how close the robot trajectory is from the center of the lavender rows, a set of measurements have been performed, measuring the GPS RTK coordinates, with \(1\) m step along the path, at the middle of the rows. ### Segmentation Network Training Frames with dimensions \(w=224\) and \(h=224\) were considered as input of the Segmentation Neural network. Moreover, the number of channels \(c\) is equal to 3 since they are RGB images. The model was trained on a combination of synthetic and real images from the lavender section of the AgriSeg dataset [23]. The dataset consisted of 4800 synthetic images and 1100 real ones. The model underwent 50 epochs of training with an ADAM optimizer, utilizing a learning rate of \(3\cdot 10^{-4}\). The data was augmented through cropping, flipping, grayscaling, and random jitters. The training of the model was conducted using TensorFlow 2 environment, starting from an ImageNet pretrained network on a single Nvidia RTX 3090 GPU. 
### Simulation Tests

#### 3.3.1 Simulation Setup

The software used to perform the simulation is Gazebo 1, since it is one of the most supported and widespread simulators for robotics applications. Blender 2 is used to create realistic models of plants and terrain that are exported and assembled in a Gazebo world using a procedural tool [23]. The robot model used in the simulation is the Husky UGV, the same one used in the real tests to reduce the gap between simulation and reality. Simulations are performed in rows of \(8-10\) m length, and at least three runs are performed to assess the repeatability of the control algorithm.

Footnote 1: [https://classic.gazebosim.org/](https://classic.gazebosim.org/)

Footnote 2: [https://www.blender.org/download/lts/3-3/](https://www.blender.org/download/lts/3-3/)

#### 3.3.2 Simulation Results

The results obtained from the simulation are evaluated according to the metrics reported in Section 3.1. The Navigation Success Rate is 1.0 since, in all the runs, the robot managed to reach the end of the row without hitting plants. The overall behavior of the robot was robust and reliable navigation until the end of the row. The RMSE of the robot trajectory with respect to the center for all the performed tests is reported in Table 1. In addition, Figure 3 represents the trajectory followed by the robot during a simulation and the central line between the plants.

### Experimental Tests

#### 3.4.1 Experimental Setup

The experimental tests were conducted in a real lavender field with varying degrees of complexity in terms of terrain, vegetation density, and lighting conditions. The autonomous robot used in the experiments was equipped with state-of-the-art sensors, including cameras for capturing RGB images and depth information, and a powerful onboard processing unit capable of performing semantic segmentation at the edge. The lavender field, where the experimental tests have been run, is located in Tuscania, Italy. Two lavender fields are located on the same estate. One field is composed of \(22\) lavender rows with an average length of \(35\) m and a distance between rows of about \(80\) cm. The other field is composed of \(16\) lavender rows with an average length of \(25\) m and a distance between rows of about \(65\) cm. The terrain was even, and there were weeds along the rows, some taller than the robot itself. The lighting conditions varied considerably between the experiments, from sunny to cloudy, also changing during every single run. An image of the environment is reported in Figure 4, where a first-person view of the lavender rows is visible, together with the cloudy sky and weeds along the path. The robot used in the tests is a ClearPath Robotics Husky UGV, shown in Figure 5. Husky is a medium-sized robotic development platform with a large payload capacity and power system. Stereo cameras, LIDAR, GPS, and IMUs are mounted on the UGV in order to achieve a high level of autonomy. The Husky UGV has a rugged construction and a high-torque drivetrain that enables the robot to work in harsh environments and on uneven terrain. Husky is also fully supported in ROS, where the algorithms have been developed, tested, and used in real experiments. Its external dimensions are \(990\times 670\times 390\) mm, while its internal dimensions are \(296\times 411\times 155\) mm. It weighs \(50\) kg, and it can support a payload of \(75\) kg at \(1\) m/s maximum speed with a \(3\)-hour battery autonomy.
The real experiments consist of the robot running autonomously between the lavender rows with the aim of maintaining a proper distance from the lavender plants, exploiting the semantic segmentation algorithms presented in Section 2, while moving towards the end of the row.

#### 3.4.2 Experimental Results

During the experimental tests, the lavender autonomous navigation system exhibited robust performance and demonstrated promising results. Here, we present the key findings based on the evaluation metrics mentioned in Section 3.1.

Figure 3: Robot trajectories for one simulation run. The green dashed lines represent the plants, the blue line is the ideal central line, and the golden line represents the actual robot trajectory.

Figure 4: A first-person view of the lavender rows with the path followed by the robot during the experimental tests.

Figure 5: An aerial view of the ClearPath Husky UGV, where the GPS antennas, IMU, LIDAR, and Intel RealSense D435 Camera are visible.

Table 1: Results of simulation and real-world tests. In simulation tests, the RMSE is computed between the geometrical center of the row and the actual trajectory; in real-world tests, it is computed between the actual robot trajectory (from the fused VO and GPS trajectory) and the center of the row.

| Test number | Test type | Path length [m] | RMSE [m] |
| --- | --- | --- | --- |
| test 1 | simulation | 8.26 | 0.077 |
| test 2 | simulation | 8.36 | 0.073 |
| test 3 | simulation | 8.14 | 0.082 |
| overall | simulation | 24.76 | 0.077 ± 0.004 |
| test 1 | real world | 23.27 | 0.289 |
| test 2 | real world | 21.62 | 0.259 |
| test 3 | real world | 11.88 | 0.049 |
| test 4 | real world | 10.52 | 0.051 |
| test 5 | real world | 18.55 | 0.152 |
| test 6 | real world | 17.98 | 0.113 |
| test 7 | real world | 21.12 | 0.235 |
| overall | real world | 124.94 | 0.164 ± 0.098 |

The autonomous robot achieved a navigation success rate of 70%. This indicates that the lavender autonomous navigation system effectively planned and executed collision-free paths, successfully maneuvering through the lavender field while avoiding obstacles. In order to assess this metric, we performed \(10\) runs within different lavender rows of different lengths, and the robot successfully moved without any collision in \(7\) tests. The \(3\) runs where the robot failed to perform its task were characterized by strong changes in lighting conditions (e.g., the sky conditions changed during the experiment, passing from cloudy to fully sunny). However, even in these cases, the robot was able to complete about \(80\)% of the path. Furthermore, in order to assess the system's ability to maintain a centered position with respect to the lavender rows, we measured the RMSE between the actual trajectory and the center of the rows. Here we report three significant results: one test for a long path (about \(20\) m) that presented the highest RMSE, one for a long path (about \(20\) m) that presented the lowest RMSE, and one for a short path of about \(10\) m. These results are summarized in Table 1. From these results, we noticed that the accuracy decreases when the path is longer due to the accumulation of odometric errors; also, tests 1 and 2 were performed along rows where the lavender plants were blooming, and therefore the rows were narrower, compared to those of test 3, where the lavender plants were lower.
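For reference, the centering error reported above can be computed from a logged trajectory with a short script. This is a sketch only: taking the nearest centerline point as the cross-track error is our simplification of the metric described in Section 3.1, and the array names are ours.

```python
import numpy as np

def rmse_to_centerline(traj_xy, center_xy):
    """RMSE between a robot trajectory and the row centerline.
    traj_xy: (N, 2) fused VO/GPS positions; center_xy: (M, 2) centerline samples
    (e.g., the RTK measurements taken every 1 m along the middle of the row)."""
    diffs = traj_xy[:, None, :] - center_xy[None, :, :]   # (N, M, 2) pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1).min(axis=1)    # nearest-point distance per pose
    return float(np.sqrt(np.mean(dists ** 2)))
```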
Finally, we present the details about the trajectories for the three tests just introduced. Figure 6 shows the experimental results for the three tests; there, we reported, for each test, the Visual Odometry estimated trajectory, the GPS RTK trajectory, the VO and GPS fused trajectory, and the center of the row. In the same figure, we also reported the row boundaries as red dotted lines. These boundaries are taken with a rough estimation of the lavender plants' center and help to determine the surface that can be traveled between the rows. As previously pointed out, test number 3 shows the best results in terms of the ability to stay close to the row's center because of two factors: the path length and the lavender plant state (lower and smaller than in cases 1 and 2).

Figure 6: Robot trajectories for three tests. The red solid line represents the center of the row, the green dashed line is the Visual Odometry trajectory, the blue dashed line is the GPS trajectory, whilst the black dashed line represents the VO and GPS fused trajectory. For clarity's sake, we reported the boundaries of the lavender rows as red dotted lines.

## 4 Conclusions

Overall, the experimental results demonstrate the efficacy of the lavender autonomous navigation system with semantic segmentation at the edge. The high accuracy in semantic segmentation showcases the system's potential for autonomous operation in real-world lavender farming environments. These results validate the effectiveness of the proposed system in enabling autonomous robots to navigate through lavender fields efficiently, reducing manual intervention, and enhancing productivity in lavender cultivation. However, further testing and refinement are necessary to ensure robustness and adaptability across a wider range of lavender field conditions and environmental variations.

#### 4.0.1 Acknowledgements

This work has been developed with the contribution of Politecnico di Torino Interdepartmental Center for Service Robotics PIC4SeR 3.

Footnote 3: www.pic4ser.polito.it
2309.08696
RIFL: A Reliable Link Layer Network Protocol for Data Center Communication
More and more latency-sensitive services and applications are being deployed into the data center. Performance can be limited by the high latency of the network interconnect. Because the conventional network stack is designed not only for LAN, but also for WAN, it carries a great amount of redundancy that is not required in a data center network. This paper introduces the concept of a three-layer protocol stack that can fulfill the exact demands of data center network communications. The detailed design and implementation of the first layer of the stack, which we call RIFL, is presented. A novel low latency in-band hop-by-hop re-transmission protocol is proposed and adopted in RIFL, which guarantees lossless transmission in a data center environment. Experimental results show that RIFL achieves 110 nanoseconds point-to-point latency on 10-meter Active Optical Cables, at a line rate of 112 Gbps. RIFL is a multi-lane protocol with scalable throughput up to multi-hundred gigabits per second. It can be the enabler of low latency, high throughput, flexible, scalable, and lossless data center networks.
Qianfeng Shen, Jun Zheng, Paul Chow
2023-09-15T18:38:16Z
http://arxiv.org/abs/2309.08696v1
# RIFL: A Reliable Link Layer Network Protocol for Data Center Communication

###### Abstract

More and more latency-sensitive services and applications are being deployed into the data center. Performance can be limited by the high latency of the network interconnect. Because the conventional network stack is designed not only for LAN, but also for WAN, it carries a great amount of redundancy that is not required in a data center network. This paper introduces the concept of a three-layer protocol stack that can fulfill the exact demands of data center network communications. The detailed design and implementation of the first layer of the stack, which we call RIFL, is presented. A novel low latency in-band hop-by-hop re-transmission protocol is proposed and adopted in RIFL, which guarantees lossless transmission in a data center environment. Experimental results show that RIFL achieves 110 nanoseconds point-to-point latency on 10-meter Active Optical Cables, at a line rate of 112 Gbps. RIFL is a multi-lane protocol with scalable throughput up to multi-hundred gigabits per second. It can be the enabler of low latency, high throughput, flexible, scalable, and lossless data center networks.

## 1 Introduction

Major data center services and applications such as remote direct memory access (RDMA), machine learning, and cloud storage demand the network interconnect to be low latency and lossless while preserving high bandwidth. Previous works, such as [1, 2], demonstrate how the performance of applications in various fields can be drastically impacted by interconnect latency. It is important to realize that most of the technologies and concepts used in today's data center networks (DCNs) existed before the large-scale data centers of today were even imagined. For example, IP protocols were first defined in 1974 [3], well before any massive data center was built. Today, with rapidly evolving technologies, it is time to explore new approaches for the DCN that are designed for the needs of today's data center. The conventional TCP/IP stack is designed to work reliably not only in a local area network (LAN), but also in a wide area network (WAN). The physical properties of a LAN and a WAN are significantly different. Both bandwidth-wise and latency-wise [4], TCP/IP and UDP/IP carry too much redundancy when used in a LAN. Considering the diameter of a data center server room is rarely more than 100 meters, a DCN is essentially a LAN. There should be a more efficient protocol stack that fulfills the exact needs of a DCN. Nevertheless, protocols based on TCP/IP and UDP/IP [5, 6] still dominate the data center market. One of the most important reasons for cloud providers to use these protocols is that hardware changes would be required to both the end devices and the network switches to deploy a new protocol in a data center. Traditionally, the network switches and the NICs are all implemented using ASICs. It would take years to design, fabricate, test, and deploy the ASICs for a new protocol. Compatibility with the established infrastructure and the barrier to developing new ASICs make it extremely difficult to introduce major changes. However, it is still interesting to know what opportunities exist that might influence DCN infrastructure over time.
The basis of our work is to build an experimental platform that enables us to explore what might be possible if we could start over, i.e., how would we build the DCN infrastructure starting with what we know is feasible today and not be constrained by any legacy requirements, either technical or business. In this paper, we will show what we can do by leveraging the capabilities of modern FPGAs. Today, the number of high-speed transceivers is quickly increasing in modern FPGAs. Off-the-shelf FPGAs containing multiple QSFP28 ports are already available in the market [7], showing that a flexible and economically efficient approach to redesigning DCNs starting from the very bottom layer of the protocol stack can be prototyped without needing new ASICs. There are many network protocols apart from TCP/IP and UDP/IP. However, some of them [8, 9] are dedicated to the link layer, providing limited scalability and flexibility. Some of them are based on the Media-Independent Interface (MII) [10] or UDP [6], and you cannot remove the redundancy carried with the conventional network stack. Others such as Infiniband [11] implement re-transmission in their Transport Layer. We will discuss its inefficiency in Section 2. To meet the exact demands of a DCN, we propose a new protocol stack as follows: **Layer 1: Link Layer** This layer is implemented immediately next to the transceivers. It is a combination of the data link layer (layer 2) and the physical layer (layer 1) in the OSI model. It should provide a line protocol with appropriate data packetization, channel bonding and clock compensation. Re-transmission should also be a part of this layer to resolve link-level data corruption. The benefits of implementing re-transmission at this layer is discussed in Section 2. Beyond this layer, there should be no data corruption caused by link noise. **Layer 2: Network Layer** This layer should provide a low latency routing scheme that avoids using a centralized routing table. Switch initiated congestion control mechanisms should also be implemented in this layer. Beyond this layer, all the data transfers should be lossless. Anything sitting above this layer does not have to worry about checksums, re-transmission, or congestion at all. **Layer 3: Application Layer** This layer consists of two parts: hardware and software. The hardware serves as an accelerator for common DCN applications and services, e.g., a near-memory computing engine to reduce the round trips for RDMA. The software abstracts the usage of the hardware and provides the software programmer an easy-to-use user interface. With this protocol stack, we envision a lossless network can be built. In our prototype, at its Layer 2 interface, this network can provide lossless links with less than 300 ns typical latency per hop with bandwidths beyond 100 Gbps. **This paper focuses on the Link Layer design, named RIFL**. The Network Layer and the Application Layer designs will be the subject of our future work. The rest of this paper is organized as follows: Section 2 discusses the physical properties of a DCN and how they can be leveraged to build a more efficient Link Layer protocol. In Section 3, we define the RIFL Frames. Section 4 introduces the RIFL protocols. Section 5 presents the hardware implementation of RIFL. Section 6 provides performance results. Section 7 discusses the related work and Section 8 concludes this work. 
## 2 Layer 1 - the Link Layer

The goal of our Layer 1 is to provide a reliable Link Layer point-to-point protocol as a foundation for the higher layers. This layer should be low-latency, high-bandwidth and use minimal hardware resources. Reliability here means correcting any bit errors that occur during transmission across the link. With a reliable link, the higher layers need not be concerned with any data integrity issues resulting from the physical transmission. In this section we cover the following topics: The development of our Layer 1 first requires the selection of the mechanism for error detection and correction. After selecting re-transmission, we show that it can work within the constraints of a DCN. After justifying hop-by-hop Link Layer re-transmission, we show that an additional property can be introduced. Finally, we explain why we can solely rely on negative acknowledgments (NACKs) as the re-transmission notifications in DCNs, and why doing so is critical for the efficiency. Given these justifications we can then develop the circuit for our protocol implementation. We start by imposing the first constraint:

**A the distance between any two nodes within a DCN is less than 500 meters.**

### Forward Error Correction (FEC) vs Re-transmission

There are two major approaches to eliminate the effect of data corruption caused by bit errors: FEC and re-transmission. FEC is widely used in wireless and low-level wired communication. It requires the sender to send redundant data along with the payload. The redundant data, which is usually an error correction code (ECC), can be used to detect the errors in the payload as well as correct the errors. Re-transmission requires redundant data as well. The redundant data is usually a checksum. However, the checksum is not used to correct the errors. Instead, it only needs to carry enough information to detect the errors in the payload. While sending data to the receiver, the sender keeps a copy of the most recently transmitted data. Once an error is detected, the receiver notifies the sender to resend the corrupted data. While FEC detects and corrects the errors, the checksum only detects the errors. Consequently, for the same size of the payload, the size of the ECC used by FEC is much larger than the size of the checksum used by re-transmission, which means the bandwidth overhead for FEC is much larger than for re-transmission. Moreover, because FEC usually involves large matrix multiplications, the typical latency overhead for FEC is much larger as well. Therefore, FEC is more suitable in situations where re-transmission is impossible or very expensive, e.g., in one-way communications such as radio networks or simplex links, or in any bidirectional communication that operates at a very high bit error ratio (BER). In current DCNs, 100G Ethernet is slowly becoming the dominant interconnect technology [12]. The commercially available QSFP28 cables used by 100G Ethernet can guarantee BERs better than \(10^{-12}\) without using FEC. Under such a low BER, re-transmission is much more efficient than FEC. However, as the next generation cable technologies pursue higher throughput per lane, their associated BER can be significantly higher than \(10^{-12}\).
Thus, for better compatibility with the future technologies, our BER constraint is: **B the effective BER of the link that RIFL operates on must not exceed \(10^{-7}\).** We set the minimal BER requirement as \(10^{-7}\) because in our simulations, we found that in any link shorter than 500 meters with a BER better than \(10^{-7}\), re-transmission can be done efficiently. Plus, a minimal BER of \(10^{-7}\) means RIFL can work not only with the current popular cables, but also with any future physical links providing BERs better than \(10^{-7}\). For links whose BERs are worse than \(10^{-7}\), FEC must be incorporated to guarantee reliable transmissions. Otherwise, the bandwidth will be mainly occupied by re-transmissions instead of regular data transmissions. Nevertheless, even if FEC is used, RIFL still has advantages because it only needs a lightweight FEC code to improve BER to better than \(10^{-7}\) while other protocols, such as Ethernet, require much lower post-FEC BERs [13]. Even with FEC, they still cannot guarantee lossless transmissions. To summarize, we choose re-transmission as the main error recovery method for RIFL. When BER is higher than \(10^{-7}\), FEC has to be applied to improve the BER so that constraint B can be satisfied. ### Re-transmission Efficiency vs Round Trip Time (RTT) To guarantee a lossless link, the re-transmission mechanism should be designed for the worst case. Because any Frame1 being transmitted during the RTT may have errors, the size of the re-transmission buffer, denoted as \(S_{\text{retrans}}\), must be larger than the size of the data being transmitted during the largest RTT between the sender and the receiver, namely: Footnote 1: **Frame**: the basic unit of data transmitted across the link. Any data is transmitted along the link by the means of one or multiple Frames. \[S_{\text{retrans}}\geqslant\lambda_{\text{line}}*RTT \tag{1}\] where \(\lambda_{\text{line}}\) denotes the line rate. The larger the RTT is, the larger the re-transmission buffer needs to be. It is worth noting that when line rate is larger than 100 Gbps and RTT exceeds 100 \(\mu s\), it requires more than one megabytes of re-transmission buffer. It is no longer suitable to use embedded memories such as SRAM as the buffer. Otherwise, the circuit area will be too large. This issue is encountered by some TCP implementations [14, 15]. Their solution is to use DDR memory as an alternative. However, it further increases the RTT and complexity because the latency of a DDR memory is not constant and is sometimes more than 100 nanoseconds [16], whereas the latency of an embedded memory is much more stable and is usually a few nanoseconds. Moreover, a shorter RTT also lowers the latency and bandwidth overhead introduced by re-transmission: a shorter RTT means quicker interaction between the sender and the receiver, and a shorter stalling time after a Frame error is detected. Therefore, for optimal efficiency, re-transmission should be implemented in a protocol layer where the RTT is minimized. The RTT consists of two parts, the circuit delay (\(T_{\text{circuit}}\)) and the cable delay (\(T_{\text{cable}}\)). The circuit delay is the time the circuit logic spends to process and forward the data, including the latency introduced by the transceivers (\(T_{\text{gt}}\)), the upper layer protocols (\(T_{\text{proto}}\)), as well as the buffer queues (\(T_{\text{buffer}}\)). 
The cable delay is the time the data travels along the cable, determined by the speed of light and the total link length. Assuming both directions of the link are symmetric, we have: \[RTT=2*(T_{\text{circuit}}+T_{\text{cable}}) \tag{2}\] \[T_{\text{circuit}}=T_{\text{gt}}+T_{\text{proto}}+T_{\text{buffer}} \tag{3}\] \[T_{\text{cable}}=\frac{L_{\text{cable}}}{C} \tag{4}\] where \(C\) denotes the speed of light in the cable and \(L_{\text{cable}}\) denotes the link length. While the \(T_{\text{cable}}\) is a constant as the link length will not grow or shrink over time, the \(T_{\text{circuit}}\) can vary in a very wide range, depending on the protocol layer where the RTT is measured. If re-transmission is implemented within or above the Network Layer, where more than two nodes are involved and the data needs to go across a switching node to be routed to the destination, then end-to-end RTT is used. Otherwise, if re-transmission is done hop-by-hop within the Link Layer, then hop-by-hop RTT is used. Figure 1 shows the difference between end-to-end and hop-by-hop. For end-to-end, the worst case RTT can be hundreds or thousands of times larger than the typical RTT. When the network is congested, the \(T_{\text{buffer}}\) can be unpredictably large. Furthermore, congestion can cause frame losses, frame losses lead to re-transmission, and re-transmission can intensify network congestion, causing a positive feedback. For hop-by-hop, because there is no congestion at this level, the RTT will be constant and there will be no congestion-caused frame loss. Although end-to-end re-transmission is adopted by protocols such as TCP and Infiniband, according to the above discussion, hop-by-hop is better for minimizing the memory usage, the latency and the bandwidth overhead because it achieves the minimal RTT. However, despite its significant advantages, re-transmission is seldom included in existing Link Layer protocols. One of the reasons we believe is related to the circuit area and complexity. The hardware implementation of a Link Layer protocol should not be heavy and power hungry. Specifically, a Link Layer protocol should not need megabytes of memory to function properly. In our case, assuming the line rate is 100 Gbps and the \(T_{\text{circuit}}\) is 100 nanoseconds, according to Equations 1, 2 and 4 and the Constraint A, the \(S_{\text{retrans}}\) required is no larger than 45 KB. The size is comparable to a CPU L1 cache, making Link Layer re-transmission feasible. In conclusion, in a DCN, re-transmission should be done hop-by-hop within the Link Layer. ### Leveraging Hop-by-Hop Link Layer Re-Transmission Once hop-by-hop Link Layer re-transmission is chosen, a unique and vital property can be added to the constraint set, that is: **C In the hop-by-hop Link Layer transmission, the receiver can assume that Frame N+1 will always arrive immediately after Frame N from the same sender.** Such an assumption is not true for end-to-end transmission protocols such as any Ethernet-based protocol, where Frames from multiple senders can be routed to the same receiver. The receiver may receive Frame N and Frame N+1 from different sources. The traffic can also stop at Frame N if none of the senders continues to send data to the receiver after Frame N. However, for the Link Layer, a receiver is always paired to the same sender at the other end of the cable. 
If the user at the sender stops sending valid data after Frame N, the Link Layer protocol can pack invalid/idle data into Frames to create Frame N+1 and the subsequent Frames. The invalid Frames can be used by the protocol internally without being delivered to the user. This is an extremely useful property for hop-by-hop Link Layer re-transmission. We will discuss how it can be leveraged in the upcoming sections. There is another equivalent expression of Constraint C that is worth emphasizing, i.e.: Figure 1: Hop-by-hop vs End-to-end **The receiver will never receive Frame N+1 before receiving Frame N** because in the hop-by-hop Link Layer transmission there is no buffer overflow caused by congestion. Starting from the sender logic, the data is handed over to the transceiver and then it is serialized, crosses the cable, is de-serialized, and finally it is handed over to the receiver logic. There can be a few bits that are not sampled, causing the link to be out-of-sync. However, there is no way that a whole Frame is lost during this process. ### ACK vs NACK ACK (acknowledgment) and NACK (negative acknowledgment) are the two possible acknowledgement mechanisms for retransmission. For ACK, the receiver sends acknowledgements whenever it receives correct Frames. For NACK, the receiver sends acknowledgements whenever it receives Frames with bit errors. In a DCN context, NACKs have a significantly better efficiency over ACKs: let p denote the Frame Error Ratio (FER2), and N denote the total number of Frames to be transmitted during a certain period. For ACK, at least N*(1-p) acknowledgements need to be transmitted from the receiver to the sender; For NACK, at least N*p negative acknowledgements are needed. In DCNs, as a result of Constraint B, p is much smaller than 1-p. Therefore, with NACKs, a much higher reverse channel bandwidth efficiency3 can be achieved compared to ACKs. Footnote 2: **Frame Error Ratio**: ratio of Frames received with errors over total Frames received. Nevertheless, for end-to-end re-transmission, reliability cannot be guaranteed with only NACKs and no ACKs. Assume Frame N is the last Frame to be transmitted from the sender to the receiver, and Frame N is dropped by an intermediate node (e.g., a switch). The receiver will never know that Frame N has been sent, hence no NACK will be generated. Similarly, the sender will never know that Frame N is not received, hence Frame N will not be re-transmitted. However, for hop-by-hop Link Layer re-transmission, with Constraint C, it is feasible to use only NACKs to achieve reliability, because there are always Frames being transmitted and none of them can be lost. They can only be corrupted. As a result, NACK is the acknowledgment mechanism we choose for RIFL. ### Summary In this section we have now provided the basis for RIFL. We summarize the characteristics here before describing its implementation: * Data corruption is handled by re-transmission. * The buffers required by re-transmission can be implemented entirely using embedded memories. * Link Layer frames will always arrive in sequence. * We will use NACKs to reduce bandwidth overhead introduced by acknowledgments. ## 3 Defining the RIFL frames In Section 2, we justified that Link Layer hop-by-hop re-transmission is an efficient solution for eliminating bit errors in DCNs. However, the protocol itself and its microarchitecture will also significantly impact the efficiency. Without a concrete protocol, we are still far away from the final answer. 
In this Section, we will define the RIFL Frames by answering the following questions: 1. The Frame Structure: What are the header fields in a RIFL Frame? 2. The Frame Size: How large is a Frame in RIFL? ### High-Level Exploration of the Data Frame Structure There is no universal definition of _Frame_. In Section 2 we defined a _Frame_ as the basic unit of data transmitted across the link. At higher protocol layers, we use the term _packet_ to denote a bundle of data, such as an IP packet. A packet will be transmitted as a number of RIFL Link Layer Frames. To function properly, Link Layer Frames not only carry the payload, but also carry other essential signals. For example, when re-transmission or flow control events occur, the corresponding control signals need to be exchanged between the sender and the receiver. There should be Frames that carry such information. However, such events are assumed to occur much less frequently than regular data transmission. For bandwidth efficiency there is no reason to include both the control signals and the payload in every Frame. We need to define different types of Frames. By functionality, we divide the Frames into the Data Frames and the Control Frames. The Data Frames are the Frames that carry the payload, and all the other Frames are Control Frames that help maintain state transitions. In a healthy link, most of the Frames being transmitted are Data Frames. It is important to define the Data Frame structure well so that it serves the goal of making RIFL a low latency, high bandwidth, lightweight (small circuit area) and lossless Link Layer protocol. Section 2 showed that the circuit area is mainly impacted by the cable length and the microarchitecture of the protocol, and it is less relevant to the Data Frame structure. When define the Data Frame structure, we should mainly study its impact on the latency and the bandwidth efficiency. ### Header Fields To make the bandwidth overhead small, only essential information should be included in the header of the Data Frames. First, to be able to detect any errors, a checksum must be included in every Data Frame. Second, a Data Frame should carry a Frame ID. Usually, there will be more than one Data Frame being transmitted during an RTT, so the Frame ID is used as the identifier to indicate which Data Frames should be re-transmitted when errors are detected. Third, for better granularity, a Data Frame should carry the information to indicate how many bytes in the payload are valid. Also, because any packet is divided into one or multiple Data Frames, there should be a marker in the Data Frame header to distinguish the end-of-packet Data Frames from other Data Frames, so that packet boundaries can be defined. Finally, for any Link Layer protocol, a line code should be adopted to re-align the data after deserialization. For 64b/66b encoding in Ethernet and Aurora [9], and 64b/67b encoding in Interlaken [8], the encoding is done independently from the protocol framing. Different from the conventional protocols, in RIFL, to minimize the complexity and the latency, the line code is integrated into every Frame. In summary, the Data Frame header should carry the following essential information: the checksum, the Frame ID, the count of valid bytes in the payload, the end-of-packet marker and the line code header. ### Data Frame Size The first decision RIFL made for the Data Frame size is to use a fixed frame size instead of a variable frame size. 
While a variable frame size is overall good for bandwidth efficiency, it is more complicated to implement, introduces longer latency, and requires a much larger buffer. Most importantly, a variable frame size introduces variable frame intervals (the difference of the arrival times between two adjacent frames), which can greatly increase the complexity of the re-transmission protocol. It is not worth sacrificing so much to save only three percent of the bandwidth. Thus, we only study the frame size impact for fixed-size Data Frames. We start by exploring the impact of the Data Frame size on the bandwidth efficiency. The following equation yields the bandwidth efficiency:

\[Eff_{bandwidth}=\left(1-\frac{S_{DHeader}}{S_{DFrame}}\right)\times R_{DFrame} \tag{5}\]

where \(Eff_{bandwidth}\) denotes the bandwidth efficiency, \(S_{DHeader}\) denotes the size of the header in a Data Frame, \(S_{DFrame}\) denotes the Data Frame size, and \(R_{DFrame}\) denotes the fraction of the Data Frames transmitted to all Frames transmitted. By Constraint C, there are continuous Frames transmitted, regardless of whether there is valid data to transmit. Letting \(R_{NDFrame}\) denote the fraction of all the non-Data Frames, we get:

\[R_{DFrame}=1-R_{NDFrame} \tag{6}\]

Assuming that, when an error is detected, on average there are \(N_{stall}\) subsequent non-Data Frames (including the re-transmitted Data Frames and the Control Frames) being transmitted, we get:

\[R_{NDFrame}=N_{stall}\times FER \tag{7}\]

Combining Equations 5, 6, and 7, we get:

\[Eff_{bandwidth}=\left(1-\frac{S_{DHeader}}{S_{DFrame}}\right)\times(1-N_{stall}\times FER) \tag{8}\]

where

\[FER=1-(1-BER)^{S_{DFrame}} \tag{9}\]

According to Equation 8, a higher bandwidth efficiency is achieved by reducing the ratio of \(S_{DHeader}\) to \(S_{DFrame}\), and minimizing \(N_{stall}\) and FER. Among the three factors, \(N_{stall}\) is mainly affected by the protocol design, while the others are mainly determined by \(S_{DFrame}\). For a good re-transmission protocol, most of the frames transmitted should be Data Frames. For environments with a low BER, \(R_{NDFrame}\) will be much smaller compared to the ratio of \(S_{DHeader}\) to \(S_{DFrame}\). So, the \(Eff_{bandwidth}\) will be mainly impacted by the ratio of \(S_{DHeader}\) to \(S_{DFrame}\). As \(S_{DFrame}\) increases, \(S_{DHeader}\) will also increase because some of the header fields, such as the checksum, need to be expanded for a larger \(S_{DFrame}\), but \(S_{DHeader}\) increases more slowly than \(S_{DFrame}\) does. For example, among all the Cyclic Redundancy Check (CRC) codes that feature a Hamming Distance [17] (HD) of four (can detect at most three errors), 8-bit CRC codes can protect at most 119 bits of payload, while 16-bit CRC codes can protect at most 32751 bits of payload [18][19]. Therefore, generally, the ratio of \(S_{DHeader}\) to \(S_{DFrame}\) decreases when \(S_{DFrame}\) increases. Nevertheless, this does not mean \(S_{DFrame}\) can be infinitely large. For the same BER, the larger \(S_{DFrame}\) is, the larger the FER is.
Even though by Constraint B the BER should be smaller than \(10^{-7}\), if \(S_{DFrame}\) is large enough, \(R_{NDFrame}\) can still impact \(Eff_{bandwidth}\). In addition, a larger \(S_{DFrame}\) also means a larger latency. During transmission, the receiver can only verify the correctness of a Data Frame after all the bits of the Data Frame are received. To guarantee a lossless transmission, before examining the entire Data Frame, not a single bit of the Data Frame can be delivered by the receiver. That is to say, the larger \(S_{DFrame}\) is, the larger the latency introduced by checksum verification. In summary, \(S_{DFrame}\) cannot be too small, otherwise the bandwidth overhead of the header will be too large. On the other hand, \(S_{DFrame}\) cannot be too large either, otherwise the bandwidth can also be reduced because of a high FER, and the latency will also be too large.

### The Data Frame

With the conclusions of Section A, we define the following Data Frame header fields:

### Syncword (SYN)

SYN is a 2-bit line code header. It is also used as a marker to indicate whether a Frame is a Data Frame or a Control Frame. Using the Verilog constant notation, in Data Frames, SYNs are set to 2'b01; in Control Frames, SYNs are set to 2'b10. A SYN of 2'b00 or 2'b11 is illegal, indicating that data is not aligned.

### Payload

The user payload.

### Meta Code

The Meta Code is used to indicate whether the Payload is not valid, partially valid, or all bytes of the Payload are valid. The end-of-packet marker is also encoded by the Meta Code. Table 1 shows the Meta Code encoding and the corresponding interpretation. With only two bits, the Meta Code cannot indicate how many bytes in the Payload are valid. It can only indicate whether all bytes of the Payload are valid. When not all bytes of the Payload are valid, the last byte of the Payload, which is certainly invalid as user data, becomes the Format Code.

### Format Code

The Format Code is an 8-bit field. It is used to indicate how many bytes in the Payload are valid when the Meta Code indicates that not all bytes of the Payload are valid. By combining the Meta Code and the Format Code, the count of the valid bytes in the payload and the end-of-packet marker mentioned in Section A can be represented with only a cost of two bits in the Data Frame header. Meanwhile, because the Format Code is limited to eight bits, it only works when the Payload size is not larger than 2048 bits (256 bytes).

Table 1: Meta Code Encoding (EOP: end of packet; ABV: all bytes valid)

| Meta Code | Payload Valid | EOP | ABV |
| --- | --- | --- | --- |
| 00 | No | No | No |
| 01 | Yes | No | Yes |
| 10 | Yes | Yes | Yes |
| 11 | Yes | Yes | No |

### Verification Code

The Verification Code is the exclusive-or result of the checksum and the Frame ID. It combines the functionalities of the checksum and the Frame ID, i.e., the Verification Code is used to verify the correctness of the Data Frames as well as to locate the error Frame when an error is detected. More details of the usage of the Verification Code will be illustrated in the next section. Figure 2 shows the Data Frame structure, where \(S_{DFrame}\)4 denotes the Data Frame size, \(S_{payload}\) denotes the size of the Payload, and \(S_{verification}\) denotes the size of the Verification Code.
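To make the field definitions concrete, the sketch below packs a 256-bit Data Frame in Python. It is an illustration only: the field order, the 12-bit CRC routine passed in as `crc12`, and the helper names are our assumptions, not the layout of Figure 2.

```python
# Illustrative packing of a 256-bit RIFL Data Frame (assumed field order, not Figure 2).
S_DFRAME, S_SYN, S_META, S_VERIF = 256, 2, 2, 12
S_PAYLOAD = S_DFRAME - S_SYN - S_META - S_VERIF        # 240 bits = 30 bytes

def meta_code(payload_valid: bool, eop: bool, all_bytes_valid: bool) -> int:
    """Meta Code encoding from Table 1."""
    if not payload_valid:
        return 0b00
    if not eop:
        return 0b01            # mid-packet frames carry a fully valid payload
    return 0b10 if all_bytes_valid else 0b11

def pack_data_frame(payload: bytes, valid_bytes: int, eop: bool,
                    frame_id: int, crc12) -> int:
    """Return the frame as an integer; crc12 is an assumed 12-bit CRC function."""
    buf = bytearray(payload.ljust(S_PAYLOAD // 8, b"\x00")[:S_PAYLOAD // 8])
    abv = valid_bytes == S_PAYLOAD // 8
    if valid_bytes > 0 and not abv:
        buf[-1] = valid_bytes                           # Format Code in the last payload byte
    meta = meta_code(valid_bytes > 0, eop, abv)
    body = (0b01 << (S_META + S_PAYLOAD)) \
           | (meta << S_PAYLOAD) \
           | int.from_bytes(buf, "big")                 # SYN = 2'b01 marks a Data Frame
    return (body << S_VERIF) | (crc12(body) ^ frame_id)  # Verification Code = checksum xor Frame ID
```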
We use Xilinx FPGAs for prototyping, and the available transceivers offer 32-bit, 64-bit and 128-bit interfaces [20][21][22]. To minimize the latency and complexity of data width conversion, \(S_{DFrame}\) should be a multiple of the interface width of the transceiver. In our prototype, we set \(S_{DFrame}\) to be a power of two, and no less than 128. According to Figure 2, we get:

\[\begin{split}S_{DFrame}&=S_{payload}+S_{DHeader}\\ &=S_{payload}+S_{verification}+4\end{split} \tag{10}\]

Footnote 4: We use bit as the size unit for the rest of this paper.

If we assume \(R_{NDFrame}\) is small, then \(R_{DFrame}\) is close to 1. Combining Equation 5 and Equation 10, we get:

\[Eff_{bandwidth}=1-\frac{S_{verification}+4}{S_{DFrame}} \tag{11}\]

As discussed in Section A, to minimize the latency and maximize the bandwidth efficiency, both \(S_{DFrame}\) and \(S_{verification}\) need to be small. Because \(S_{DFrame}\) is set to be a power of two, and no less than 128, and the Format Code can only support up to 2048 bits of Payload, the \(S_{DFrame}\) options are limited to: 128, 256, 512, 1024, and 2048. Let \(S_{FrameID}\) denote the size of the Frame ID field and \(S_{checksum}\) denote the size of the checksum. Because the Verification Code is the exclusive-or result of the checksum and the Frame ID, we get:

\[S_{verification}=Max(S_{FrameID},S_{checksum}) \tag{12}\]

A valid tuple of \((S_{DFrame},S_{verification})\) should satisfy the following requirements:

1. The size of the Frame ID should provide enough unique Data Frame IDs to cover all the Data Frames being sent during an RTT.
2. For any BER that is better than \(10^{-7}\), the Mean Time Before Failure (MTBF)5 associated with the checksum should be at least longer than the lifetime of the circuit, say 100 years.

Footnote 5: In this paper, we define MTBF as the time to make the system failure possibility equal to 1%.

The first requirement can be quantitatively described as:

\[2^{S_{FrameID}}\geqslant\frac{\lambda_{line}\times RTT}{S_{DFrame}} \tag{13}\]

The second requirement can be expressed by:

\[(1-FFR)^{\frac{\lambda_{line}\times MTBF}{S_{DFrame}}}=99\% \tag{14}\]

where FFR denotes the Frame Failure Ratio, representing the ratio of the error Frames that cannot be detected by verifying the checksum to the total number of Frames transmitted. In RIFL, we use a CRC code as the checksum. For an m-bit CRC code that features a Hamming Distance [17] (HD) of \(n+1\), it can detect all error Frames that carry no more than n error bits. If the number of the error bits is more than n, one over \(2^{m}\) of the error Frames cannot be detected. Therefore:

\[FFR=\frac{1}{2^{m}}\times\left(1-\sum_{i=0}^{n}P(i)\right) \tag{15}\]

where \(P(i)\) denotes the possibility of a frame carrying exactly i bits of errors:

\[P(i)=\binom{S_{DFrame}}{i}BER^{i}(1-BER)^{S_{DFrame}-i} \tag{16}\]

A wide range of CRC codes are listed in [19]. Let the line rate be 100 Gbps, the RTT be 500 ns, and the BER be \(10^{-7}\). Combining Equations 13, 14, 15, 16, and the CRC codes in [19], the minimal \(S_{FrameID}\) and \(S_{checksum}\) for different \(S_{DFrame}\) can be found in Table 2.
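As a sanity check on these sizing rules, the short script below evaluates Equations 13 through 16 for one candidate frame size. It is a sketch using the example inputs stated above (100 Gbps line rate, 500 ns RTT, BER of \(10^{-7}\)); the function names are ours, and the CRC width and Hamming distance must be taken from the codes listed in [19].

```python
import math

LINE_RATE = 100e9   # bits per second
RTT = 500e-9        # seconds
BER = 1e-7

def frame_id_bits(s_dframe):
    """Eq. 13: enough unique IDs to cover the frames in flight during one RTT."""
    return math.ceil(math.log2(LINE_RATE * RTT / s_dframe))

def p_exactly(i, s_dframe):
    """Eq. 16: probability that a frame carries exactly i bit errors."""
    return math.comb(s_dframe, i) * BER**i * (1 - BER)**(s_dframe - i)

def mtbf_years(s_dframe, crc_bits, hd):
    """Eqs. 14-15: time until the undetected-error probability reaches 1%."""
    detected = sum(p_exactly(i, s_dframe) for i in range(hd))   # up to HD-1 error bits
    ffr = (1 - detected) / 2**crc_bits
    frames = math.log(0.99) / math.log1p(-ffr)                  # frames until 1% failure
    return frames * s_dframe / LINE_RATE / (3600 * 24 * 365)

print(frame_id_bits(256))        # Frame ID bits needed for a 256-bit Data Frame
print(mtbf_years(256, 12, 4))    # MTBF in years with a 12-bit, HD-4 CRC
```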
Because the Payload is input from the user interface, and following the convention that the data width of the user interface should be a power of two, there should be a data width conversion module to convert the user input to the Payload. To minimize the latency and the complexity of the conversion module, the Payload should be byte-aligned:

\[S_{payload}\equiv 0\mod 8 \tag{17}\]

Because we have limited \(\text{S}_{DFrame}\) to a power of two and to be no less than 128, we get:

\[S_{DFrame}\equiv 0\mod 8 \tag{18}\]

Combining Equations 10, 17, 18, we get:

\[S_{verification}\equiv 4\mod 8 \tag{19}\]

The minimal \(\text{S}_{verification}\) and the corresponding \(\text{Eff}_{bandwidth}\) for various values of \(S_{DFrame}\) can be found in Table 3. Let 90% be the acceptance threshold of the bandwidth efficiency, then the available options for the Data Frame size are 256, 512, 1024, and 2048 bits, and \(S_{\textit{verification}}\) should always be 12 bits. Because \(S_{\textit{verification}}\) should be 12 bits, we extend the CRC code to 12 bits for stronger protection. We choose not to extend the Frame ID field, because a larger \(S_{\textit{FrameID}}\) means a larger \(S_{\textit{retrans}}\), which leads to a larger circuit area. In summary, we defined the Data Frame fields and the size of each field in this subsection.

### The Control Frame

As discussed in Section A, there should be Control Frames in RIFL to help maintain state transitions. Because the Control Frames are used much less than Data Frames, the size of the Control Frames does not have much effect on the protocol efficiency. Therefore, we do not need to further analyze the impact of the Control Frame size like we did for the Data Frame size. To minimize the complexity, the Control Frame size is set to be equal to the Data Frame size. Figure 3 shows the Control Frame structure, where \(\text{S}_{\textit{DFrame}}\) denotes the Data Frame size and \(S_{\textit{verification}}\) denotes the size of the Verification Code. The **SYN** and the **Verification Code** do the same thing in the Control Frames as they do in the Data Frames. The **Control Codes** are:

**Idle** This code indicates the sender is not in the normal data transfer state. This code is sent out when the sender is in the transition state between the pause state, the re-transmit state, and the normal state. Detailed explanations of each state will be introduced in the next section.

**Pause Request** This code is sent by the receiver when the link is out-of-sync. It notifies the sender to pause from sending data.

**Re-transmit Request** This code is sent by the receiver when a bad verification code is encountered. It tells the sender to switch from the normal data transmission to the re-transmission procedure.

### Summary

In this section, we analyzed the Frame structure's impact on the bandwidth efficiency and the latency of the protocol. We defined the structure of the Data Frames and the Control Frames based on our analysis. It is worth noting that the Frame sizes we chose are based on the interface data width of the transceivers we used for prototyping.
For other types of transceivers that offer different interface data widths, the same analysis can be done again to determine the best Frame size options.

## 4 Defining the RIFL Protocol

In this Section, we will introduce how RIFL operates with the Frames we defined in Section 3. By functionality, this section is divided as follows:

1. **The TX and the RX Protocol**: How the RIFL TX and RX sides operate.
2. **Re-transmission**: How re-transmission is done with the Verification Code we defined in Section 3.
3. **Flow Control** and **Clock Compensation**: Explanation of the flow control procedure and the clock compensation procedure.
4. **Channel Bonding**: How RIFL aggregates multiple transceivers to achieve higher line rates.

### The TX Protocol

There are six states for the TX logic:

* **Init**: In this state, invalid Data Frames are generated with Meta Code 2'b00, and Frame IDs from zero to the max6. The corresponding Verification Codes are also computed and inserted into each Frame. These invalid Data Frames will fill the re-transmission buffer during initialization. Throughout this state, the TX logic sends out back-to-back Pause Request Frames.

Footnote 6: The max value depends on how many bits are used for the Frame ID. E.g. if \(S_{\textit{FrameID}}\) is 8 bits, then the max value is 255. \(S_{\textit{FrameID}}\) can be at most 12 bits because \(S_{\textit{verification}}\) is set to 12 bits.

* **Send Pause**: Transmitting falls into this state when the RX logic detects that the link is out-of-sync, or right after the TX logic finishes initialization. Throughout this state, the TX logic sends out back-to-back Pause Request Frames.
* **Pause**: Transmitting falls into this state when Pause Request Frames are received by the RX logic. Throughout this state, the TX logic sends back-to-back Idle Frames.
* **Retrans**: Transmitting falls into this state when Re-transmit Request Frames are received by the RX logic, or a re-transmission is resumed from an interruption caused by higher priority events. In this state, the TX logic can send three types of Frames: **Re-transmitted Data Frames, Idle Frames or Re-transmit Request Frames**. More details will be elaborated in the upcoming Re-transmission subsection.

Figure 2: Data Frame Structure

* **Send Retrans**: Transmitting falls into this state when an error is detected by the RX logic and there is no other higher priority condition. Throughout this state, the TX logic sends out back-to-back Re-transmit Request Frames.
* **Normal**: The normal data transmission state. As discussed previously, the link should stay in this state for most of the time if the BER is within the designed operation range (\(10^{-7}\) in our case). In this state, the user is allowed to transmit data. When valid user data is input, the data is transformed into the Payload of one or multiple Data Frames. When the user does not input valid data, invalid Data Frames with Meta Code 2'b00 are generated. In other words, in this state, the TX logic constantly sends out back-to-back Data Frames and copies them to the re-transmission buffer. Whenever the user input is not valid, protocol-generated invalid Data Frames will be transmitted along the link to fill in the gaps.

Figure 4 shows the state transition diagram for the TX logic. Except for the Init state, all the other states follow the same transition logic.
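As a summary of the TX side, the sketch below gives a small behavioral model of the states and the frames each state emits. The priority order used by the transition helper is our reading of the text; the authoritative order is the one shown in Figure 4.

```python
# Simplified software model of the TX state machine. The event priority below
# is an assumption taken from the prose, not a transcription of Figure 4.
FRAMES_SENT = {
    "INIT":         "back-to-back Pause Request Frames (buffer pre-fill)",
    "SEND_PAUSE":   "back-to-back Pause Request Frames",
    "PAUSE":        "back-to-back Idle Frames",
    "RETRANS":      "re-transmitted Data / Idle / Re-transmit Request Frames",
    "SEND_RETRANS": "back-to-back Re-transmit Request Frames",
    "NORMAL":       "back-to-back Data Frames, copied to the re-transmission buffer",
}

def next_state(out_of_sync, pause_req_rx, retrans_req_rx, frame_error):
    """Pick the TX state from the RX event flags (highest priority first)."""
    if out_of_sync:
        return "SEND_PAUSE"
    if pause_req_rx:
        return "PAUSE"
    if retrans_req_rx:
        return "RETRANS"
    if frame_error:
        return "SEND_RETRANS"
    return "NORMAL"

state = next_state(False, False, True, True)   # errors detected in both directions
print(state, "->", FRAMES_SENT[state])
```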
### The RX protocol

There are in total five special events in RIFL: **Out-of-sync**, **Pause Request**, **Re-transmit Request**, **Frame Error** and **Flow Control**. The reactions of the TX logic to the first four events are already described in Section A. The Flow Control protocol will be introduced in Section D. The RX logic is responsible for monitoring such events and generating the event flags. Once an event is detected, the RX logic turns on the corresponding flag to notify the TX logic to make a proper reaction. There is no state in the RX logic. All the special events are monitored independently and concurrently. The priority order of these events is presented in Figure 4. To prevent a Frame that carries errors from being recognized as a Control Frame, eight consecutive Pause Requests or Re-transmit Requests need to be received by the RX logic to activate the Pause or Re-transmit control flag. The Out-of-sync flag is turned on whenever an illegal **Syncword** is received. The Frame Error flag is turned on whenever a Data Frame with a wrong Verification Code is received.

### Re-transmission

When both directions of the link are synchronized, the TX logic will switch between the Normal, Retrans and Send Retrans states. The re-transmission falls into three scenarios:

### No error for both directions

When there is no error for both directions of the link, both ends stay in the Normal state. In this scenario, the SYN of every Frame is always set to 2'b01 to represent a Data Frame. The Meta Code and the Payload are generated based on different scenarios of the user input. Every time a new Meta Code and Payload is generated, the 12-bit CRC checksum will be calculated. The Verification Code is then yielded by performing exclusive-or between the Frame ID and the checksum. After the TX logic sends out a Data Frame, the Frame ID will increment by one. Each Data Frame being sent out will also be copied to the re-transmission buffer. The re-transmission buffer is essentially a shift register: when a new entry is written, the oldest entry is removed. Because \(S_{retrans}\) is set to be equal to \(2^{S_{FrameID}}\), each entry in the re-transmission buffer holds a Frame with a unique Frame ID. When a new Frame is written into the buffer, the old Frame to be removed has the same Frame ID as the new Frame.

### Errors are detected in one of the directions

When errors are detected in only one of the directions, the endpoint where the errors are detected enters the Send Retrans state, and the other end enters the Retrans state. In the endpoint that is in the Send Retrans state, the Frame Error flag is raised, and its TX logic will send out back-to-back Re-transmit Request Frames. In the endpoint that is in the Retrans state, the Re-transmit Request flag will be raised after the eight most recently received Control Frames are all Re-transmit Requests. The TX logic will then perform the re-transmission procedure. Throughout the re-transmission procedure, the TX logic will send \(2.5\times 2^{S_{FrameID}}\) Frames. The first \(2\times 2^{S_{FrameID}}\) Frames are interleaved Idle Frames and Re-transmitted Data Frames. The last \(0.5\times 2^{S_{FrameID}}\) Frames are Idle Frames. After the last Frame of the re-transmission procedure is sent, if the Re-transmit Request flag is still raised, the TX logic will perform the re-transmission procedure all over again, until the Re-transmit Request flag is down.

### Errors are detected in both directions

When errors are detected in both directions, both endpoints will enter the Retrans state and start the re-transmission procedure.
Different from the situation where only one direction detects the errors, for this scenario, the first \(2\times 2^{S_{FrameID}}\) Frames will be Re-transmitted Data Frames interleaved with Idle Frames or Re-transmit Request Frames. The last \(0.5\times 2^{S_{FrameID}}\) Frames can also be either Idle Frames or Re-transmit Request Frames. Whether to send the Re-transmit Request Frames depends on whether the Frame Error flag is down or not. By interleaving the Idle/Re-transmit Request Frames with the Re-transmitted Data Frames in the first \(2\times 2^{S_{FrameID}}\) Frames, even if there are errors in both directions, both endpoints can perform re-transmission while sending re-transmission notifications at the same time. In addition, when one of the endpoints stops sending the Re-transmit Request Frames, it will take a half of the RTT for the last Re-transmit Request Frame to arrive at the other end, and only when the other end stops receiving Re-transmit Request Frames can it put down the Re-transmit Request flag. To cover this delay, the last \(0.5\times 2^{S_{FrameID}}\) Frames are designed to be buffer Frames.

Thus far, we have introduced the re-transmission procedure for the TX logic. On the RX side, there is also a procedure to verify if a Data Frame should be delivered to the user and if the Frame Error flag should be raised. Pseudo code of the verification procedure is shown in Listing 1.

Figure 3: Control Frame Structure

```
Input: SYN, MetaCode, Payload, VCode
Output: Frame_Valid, Frame_Error
Init:
    FrameID = 0
    Threshold_FrameID = 16
Always:
    Checksum = CRC12({MetaCode, Payload})
    if VCode == FrameID ^ Checksum:
        if SYN == 2'b01:
            FrameID += 1
            if FrameID == Threshold_FrameID:
                Threshold_FrameID += 1
                Frame_Valid = True
                Frame_Error = False
            else:
                Frame_Valid = False
    else:
        FrameID = Threshold_FrameID - 16
        Frame_Valid = False
        Frame_Error = True
```
**Listing 1** RX Verification Procedure

As shown in Listing 1, the RX logic keeps its own Frame ID counter (\(FrameID\)) and a threshold counter (\(Threshold_{FrameID}\)). \(FrameID\) is initialized as 0 and \(Threshold_{FrameID}\) is initialized as 16. When a Frame is received, the RX side will first calculate the CRC checksum of the Frame. The exclusive-or result of the checksum and \(FrameID\) will then be compared against the Verification Code in the Frame. If the two are not equal, it implies the Frame has errors and the verification has failed. The Frame Error flag will be raised and the Frame will not be delivered to the user. If they are equal, meaning the verification passed, the RX logic will then examine the Syncword. If the Syncword is 2'b10, meaning the Frame is a Control Frame, the RX verification logic will not do anything. If the Frame is a Data Frame that carries a Syncword of 2'b01, \(FrameID\) will then be compared against \(Threshold_{FrameID}\): only if \(FrameID\) is equal to \(Threshold_{FrameID}\) can the Data Frame be delivered to the user, and both \(FrameID\) and \(Threshold_{FrameID}\) will then increment by one. If \(FrameID\) is not equal to \(Threshold_{FrameID}\), then only \(FrameID\) increments by one, and the Frame is not delivered to the user. In the case that the verification fails, \(FrameID\) will be rolled back to \(Threshold_{FrameID}\) minus 16.
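A directly executable rendering of Listing 1 may make the rollback behavior easier to follow. The Python sketch below mirrors the control flow of the pseudo code; the CRC-12 polynomial and the 12-bit counter wrap are our own placeholder choices, not taken from RIFL's RTL.

```python
# Executable rendering of Listing 1. crc12() is a stand-in for the real 12-bit
# CRC used by RIFL; only the control flow is meant to be faithful.
def crc12(data: bytes, poly: int = 0x80F) -> int:   # placeholder polynomial
    crc = 0
    for byte in data:
        crc ^= byte << 4
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFF if crc & 0x800 else (crc << 1) & 0xFFF
    return crc

class RxVerifier:
    DATA_SYN = 0b01

    def __init__(self):
        self.frame_id = 0
        self.threshold = 16

    def receive(self, syn: int, meta_and_payload: bytes, vcode: int):
        """Return (frame_valid, frame_error), as in Listing 1."""
        checksum = crc12(meta_and_payload)
        if vcode == (self.frame_id ^ checksum) & 0xFFF:
            if syn != self.DATA_SYN:              # Control Frame: nothing to deliver
                return False, False
            self.frame_id = (self.frame_id + 1) & 0xFFF    # 12-bit wrap (assumed)
            if self.frame_id == self.threshold:
                self.threshold = (self.threshold + 1) & 0xFFF
                return True, False                # deliver the Frame to the user
            return False, False                   # correct, but out of sequence
        # Verification failed: roll back and let the TX side request re-transmission.
        self.frame_id = (self.threshold - 16) & 0xFFF
        return False, True
```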
The RX verification procedure is designed to deal with a special sequence of errors that could otherwise cause a false-positive verification result. Here is an example of the special sequence of errors: assume Frame 68 has an error. A re-transmission is requested. Meanwhile, the subsequent frames, such as Frame 69 and Frame 70, are already in flight. Because the 12-bit Verification Code is the exclusive-or result of the Frame ID and the CRC checksum, if either Frame 69 or Frame 70 has an error, they can be misrecognized as a correct Frame 68 - there is only a one-bit difference between the binary representations of 69 or 70 and that of 68. Also, because the TX logic will start re-transmission whenever the Re-transmit Request flag is raised, the re-transmission will not start exactly from Frame 68. Instead, it will start from a Frame sent before Frame 68. If some of the re-transmitted Frames before Frame 68 carry errors, they may also look like Frame 68 for the same reason. Thus, when an error is detected in Frame 68, the Frame ID will be rolled back to 52. We require the RX logic to see a correct sequence from Frame 52 to Frame 67 before accepting Frame 68. This means the RX logic must see a correct sequence of sixteen 12-bit Verification Codes. In this way, even a Frame with white noise (BER = 0.5) has only a chance of one over \((2^{12})^{16}\) to be misrecognized as Frame 68. For BER better than \(10^{-7}\), the probability of a false-positive is even more negligible.

### Flow Control

As we discussed in Section 1, congestion control should be done at the Network Layer. However, besides congestion control, flow control is still necessary - the receiver may not be able to receive data all the time, so a method for the receiver to notify the sender to stop transmitting data is needed. To provide flow control, a buffer is added between the RX logic and the user interface. A simple ON/OFF flow control mechanism is adopted for low complexity. When the buffer queue length exceeds the ON threshold (\(Thr_{ON}\)), the TX logic of the receiver will send out a flow control pause Frame2. When the buffer queue length drops below the OFF threshold (\(Thr_{OFF}\)), the TX logic of the receiver will send out a flow control resume Frame. The sender completely stops transmitting any data after receiving the flow control pause Frame, and it resumes transmitting at the line rate after receiving the flow control resume Frame.

Footnote 2: The flow control pause Frame is different from the Pause Request Control Frame

The size of the flow control buffer (\(S_{FC}\)) must be carefully chosen to prevent any buffer overflow or starving during a flow control process - buffer overflow will cause frame losses and buffer starving will cause bandwidth under-utilization. Because it takes a half of the RTT for a flow control notification Frame to arrive from the receiver to the sender, during this period, the flow control buffer must reserve enough space to receive the Frames sent from the sender at the line rate, hence:

\[S_{FC}-Thr_{ON}\geqslant\lambda_{line}*\frac{RTT}{2} \tag{20}\]

Figure 4: TX State Transition Diagram

Also, during this period, the buffer must also be able to deliver Frames to the user at the line rate, then we get:

\[\mathit{Thr}_{\mathit{OFF}}\geqslant\lambda_{\mathit{line}}*\frac{RTT}{2} \tag{21}\]

Lastly, \(\mathit{Thr}_{\mathit{ON}}\) and \(\mathit{Thr}_{\mathit{OFF}}\) must not be too close. Otherwise, frequently switching between ON and OFF will cause the flow control notification Frames to occupy too much bandwidth of the reverse channel.
For convenience, we set:

\[\mathit{Thr}_{\mathit{ON}}-\mathit{Thr}_{\mathit{OFF}}\geqslant\lambda_{\mathit{line}}*\frac{RTT}{2} \tag{22}\]

Combining Equations 20, 21 and 22, we get:

\[S_{\mathit{FC}}\geqslant\frac{3}{2}*\lambda_{\mathit{line}}*RTT \tag{23}\]

and we can set:

\[\mathit{Thr}_{\mathit{ON}}=\frac{2}{3}*S_{\mathit{FC}} \tag{24}\]

\[\mathit{Thr}_{\mathit{OFF}}=\frac{1}{3}*S_{\mathit{FC}} \tag{25}\]

After defining the flow control mechanism and the flow control buffer size, there is one remaining issue for flow control: bit errors. Every Frame, including the flow control notification Frames, can end up being corrupted during transmission. If there is a bit error in the flow control pause Frame, then it can result in a buffer overflow and a Frame loss. If there is a bit error in the flow control resume Frame, then the link may stop transmitting data forever. In our case, we extended the Meta Code encoding scheme and defined flow control notification Frames as special Data Frames. Previously, when the Meta Code is 2'b00, it indicates the Frame is an invalid Data Frame. Now, three types of Frames share Meta Code 2'b00. Only if the last byte of the Payload is 0 does it represent an invalid Data Frame. Otherwise, a last byte of 1 represents a flow control pause Frame and a last byte of 2 represents a flow control resume Frame. By defining the flow control notification Frames as special Data Frames, the flow control notifications are guaranteed to be delivered to the sender. Even when there are bit errors, the flow control notifications will only be delayed, not lost. During the delay time, regular data transmissions at both sides of the link will be completely stopped because of re-transmission. Hence, there will be no data loss because of the flow control pause notifications not taking effect on time.
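Equations 23-25 translate directly into a sizing helper for the flow control buffer. The sketch below is our own convenience function (the rounding of the thresholds up to whole Frames is an assumption, not specified by the text).

```python
# Sizing helper for the ON/OFF flow control thresholds (Equations 23-25).
# Rounding the thresholds up to whole Frames is our own choice.
def flow_control_sizing(line_rate_bps: float, rtt_s: float, s_dframe_bits: int):
    s_fc_bits = 1.5 * line_rate_bps * rtt_s        # Equation 23 (minimum S_FC)
    thr_on_bits = 2.0 / 3.0 * s_fc_bits            # Equation 24
    thr_off_bits = 1.0 / 3.0 * s_fc_bits           # Equation 25
    to_frames = lambda bits: -(-int(bits) // s_dframe_bits)   # ceiling division
    return {
        "S_FC (Frames)": to_frames(s_fc_bits),
        "Thr_ON (Frames)": to_frames(thr_on_bits),
        "Thr_OFF (Frames)": to_frames(thr_off_bits),
    }

# 100 Gbps line rate, 500 ns RTT, 256-bit Data Frames
print(flow_control_sizing(100e9, 500e-9, 256))
```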
### Clock Compensation

Although both sides of the link should operate at the same nominal line rate, the actual frequencies of their clocks will not be exactly the same because of the crystal oscillator frequency deviation. The endpoint with the faster clock will send data slightly faster than the slower end can receive. This will eventually overflow the slower end's receive buffer. With flow control, the issue can be resolved. However, it comes at the price of higher latency. Because flow control relies on the buffer queue length to slowly increase to \(\mathit{Thr}_{\mathit{ON}}\) for a pause, the Frames at the end of the queue will experience large latency. It would be ideal if the TX logic at the faster endpoint could proactively regulate its rate. Because clocks are embedded into the data streams for serial transmission between transceivers, and RIFL directly interfaces with the transceivers, we are able to compare the frequency of the recovered clock with the frequency of the local clock to determine whether and when the TX logic should pause for one cycle for clock compensation. Details of the clock compensation implementation will be introduced in Section 5.

### Channel Bonding

So far, we have introduced the single-lane protocol of RIFL. It works when both ends of the link only use a single transceiver for transmission. Nevertheless, although transceiver technology evolves rapidly, transceivers that support line rates above 100 Gbps are still rare to see. To achieve a bandwidth of hundreds of gigabits per second, channel bonding has to be done to aggregate the bandwidths of multiple transceivers.

In RIFL, when multiple transceivers are used, every single pair of the transceivers runs the single-lane protocol. The channel bonding logic is responsible for dispatching the user data to each lane and aggregating the received data from each lane. To simplify the logic, we divide the user data into segments; the size of each segment is \(S_{\mathit{payload}}\). At the TX side, the first segment goes to lane 1, the second goes to lane 2, and so forth. The same applies to the RX side: the Frame received from lane 1 is delivered first, followed by the Frame received from lane 2, and so forth. Because of the lane skew, lane 1 is not guaranteed to be the first lane to receive a Frame. The flow control buffer at each lane is used to overcome the lane skew. Details of the channel bonding implementation will be introduced in Section 5.

### Summary

In this section, we have defined the RIFL protocols. We first introduced how the TX and RX logic operates in general. We then added more details of re-transmission, flow control and clock compensation. Finally, we presented the channel bonding protocol. More details on the implementation of the protocols are presented in Section 5.

## 5 Implementation

In this section, we present the FPGA implementation of RIFL that is open sourced at [23]. RIFL is fully parameterized. Implementation options such as the Frame size and the transceiver line rate are exposed as synthesis parameters. For convenience, in this section, we demonstrate a four-lane implementation. In the implementation, each lane runs at 28 Gbps, and the Frame size is set to 256 bits.

### Top-Level Architecture

Figure 5 shows the top-level architecture. RIFL provides a pair of AXI4-Stream [24] interfaces to the user. Both interfaces consist of TDATA, TVALID, TKEEP, TLAST, and TREADY fields. With these fields, each flit8 of the user data stream carries all the essential information we discussed in Section A. Adjacent to the user interfaces is the AXI4-Stream data width conversion block. It converts the stream width from any power of two to a multiple of \(S_{\textit{payload}}\).

Figure 5: RIFL Top-Level Architecture

Footnote 8: flit: the data being transmitted in a single clock cycle

When more than one transceiver is used, the AXI4-Stream data width converter will then be connected to the channel bonding module. In the TX path, the channel bonding module splits a single data stream into multiple data streams. In the RX path, it does the inverse. To provide more flexibility, two different channel bonding methods can be used in the channel bonding module: Temporal Channel Bonding and Spatial Channel Bonding. Temporal Channel Bonding splits a single data stream that runs at a higher frequency into multiple data streams that run at a lower frequency. After being split, the data width remains unchanged. Spatial Channel Bonding splits a single wider data stream into multiple narrower data streams and it does not change the frequency. In the example shown in Figure 5, both methods are used: the 512-bit AXI4-Stream is first converted to a 480-bit AXI4-Stream. Then, inside of the channel bonding module, it is split into two 480-bit AXI4-Streams running at half of the original frequency. Finally, each of the 480-bit AXI4-Streams is split into two 240-bit streams. With the two channel bonding methods, more user interface data width options are provided. For a four-lane implementation with a Frame size of 256 bits, the data width can be 256 bits, 512 bits or 1024 bits.
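A behavioral sketch of the per-segment dispatch and aggregation order described above is given below (our own model; real lane skew is absorbed by the per-lane flow control buffers, which are reduced here to simple FIFOs).

```python
# Behavioral model of the channel bonding order: segment i goes to lane i mod N
# on the TX side, and lanes are drained in the same order on the RX side.
from collections import deque

def split_into_segments(data: bytes, seg_bytes: int):
    return [data[i:i + seg_bytes] for i in range(0, len(data), seg_bytes)]

def tx_dispatch(segments, num_lanes: int):
    lanes = [deque() for _ in range(num_lanes)]
    for i, seg in enumerate(segments):
        lanes[i % num_lanes].append(seg)        # lane 1 first, then lane 2, ...
    return lanes

def rx_aggregate(lanes):
    out = []
    while any(lanes):
        for lane in lanes:                      # deliver lane 1 first, then lane 2, ...
            if lane:
                out.append(lane.popleft())
    return b"".join(out)

data = bytes(range(150))                        # five 30-byte segments over 4 lanes
lanes = tx_dispatch(split_into_segments(data, seg_bytes=30), num_lanes=4)
assert rx_aggregate(lanes) == data
```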
When implementing RIFL on a low speed device such as a low end FPGA, wider interfaces with lower frequencies can help timing closure, while on a high speed device, narrower interfaces are ideal for a smaller circuit area. If there is only a single lane, then the channel bonding module will be omitted. The AXI4-Stream data width converter will directly connect to the single-lane logic. Details of the single-lane architecture will be presented in the next subsection.

### Single-Lane Architecture

Figure 6 shows the single-lane architecture of RIFL. As shown in the figure, there are two clock domains: the RX Domain is driven by the recovered clock generated by the transceiver, and the TX Domain is driven by two local clocks - a high speed clock drives the transceiver-facing logic and a low speed clock drives the rest of the protocol logic. The high and low speed clocks are derived from the same clock source. The frequency of the faster one is a power of two times the frequency of the slower one. Hence, the two TX clocks are synchronous to each other. In the example, the high speed clock runs at 437.5 MHz and the low speed clock runs at 109.4 MHz.

In the RX domain, the Lane Aligner converts the unaligned transceiver output stream to an aligned stream by locating the position of the Syncword. The Lane Aligner is essentially a two-level cascaded multiplexer chain. After the Lane Aligner, the Verification Code Validator is used to verify the correctness of the Verification Code. It is responsible for raising the Frame Error flag. The scrambler and the descrambler used in RIFL are implemented as linear-feedback shift registers (LFSRs). The standard 33-bit scrambler code \((1+x^{13}+x^{33})\) is adopted for good DC balance and transition density [25]. After descrambling, the Clock Domain Crossing (CDC) module filters out the non-Data Frames by checking the Syncword. It then converts the filtered stream from the RX Domain to the TX Domain using a low latency asynchronous FIFO. The Control Event Monitor and the Flow Control Monitor are responsible for checking every Frame and generating the Pause Request flag, the Re-transmit flag and the flow control ON/OFF notifications.

In the TX domain, the modules that are closer to the transceiver are driven by the high speed clock. They are the scrambler and the Verification Code generator. A pair of GT data width converters is used to perform the conversion between the high-speed narrow stream used by the transceiver and the low-speed wide stream used by the rest of the protocol logic. The modules driven by the low-speed clock are the TX Controller, the Meta Code Encoding and Decoding modules, and the Flow Control Buffer. The finite-state machine (FSM) in the TX Controller implements the TX logic described in Section A. The Meta Code Encoding and Decoding modules convert the AXI4-Stream signals to the Meta Code signals. The Flow Control Buffer is a synchronous FIFO. It monitors its buffer queue length and issues flow control requests to the TX Controller.

Finally, the Clock Compensation module takes the TX clock and the RX clock from the transceiver, and a free-running clock as inputs. Each transceiver clock drives a Gray code counter. Both counters are then brought to the free-running clock domain for comparison. If the counter of the TX clock increases faster than the counter of the RX clock, then the difference of the counter values will be kept in a register.
Whenever the difference increases, the Clock Compensation module will issue N cycles of pause signals to the TX controller. N is equal to the change of the difference between comparisons.

## 6 Performance Evaluation

We have validated the functional correctness of RIFL on both Intel and Xilinx devices for line rates from 25 Gbps to 200 Gbps. In this section, we present the performance results of RIFL that we obtained from Xilinx devices. We will first introduce our test setup. Then, we will compare the bandwidth efficiency, the latency and the resource usage between RIFL and Xilinx's Aurora [9], Interlaken [26] and 100G Ethernet (CMAC) [27] implementations. We will then provide RIFL's performance results under various BERs to demonstrate RIFL's reliability.

### Experimental setup

Our prototype is implemented on Fidus Sidewinder-100 (SW100) [28] boards. There are two QSFP28 ports on the board, connected to an XCZU19EG FPGA. Ten-meter Active Optical Cable (AOC) and 3-meter Direct Attach Copper (DAC) cables are used for the QSFP28 connections. For the sake of simplicity, we only present the results for the AOC in this section. A software-defined AXI4-Stream traffic generator is built to generate the testing traffic. This traffic generator allows AXI4-Stream traffic to be defined cycle by cycle in CSV format. The CSV file is then encoded into binary format and moved from an X86/ARM host to the FPGA memory. The hardware driver of the traffic generator retrieves the traffic data from the FPGA memory, performs decoding, and generates the traffic in a cycle-accurate manner according to the CSV definition. A traffic validator is also built. It can cache the transmitted packets and compare them against the loopback traffic to verify the correctness. It also internally time-stamps each packet to monitor the bandwidth and latency.

Two different tests are designed for the performance comparison and the reliability test. The setup shown in Figure 7(a) is used for the performance comparison between the RIFL implementations and the Xilinx cores. The designs under test (DUTs) are placed in two FPGA boards to represent their general use case. The bandwidth efficiency and the RTT are measured in the first board. The point-to-point latency is yielded by halving the RTT - assuming the latencies for both directions are the same. For fair comparison, all DUTs use four Xilinx GTY transceivers. The line rate of each transceiver is set to 25.78 Gbps.

The reliability test setup is shown in Figure 7(b). In this test, the same BER is imposed on both directions. To make the error patterns of the two directions independent, their random seeds are set to different values. In this case, the point-to-point latency cannot be considered as a half of the RTT anymore, because the link is not symmetric. For example, in a round trip, errors may happen in one of the directions, causing the latencies of both directions to be unequal. Therefore, the point-to-point latency has to be directly measured. As a result, both RIFL cores are placed in the same FPGA. Traffic generators and traffic validators are connected to both RIFL cores. The bandwidth efficiency and the average latency are computed by averaging the test results of both directions. The tail latencies are computed from the aggregated results of both directions. In the reliability test, each RIFL core uses four GTYs [21] running at 28 Gbps. The aggregated line rate is 112 Gbps, which is the maximum line rate a QSFP28 cable can support.
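For the reliability test, the error injection in each direction can be modeled as independent Bernoulli bit flips at the target BER, with distinct seeds so that the two directions see uncorrelated error patterns. The sketch below is a software model of the stimulus only; the hardware error injector is not described at this level of detail in the text.

```python
# Software model of the reliability-test stimulus: flip each bit independently
# with probability `ber`, using a separate seeded RNG per direction.
import random

def inject_errors(frame: bytes, ber: float, rng: random.Random) -> bytes:
    out = bytearray(frame)
    for bit in range(len(out) * 8):
        if rng.random() < ber:
            out[bit // 8] ^= 1 << (bit % 8)
    return bytes(out)

rng_dir_a = random.Random(1)          # direction A
rng_dir_b = random.Random(2)          # direction B, independent error pattern
frame = bytes(32)                     # one 256-bit Data Frame
corrupted_a = inject_errors(frame, ber=1e-5, rng=rng_dir_a)
corrupted_b = inject_errors(frame, ber=1e-5, rng=rng_dir_b)
```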
### RIFL vs Aurora vs Interlaken vs CMAC

In this subsection, we compare the bandwidth efficiency, latency, and resource usage performance between RIFL, Aurora, Interlaken and CMAC. For the bandwidth efficiency comparison, we test the bandwidth efficiency results for different user payload sizes. The payload sizes sweep from 1 byte to 8192 bytes9, with a step of one byte. When the size of a payload is larger than the maximum frame size of the DUT (32 bytes for RIFL256, 64 bytes for RIFL512, Interlaken and Aurora, 9600 bytes for CMAC), it is divided into multiple frames for transmission. For each payload size, a traffic of ten gigabytes is sent. The traffic generator saturates the available bandwidth of the DUT by sending out a flit of traffic whenever the DUT can accept one.

Footnote 9: CMAC starts at 64 bytes because its minimal accepted payload size is 64 bytes.

Figures 8(a), 8(b) and 8(c) show the bandwidth efficiency comparison between RIFL, Aurora, Interlaken and CMAC. In the figures, RIFL256 represents the RIFL implementation with a Frame size of 256 bits and RIFL512 represents RIFL with a Frame size of 512 bits. To preserve more details for small payload sizes, the results for payload sizes that are larger than 1500 bytes are not included in the figures. As the figures show, in terms of bandwidth efficiency, from the best to the worst, it is CMAC, RIFL512, RIFL256 and Interlaken. Unlike the zigzag curves of the other three cores, CMAC shows a much smoother curve. This is because for RIFL, Aurora and Interlaken, if the payload size is not a multiple of the user interface data width, then for the last flit of the packet, only a fraction of the user interface will receive valid data. After receiving the partially valid flit, the entire flit is fed into the pipeline, and the invalid bits are replaced with bubbles. Meanwhile, for CMAC, the data received from the user interface is first buffered, and is then reconstructed. The last flit of packet N can be concatenated with the first flit of packet N+1 to eliminate the pipeline bubbles as much as possible. While buffering and reconstructing benefit the bandwidth efficiency, they come with a trade-off in latency and complexity.

Figure 6: RIFL Single-Lane Architecture

Figure 7: Performance Test Setups

Figure 8: Performance Comparison between RIFL, Aurora, Interlaken and CMAC

Figure 9: Bandwidth and Latency under different BERs

For the latency comparison, the same traffic patterns are used. As with the bandwidth comparison, the traffic generator saturates the available bandwidth of the DUT. Figure 8(d) shows the point-to-point latency comparison result. From the best to the worst, it is RIFL256, RIFL512, Aurora, CMAC and Interlaken. For CMAC, as previously mentioned, by buffering and reconstructing the user packets, the latency is increased. The latency for small packets varies significantly more than for large ones. For Aurora and Interlaken, without knowing their implementation details, we cannot infer what makes up their latency. However, we are confident that it is our micro-architecture optimizations mentioned in the previous sections that make the RIFL cores the lowest latency implementations.

Table 4 shows the resource usage comparison between three different implementations of RIFL and Aurora. In Table 4, RIFL(X,Y) represents RIFL with a Frame size of X bits and a user interface width of Y bits. Interlaken and CMAC are not included in the resource usage comparison because they are both hard cores, i.e., they are not implemented in FPGA soft logic.
It can be learned from the table that RIFL uses more resources than Aurora. One of the main reasons is that RIFL adds the re-transmission buffer and the flow control buffer for reliability. Another reason is that our FPGA prototype is not fully optimized for resource usage. For example, the data widths of BRAMs in the Sidewinder board are at most 64 bits while the buffer data width in RIFL is equal to its Frame size, being at least 256 bits. Although the capacity of a single BRAM is enough for the flow control buffer, we have to use multiple BRAMs for enough data width. Both reasons are related to the FPGA itself. If RIFL is hardened, the resource usage can be significantly reduced.

### Reliability Test

In this subsection, we present the bandwidth ratio, latency, and MTBF results of RIFL256 under different BERs. The bandwidth ratio is the ratio of the bandwidth under the current BER to the bandwidth of an error-free link. In the test, the size of the traffic is set to ten gigabytes. The traffic consists of mixed length packets. Payload sizes are randomly distributed from 1 byte to 8192 bytes. The BERs sweep from \(10^{-12}\) to \(10^{-5}\), with a step of \(10^{0.25}\).

As shown in Figures 9(a) and 9(b), the bandwidth and latency of RIFL do not degrade until the BER increases beyond about \(10^{-7}\). The bandwidth ratio starts to drop when the BER is \(5.6\times 10^{-10}\), and it drops to 96.3% when the BER is \(10^{-7}\). The results agree with the theoretical calculation result of Equation 11. The latency of RIFL starts to increase when the BER is worse than \(1.7\times 10^{-6}\). When the BER is better than \(10^{-7}\), the average latency and the tail latencies remain within 107 nanoseconds. This also agrees with the theoretical calculation. As we discussed in Section C, during a re-transmission, even a Frame of white noise is practically impossible to be mis-detected as a correct Frame. Therefore, for RIFL, Equation 14 should be modified as:

\[(1-FFR)^{\frac{\lambda_{actual}\times\text{MTBF}}{S_{DFrame}}}=99\% \tag{26}\]

where \(\lambda_{actual}\) denotes the actual bandwidth. With the bandwidth result, the MTBF can be calculated. As shown in Table 5, when the BER is \(10^{-7}\), the MTBF is \(1.88\times 10^{7}\) years. Therefore, it is safe to claim that RIFL is reliable for any BER that is better than \(10^{-7}\).

### Cross-Vendor Communication

We have successfully validated RIFL on a link between an Intel Agilex device and a Xilinx Virtex UltraScale+ device.

### Summary

In this section, we compare the latency and bandwidth efficiency results between two implementations of RIFL and three other Link Layer protocol implementations. We show that RIFL has the best latency and second best bandwidth efficiency while it is the only protocol that ensures lossless transmission. We also show that RIFL can keep good performance and a long MTBF when the BER is better than \(10^{-7}\).

## 7 Related Work

In this section we describe the works that are most relevant to RIFL. Ethernet [13] was introduced in the 1980s and it is the most common protocol used in modern data centers [12]. In the three-layer model we introduced in Section 1, Ethernet includes not only Layer 1 functionalities, but also some Layer 2 functionalities, such as switching. Ethernet (Layer 1) allows variable frame sizes from 72 bytes to 1530 bytes (some implementations allow jumbo frames larger than 9000 bytes, but that is not compatible with the IEEE 802.3 standard). A 32-bit CRC is included in every Ethernet frame, enabling error detection but not error correction.
\begin{table} \begin{tabular}{|l l l l l|} \hline Protocol & LUTs & Flip Flops & BRAM36Ks & DSPs \\ \hline RIFL(256,256) & 15308 & 15935 & 16 & 0 \\ \hline RIFL(256,1024) & 20048 & 14098 & 16 & 0 \\ \hline RIFL(512,512) & 28995 & 28960 & 32 & 0 \\ \hline Aurora & 10192 & 9447 & 4 & 0 \\ \hline \end{tabular} \end{table} Table 4: Resource Comparison

\begin{table} \begin{tabular}{|l|l|} \hline \(BER\) & \(MTBF\) (year) \\ \hline 1.00E-11 & 1.81E+23 \\ \hline 1.00E-09 & 1.81E+15 \\ \hline 1.00E-07 & 1.88E+8 \\ \hline 1.00E-05 & 6.33 \\ \hline \end{tabular} \end{table} Table 5: \(MTBF\) vs \(BER\)

Any re-transmission protocol working on top of Ethernet has to be end-to-end, which means Constraint C is not met anymore. Moreover, the re-transmission buffer has to be large enough to handle a burst of the maximum-size frames. To summarize, a re-transmission protocol working on top of Ethernet would be more complex and less efficient than RIFL. Also, the experimental results in Section 6 show that RIFL performs better than CMAC, which is the Xilinx 100G Ethernet implementation [27].

Aurora [9] is a link layer protocol developed by Xilinx. It is made for point-to-point communication between FPGAs. There are two versions of Aurora, using two different line codes: 8b/10b for lower line rates and 64b/66b for higher line rates. The user payload is broken into multiple eight-byte frames called Data Blocks. The remaining bytes are transmitted using a special frame called the Separator Block. The Separator Block serves as an indicator of the end of a packet. A 32-bit CRC code is used in Aurora for error detection. Flow control directives are also provided.

Interlaken [8] was invented by Cisco Systems and Cortina Systems. It uses 64b/67b encoding for better DC balance. There are two methods of packetization for Interlaken: BurstMax and BurstShort. The user payload is first broken into multiple 64-byte blocks and then transmitted using the BurstMax method. The remaining bytes are transmitted using BurstShort. The size of BurstShort can be from 32 bytes to 56 bytes, with 8-byte increments. Both BurstMax and BurstShort are ended with an 8-byte block named the Control Word. A 24-bit CRC code is integrated into the Control Word. Interlaken also provides in-band and out-of-band flow control, as well as out-of-band re-transmission.

Sanchez Correa et al. [10] create a protocol stack for FPGA-based high performance computing. Their Layer 1 is based on the 10 Gigabit Media Independent Interface (XGMII), limiting the throughput per lane to 10 Gbps. Their work is based on the assumption that the link channels are error free, hence reliability is not taken care of at all. None of the related works described here can provide or implement the low-latency, high-bandwidth and, especially, reliable protocol that we require for our Layer 1 link layer protocol.

## 8 Conclusion

We have presented RIFL, a low latency and reliable Link Layer network protocol. Because of its novel in-band re-transmission protocol, RIFL is capable of providing lossless point-to-point links with ultra-low latency and high bandwidth. We implemented RIFL on Sidewinder boards and showed that at the line rate of 112 Gbps, a point-to-point latency of approximately 100 nanoseconds is achieved. We have also demonstrated that RIFL is capable of correcting all the data corruptions for standard point-to-point links. With RIFL at the bottom layer, there is no need for the upper layer protocols to deal with any checksum.
Therefore, the logic of the upper layer protocols can be simplified, and more resources can be used to deal with congestion control. This suggests that it is feasible to build a low-latency, high-bandwidth network for a data center environment based on RIFL. Our future work will address the Network Layer to enable congestion-free multi-hop communication. ## 9 Acknowledgements This work is generously supported by Xilinx, Alibaba and NSERC. The authors declare that there is no conflict of interest regarding the publication of this paper.
2306.17655
Groupoid morphisms as an algebraic structure for nonautonomous dynamics
We present groupoid morphisms as an algebraic structure for nonautonomous dynamics, as well as a generalization of group morphisms, which describe classic dynamical systems. We introduce the structure of cotranslations, as a specific kind of groupoid morphism, and establish a correspondence between cotranslations and skew-products. We give applications of cotranslations to nonautonomous equations, both in differences and differential. We obtain results about the differentiability of cotranslations, as well as dimension invariance and diagonalization (through a generalized notion of kinematic similarity) for a partial version of them, admitting noninvertible transformations.
Néstor Jara
2023-06-30T13:39:04Z
http://arxiv.org/abs/2306.17655v3
# Groupoid morphisms as an algebraic realization of nonautonomous dynamics ###### Abstract. We present groupoid morphisms as an algebraic realization for nonautonomous dynamics, as well as a generalization of group morphisms, which describe classic dynamical systems. We state a correspondence between groupoid morphisms and skew-products. Finally, we give applications of groupoid morphisms to nonautonomous equations, both in differences and differential. Key words and phrases:Nonautonomous dynamics, Dynamical systems, Groupoids 2020 Mathematics Subject Classification: 18B40, 22A22, 37B55, 37C60 This research has been partially supported by ANID, Beca de Doctorado Nacional 21220105. ## 1. Introduction Consider a category \(\mathscr{C}\) (for instance, topological spaces, vector spaces, Banach spaces, among others) and an element \(X\) on said category. We also consider * \(\mathbb{B}_{X}\) the collection of all the morphisms of the category \(\mathscr{C}\) of \(X\) on itself. * \(\mathbb{A}_{X}\) the collection of all invertible elements of \(\mathbb{B}_{X}\) (whose inverse is also in \(\mathbb{B}_{X}\)). It is easy to see that \(\mathbb{A}_{X}\) has group structure, with the composition. Depending on the category \(\mathscr{C}\), both sets \(\mathbb{B}_{X}\) and \(\mathbb{A}_{X}\) may have more structure, but for now we keep it general. A dynamical system, independently of the category \(\mathscr{C}\), considers a group \(G\) (with maybe some extra structure) and a group morphism \(\gamma:G\to\mathbb{A}_{X}\). Particularly, in the topological case with \(X\) a topological space and \(G\) a topological group (for simplicity, let us consider both \(X\) and \(G\) to be locally compact and Hausdorff), we set on \(\mathbb{A}_{X}\) the compact-open topology [2, Definition I, p. 301], and the group morphism \(\gamma\) is required to be continuous. Equivalently, we may define that a left (topological) dynamical system is a triple \((X,G,\alpha)\), where \(X\) and \(G\) satisfy the same conditions as before and \(\alpha\) is a continuous _left action_ of \(G\) on \(X\), _i.e._ a map \(\alpha:G\times X\to X\) verifying that each \(\alpha(g,\cdot)\) is a homeomorphism of \(X\) on itself and \[\alpha(g,\alpha(h,x))=\alpha(gh,x),\qquad\forall\,g,h\in G,\,x\in X.\] It is easy to see that by setting \(\hat{\alpha}:G\to\mathbb{A}_{X}\) by \(\left[\hat{\alpha}(g)\right](x)=\alpha(g,x)\) we obtain a continuous group morphism. On the other hand, we can define _right actions_ as maps \(\beta:X\times G\to X\) verifying that each \(\beta(\cdot,g)\) is a homeomorphism of \(X\) on itself and \[\beta(\beta(x,g),h)=\beta(x,gh),\qquad\forall\,g,h\in G,\,x\in X.\] Once more, by setting \(\hat{\beta}:G\to\mathbb{A}_{X}\) by \(\left[\hat{\beta}(g)\right](x)=\beta(x,g^{-1})\) we obtain a continuous group morphism (although, if \(G\) is Abelian, we can just use \(g\) instead of \(g^{-1}\)). Thus, both left and right actions describe dynamical systems and the use of one over the other is just a matter of convenience in notation. This correspondence between left or right actions and their algebraic counterpart on group morphisms is a well known fact at the basis for classic dynamics. A very important on its own right case of dynamical system is given by the flow of solutions of an autonomous differential equation (satisfying standard existence and uniqueness conditions). To rephrase it, an autonomous differential equation defines a continuous action of \(\mathbb{R}\) on (for instance) \(\mathbb{R}^{d}\). 
We usually call such a system an autonomous dynamics. However, as soon as we consider a nonautonomous differential equation, dynamical systems (hence group morphisms and actions) no longer properly describe the dynamics given by the flow of solutions. In other words, nonautonomous dynamics [5] do not have a widely discussed analogous notion of _action_, even less what could be its algebraic counterpart. The main goal of this paper is to present an algebraic generalization of dynamical systems, which can be applied to describe the flows given by nonautonomous equations (both differential and in differences).

The paper is organized as follows. In the second section we study the structure of _skew-product dynamical systems_, objects which, to the best of our knowledge, emerged for the first time in 1950 with H. Anzai [1], who used them to describe a certain ergodic dynamic, and which were later, in 1965, connected to differential equations thanks to the work of R. K. Miller [7]. This concept refers to a generalization of dynamical systems given by left actions, but instead of considering one action, it uses a family of action-like functions with a certain compatibility relation (we give more details later). In the third section we present the structure of groupoids and groupoid morphisms, which will give an algebraic realization of the dynamics that can be represented by skew-products. We also state some results regarding the relation of groupoid morphisms to discrete nonautonomous dynamics. In the final section we study differentiable groupoid morphisms, give some basic properties and give an application to the existence of solutions to linear nonautonomous differential equations on Banach spaces.

## 2. Skew-product dynamical systems

In this section we present skew-product dynamical systems. Although we present them in the topological case, a similar structure can be defined and studied for objects in different categories. As described by R. J. Sacker [10] at a symposium in 1976, a (topological) skew-product is constituted from a locally compact Hausdorff topological space \(X\) and a locally compact Hausdorff topological group \(G\); both will be fixed in this section unless stated otherwise. Thus, in this context, \(\mathbb{A}_{X}\) denotes the topological group of all homeomorphisms of \(X\) on itself, given the compact-open topology. The main characteristic of this generalization of dynamical systems is that instead of a unique group action, we consider a family of action-like maps. It is worth noting that in [4, 9, 10] the groups are \(\mathbb{R}\) or \(\mathbb{Z}\), while here we present the construction for any locally compact Hausdorff topological group.

For \(A,B\) locally compact Hausdorff spaces, we set \(\mathcal{C}\big{(}A;B\big{)}\) the space of continuous functions from \(A\) to \(B\), given the compact-open topology. Let us consider the evaluation maps:

* \(\mathfrak{e}:\mathcal{C}\big{(}G\!\times\!X;X\big{)}\times G\times X\to X\), \((\psi,g,x)\mapsto\psi(g,x)\),
* \(\mathfrak{e}_{G}:\mathcal{C}\big{(}G;X\big{)}\times G\to X\), \((\varphi,g)\mapsto\varphi(g)\),
* \(\mathfrak{e}_{X}:\mathcal{C}\big{(}X;X\big{)}\times X\to X\), \((\varphi,x)\mapsto\varphi(x)\),

which are all continuous [2, Corollary I, p.
303], as well as the partial evaluation map: * \(\widetilde{\mathfrak{e}}_{G}:\mathcal{C}\big{(}G\!\times\!X;X\big{)}\times G \rightarrow\mathcal{C}\big{(}X;X\big{)}\), \((\psi,g)\mapsto\psi(g,\cdot)\), * \(\widetilde{\mathfrak{e}}_{X}:\mathcal{C}\big{(}G\!\times\!X;X\big{)}\times X \rightarrow\mathcal{C}\big{(}G;X\big{)}\), \((\psi,x)\mapsto\psi(\cdot,x)\), which is also continuous [2, Corollary II, p. 303]. For a subspace \(Y\subset\mathcal{C}\big{(}G\!\times\!X;X\big{)}\), we define \[Y_{X}:=\widetilde{e}_{X}(Y\!\times\!X)\subset\mathcal{C}\big{(}G;X\big{)}, \quad\text{and}\quad Y_{G}:=\widetilde{e}_{G}(Y\!\times\!G)\subset\mathcal{C} \big{(}X;X\big{)},\] both as topological subspaces. It is easy to see that \[\mathfrak{e}(\psi,g,x)=\mathfrak{e}_{G}\left(\widetilde{\mathfrak{e}}_{X}( \psi,x),g\right)=\mathfrak{e}_{X}\left(\widetilde{\mathfrak{e}}_{G}(\psi,g), x\right),\quad\forall\,\psi\in Y,g\in G,x\in X.\] **Definition 2.1**.: _We say that a space of functions \(Y\subset\mathcal{C}\big{(}G\!\times\!X;X\big{)}\) is **admissible**, if:_ * \(Y_{G}\subset\mathbb{A}_{X}\)__ * \(\psi(e,x)=x\) _for every_ \(\psi\in Y\) _and_ \(x\in X\)_, where_ \(e\) _is the unit of_ \(G\)_._ _In [9], such a space \(Y\) is called a **Hull**._ On the other hand, if we consider the right action \(\theta\) of \(G\) on itself by translations, _i.e._ \[\theta:G\times G\to G,\quad(g,h)\mapsto gh,\] it lifts to a left action \(\Theta\) of \(G\) on spaces of functions with domain \(G\). In particular, we have \(\Theta:G\times\mathcal{C}\big{(}G;X\big{)}\to\mathcal{C}\big{(}G;X\big{)}\) given by \[\big{[}\Theta(h,\varphi)\big{]}\,(g)=\varphi\,\big{(}\theta(g,h)\big{)}= \varphi(gh),\quad\forall\,\varphi\in\mathcal{C}\big{(}G;X\big{)},\,g,h\in G,\] and an easy topology exercise shows that \(\Theta\) is continuous. For an admissible collection \(Y\), \(Y_{X}\) is contained on \(\mathcal{C}\big{(}G;X\big{)}\), thus we may consider the saturation \(\widetilde{Y}_{X}:=\Theta\big{(}G\times Y_{X}\big{)}\), which is invariant under the action \(\Theta\), hence \(\left(\widetilde{Y}_{X},G,\Theta\right)\) is a (left) dynamical system. With this system, we may write the map \(\widetilde{\Theta}:X\times Y\times G\times G\to X\) by \[\widetilde{\Theta}(x,\psi,g,h)=\left[\Theta\left(h,\widetilde{\mathfrak{e}}_ {X}(\psi,x)\right)\right](g)=\psi(gh,x) \tag{2.1}\] Now, consider a (topological) dynamical system \((Y,G,\sigma)\), given by a continuous left action \(\sigma:G\times Y\to Y\). The _skew-flow_ associated to it is the map \(\pi:X\times Y\times G\to X\times Y\) given by \(\pi=\mathfrak{e}\times\sigma\), that is \[\pi(x,\psi,h)=\big{(}\mathfrak{e}(\psi,h,x),\sigma(h,\psi)\big{)}=\big{(}\psi( h,x),\sigma(h,\psi)\big{)}\,,\quad\forall\,\psi\in Y,\,h\in G,\,x\in X,\] which is usually the way in which skew-products are depicted on [9]. With this system, we may write the map \(\widetilde{\pi}:X\times Y\times G\times G\to X\) by \[\widetilde{\pi}(x,\psi,g,h)=\left[\sigma\big{(}h,\psi\big{)}\right]\left(g, \psi\big{(}h,x\big{)}\right),\quad\forall\,\psi\in Y,\,g,h\in G,\,x\in X. \tag{2.2}\] Now, we have all the components of a skew-product, all that is left is to state the compatibility of the systems given by \(\Theta\) and \(\sigma\). 
This means that the maps \(\widetilde{\Theta}\) and \(\widetilde{\pi}\) in (2.1) and (2.2), respectively, must coincide, _i.e._

\[\big{[}\sigma(h,\psi)\big{]}\,\big{(}g,\psi(h,x)\big{)}=\psi(gh,x),\quad\forall\,\psi\in Y,\,g,h\in G,\,x\in X.\]

In this last statement, it is clear that all the information regarding the skew-product is contained in the properties of \(\sigma\). We formalize this discussion in the following definition:

**Definition 2.2**.: _A **skew-product dynamical system** is a quadruple \((X,G,Y,\sigma)\), where:_

* \(Y\subset\mathcal{C}\big{(}G\times X;X\big{)}\) _is admissible._
* \((Y,G,\sigma)\) _is a dynamical system, where_ \(\sigma:G\times Y\to Y\) _is a continuous left action._
* \(\big{[}\sigma(h,\psi)\big{]}\,\big{(}g,\psi(h,x)\big{)}=\psi(gh,x)\) _for every_ \(\psi\in Y\)_,_ \(g,h\in G\) _and_ \(x\in X\)_._

**Remark 2.3**.: It is clear that if \((X,G,\alpha)\) is a topological dynamical system where \(\alpha\) is a continuous left action, then \((X,G,\{\alpha\},\mathfrak{q})\) is a skew-product, where \(\mathfrak{q}:G\times\{\alpha\}\to\{\alpha\}\) is the only possible action, _i.e._ the trivial action. In other words, a dynamical system is a skew-product where the hull contains only one function. Let us illustrate this construction on an example.

**Example 2.4**.: Consider \(\mathbb{R}^{d}\) as a topological space, hence \(\mathbb{A}_{X}\) is the group of all of its homeomorphisms. Set \(G=\mathbb{Z}\). Consider the following nonautonomous difference equation

\[x(n+1)=F\left(n,x(n)\right), \tag{2.3}\]

where for every \(n\in\mathbb{Z}\), \(F(n,\cdot)\in\mathbb{A}_{X}\). Let \(n\mapsto x(n,m,\xi)\) be its unique solution such that \(x(m,m,\xi)=\xi\). Define \(Y=\{\psi_{m}:m\in\mathbb{Z}\}\), the collection of all solutions of (2.3) parameterized by their _temporal_ initial condition, _i.e._

\[\psi_{m}(n,\xi)=x\big{(}n+m,m,\xi\big{)}.\]

Set now \(\sigma:\mathbb{Z}\times Y\to Y\) the action of _left translations on the temporal initial condition_, that is

\[\sigma(n,\psi_{m})=\psi_{n+m},\quad\forall\,n,m\in\mathbb{Z},\]

or, evaluating,

\[\big{[}\sigma\left(n,\psi_{m}\right)\big{]}\,(p,\xi)=\psi_{n+m}(p,\xi)=x\big{(}p+n+m,n+m,\xi\big{)},\quad\forall\,p,n,m\in\mathbb{Z},\,\xi\in\mathbb{R}^{d},\]

and its associated skew-flow \(\pi:\mathbb{R}^{d}\!\times\!Y\!\times\!\mathbb{Z}\to\mathbb{R}^{d}\!\times\!Y\) given by \(\pi\left(\xi,\psi_{m},n\right)=\big{(}\psi_{m}(n,\xi),\psi_{n+m}\big{)}\). Evaluating, we have

\[\begin{split}\widetilde{\pi}(\xi,\psi_{m},p,n)&=\psi_{n+m}\left(p,\psi_{m}\big{(}n,\xi\big{)}\right)\\ &=x\left(p+n+m,n+m,x\big{(}n+m,m,\xi\big{)}\right),\quad\forall\,p,n,m\in\mathbb{Z},\,\xi\in\mathbb{R}^{d}.\end{split}\]

On the other hand, note that

\[\widetilde{\Theta}(\xi,\psi_{m},p,n)=\psi_{m}(p+n,\xi)=x\big{(}p+n+m,m,\xi\big{)},\quad\forall\,p,n,m\in\mathbb{Z},\,\xi\in\mathbb{R}^{d}.\]

Now, it is well known that, by uniqueness of solutions, we have

\[x\left(p+n+m,n+m,x\big{(}n+m,m,\xi\big{)}\right)=x\big{(}p+n+m,m,\xi\big{)},\quad\forall\,p,n,m\in\mathbb{Z},\,\xi\in\mathbb{R}^{d},\]

thus \(\widetilde{\Theta}\) and \(\widetilde{\pi}\) coincide, which implies that \(\big{(}\mathbb{R}^{d},\mathbb{Z},\{\psi_{m}:m\in\mathbb{Z}\},\sigma\big{)}\) is indeed a skew-product dynamical system.
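The compatibility condition in Example 2.4 can also be checked numerically. The short Python sketch below is our own illustration for an arbitrary invertible map \(F\); it verifies that \(\psi_{n+m}\big(p,\psi_{m}(n,\xi)\big)=\psi_{m}(p+n,\xi)\) for nonnegative translations, which is exactly the coincidence of \(\widetilde{\pi}\) and \(\widetilde{\Theta}\) above.

```python
# Numerical check of the compatibility condition of Example 2.4 for the
# nonautonomous difference equation x(n+1) = F(n, x(n)).
# The map F is an arbitrary illustrative choice (affine, hence invertible in x).
def F(n, x):
    return 0.5 * x + (-1.0) ** n

def solution(n, m, xi):
    """x(n, m, xi): iterate the equation forward from time m up to time n >= m."""
    x = xi
    for k in range(m, n):
        x = F(k, x)
    return x

def psi(m, n, xi):
    """psi_m(n, xi) = x(n + m, m, xi)."""
    return solution(n + m, m, xi)

p, n, m, xi = 3, 2, 1, 0.7
lhs = psi(n + m, p, psi(m, n, xi))     # skew-flow side, pi-tilde
rhs = psi(m, p + n, xi)                # translation side, Theta-tilde
assert abs(lhs - rhs) < 1e-12
```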
In a presentation given in 2004, S. Elaydi and R. J. Sacker [4] give further applications of skew-products to the theory of nonautonomous difference equations, such as the search for asymptotically stable solutions of Beverton-Holt equations [3]. On the other hand, in [9] the authors use this structure to develop the exponential dichotomy spectrum for nonautonomous linear differential equations when the admissible space of functions is compact.

## 3. Groupoid morphisms

In this section we present the structure of groupoid morphisms as an algebraic alternative to skew-products and a generalization of group morphisms, which are known to describe dynamical systems.

**Definition 3.1**.: _[_11_, Definition 1.2]_ _We say that a set \(\Xi\), endowed with a subset \(\Xi^{(2)}\subset\Xi\times\Xi\) (called the collection of composite pairs) and two maps \(\bullet:\Xi^{(2)}\to\Xi\) given by \((\eta,\xi)\mapsto\eta\bullet\xi\) (called the composition law), and \(\mathfrak{inv}:\Xi\to\Xi\), is a **groupoid** if the following conditions are verified_

* _(associativity) If_ \((\eta,\xi)\)_,_ \((\xi,\zeta)\in\Xi^{(2)}\)_, then_ \((\eta\bullet\xi,\zeta)\)_,_ \((\eta,\xi\bullet\zeta)\in\Xi^{(2)}\) _and_ \((\eta\bullet\xi)\bullet\zeta=\eta\bullet(\xi\bullet\zeta)\)_,_
* _(involution)_ \(\mathfrak{inv}(\mathfrak{inv}(\eta))=\eta\) _for every_ \(\eta\in\Xi\)_,_
* _(identity) for every_ \(\eta\in\Xi\)_, we have_ \((\eta,\mathfrak{inv}(\eta))\in\Xi^{(2)}\) _and_ \((\eta,\xi)\in\Xi^{(2)}\) _implies that_ \(\mathfrak{inv}(\eta)\bullet(\eta\bullet\xi)=\xi\) _and_ \((\eta\bullet\xi)\bullet\mathfrak{inv}(\xi)=\eta\)_._

_Moreover, we define \(\Xi^{(0)}:=\big{\{}\eta\in\Xi:\eta=\mathfrak{inv}(\eta)=\eta\bullet\eta\big{\}}\) and call it the **unit space** of the groupoid._

**Example 3.2**.: Consider a topological space \(\mathfrak{T}\), the collection \(\widetilde{\Xi}=\big{\{}\eta:[0,1]\to\mathfrak{T}:\,\eta\text{ is continuous}\big{\}}\) and the quotient \(\Xi=\widetilde{\Xi}/\sim\), where \(\sim\) is the equivalence relation given by homotopies that fix start and end. Endowing \(\Xi\) with

\[\Xi^{(2)}:=\left\{(\eta,\xi)\in\Xi^{2}:\eta(1)=\xi(0)\right\},\]

and

\[(\eta\bullet\xi)(t)=\left\{\begin{array}{l}\eta(2t)\qquad\quad\text{if}\quad 0\leq t\leq 1/2\\ \\ \xi(2t-1)\quad\text{if}\quad 1/2\leq t\leq 1\end{array}\right.,\qquad\mathfrak{inv}(\eta)(t)=\eta(1-t),\]

it is easily deduced that \(\Xi\) is a groupoid, which is usually called the **paths groupoid** with the operation of **concatenation**. In this case, the unit space corresponds to the collection of the homotopy classes of constant paths.

**Example 3.3**.: The most trivial example of a groupoid is a group \(G\), where \(\bullet\) is the composition law of the group and \(\mathfrak{inv}(g)=g^{-1}\). In this case \(G^{(2)}=G^{2}\) and \(G^{(0)}=\{e\}\).

**Example 3.4**.: If a group \(G\) acts on the left on a set \(M\), the product \(M\!\times\!G\) has a groupoid structure. Indeed, setting

\[(M\!\times\!G)^{(2)}:=\left\{\big{(}(x,g),(y,h)\big{)}\in(M\!\times\!G)^{2}:x=h\cdot y\right\},\]

and

\[\bullet\big{(}(x,g),(y,h)\big{)}=(y,gh),\quad\mathfrak{inv}(x,g)=(g\cdot x,g^{-1}),\]

the groupoid axioms are easily verified. In this case, the unit space corresponds to the collection of points of the form \((x,e)\), where \(e\) is the group unit. Analogously we can define a groupoid for right actions. This example is particularly useful when \(G\) acts on itself by left translations, in which case we call \(G{\times}G\) the **left translations groupoid** for \(G\). Note that the groupoid \(G{\times}G\) is never commutative, even if \(G\) is Abelian, since \(\big{(}(g,h),(k,l)\big{)}\in\left(G{\times}G\right)^{(2)}\) does not imply \(\big{(}(k,l),(g,h)\big{)}\in\left(G{\times}G\right)^{(2)}\).
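As a concrete illustration of the left translations groupoid, the following sketch (our own code, over the additive group \(\mathbb{Z}/12\mathbb{Z}\)) implements the composability condition, the composition law and the involution of Example 3.4, and checks the identity axiom exhaustively.

```python
# The left translations groupoid G x G for G = Z/12Z (written additively),
# following Example 3.4 with the left action g . x = g + x (mod 12).
N = 12
act = lambda g, x: (g + x) % N                   # left action of G on itself

def composable(a, b):
    (x, _), (y, h) = a, b
    return x == act(h, y)                        # composability: x = h . y

def compose(a, b):
    (_, g), (y, h) = a, b
    return (y, (g + h) % N)                      # ((x,g),(y,h)) -> (y, g+h)

def inv(a):
    x, g = a
    return (act(g, x), (-g) % N)                 # (x,g) -> (g.x, g^{-1})

# Identity axiom: inv(a) . (a . b) = b whenever (a, b) is a composite pair.
for g in range(N):
    for h in range(N):
        for y in range(N):
            a, b = (act(h, y), g), (y, h)
            assert composable(a, b)
            assert compose(inv(a), compose(a, b)) == b
```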
**Definition 3.5**.: _[_11_, Definition 1.8]_ _If \(\Xi\) and \(\Upsilon\) are groupoids with composition laws \(\bullet\) and \(\star\) respectively, a **groupoid morphism** is a map \(\vartheta:\Xi\to\Upsilon\) verifying that \((\eta,\xi)\in\Xi^{(2)}\) implies \(\big{(}\vartheta(\eta),\vartheta(\xi)\big{)}\in\Upsilon^{(2)}\) and \(\vartheta(\eta\bullet\xi)=\vartheta(\eta)\star\vartheta(\xi)\)._ **Example 3.6**.: Consider an element \(X\) on some category \(\mathscr{C}\). Fix a group \(G\) and a group morphism \(\gamma:G\to\mathbb{A}_{X}\), and give \(G{\times}G\) the left translations groupoid structure. By setting \(Z:G{\times}G\to\mathbb{A}_{X}\) by \(Z(g,h)=\gamma(h)\) we obtain a groupoid morphism. Indeed: \[Z(g,kh)=\gamma(kh)=\gamma(k)\gamma(h)=Z(hg,k)Z(g,h).\] **Remark 3.7**.: This example shows that every dynamical system is in particular given by a groupoid morphism of this kind, since a dynamical system is always given by a group morphism \(\gamma:G\to\mathbb{A}_{X}\). **Example 3.8**.: Consider the left translations groupoid structure on \(\mathbb{R}{\times}\mathbb{R}\). Consider \(X=\mathbb{R}^{d}\) as a topological space, thus \(\mathbb{A}_{X}\) is the group of all of its homeomorphisms. Consider a nonautonomous real differential equation \(\dot{x}=F(t,x)\) such that for every \(\xi\in\mathbb{R}^{d}\) and every \(r\in\mathbb{R}\), there is a unique and globally defined solution \(x_{r,\xi}:\mathbb{R}\to\mathbb{R}^{d}\) such that \(x_{r,\xi}(r)=\xi\). Then the map \(Z:\mathbb{R}{\times}\mathbb{R}\to\mathbb{A}_{X}\) given by \(\big{[}Z(r,t)\big{]}\left(\xi\right)=x_{r,\xi}(t+r)\) is a groupoid morphism. Indeed: \[\big{[}Z(r,t+s)\big{]}\left(\xi\right)=x_{r,\xi}(t+s+r)=x_{s+r,x_{r,\xi}(s+r)} (t+s+r)=\big{[}Z(s+r,t)\circ Z(r,s)\big{]}\left(\xi\right),\] where the second equality is a well known fact deduced from the uniqueness of solutions. Moreover, giving \(\mathbb{A}_{X}\) the compact-open topology and the groupoid \(\mathbb{R}{\times}\mathbb{R}\) the product topology, \(Z\) turns out to be continuous. In order for this manuscript to be self-contained, we will prove some basic lemmas. **Lemma 3.9**.: _Let \(\Xi\) and \(\Upsilon\) be groupoids, let \(\vartheta:\Xi\to\Upsilon\) be a groupoid morphism, and let us denote both composition laws by \(\cdot\) and both involutions by \(\mathfrak{inv}\). Then \(\mathfrak{inv}(\vartheta(\eta))=\vartheta(\mathfrak{inv}(\eta))\) for all \(\eta\in\Xi\)._ Proof.: We know that \[\vartheta(\eta)\cdot\vartheta(\mathfrak{inv}(\eta))\cdot\vartheta(\eta)= \vartheta(\eta\cdot\mathfrak{inv}(\eta)\cdot\eta)=\vartheta(\eta),\] thus, by the axioms of identity and associativity we have \[\vartheta(\mathfrak{inv}(\eta))=\vartheta(\mathfrak{inv}(\eta)) \cdot\Big{(}\vartheta(\eta)\cdot\mathfrak{inv}\left[\vartheta(\eta)\right] \Big{)} =\left(\mathfrak{inv}\left[\vartheta(\eta)\right]\cdot\vartheta( \eta)\right)\cdot\vartheta(\mathfrak{inv}(\eta))\cdot\Big{(}\vartheta(\eta) \cdot\mathfrak{inv}\left[\vartheta(\eta)\right]\Big{)}\] \[=\mathfrak{inv}\left[\vartheta(\eta)\right]\cdot\big{(}\vartheta (\eta)\cdot\vartheta(\mathfrak{inv}(\eta))\cdot\vartheta(\eta)\big{)}\cdot \mathfrak{inv}\left[\vartheta(\eta)\right]\] \[=\mathfrak{inv}\left[\vartheta(\eta)\right].\] **Lemma 3.10**.: _Let \(\Xi\) and \(\Upsilon\) be groupoids and let us denote both composition laws by \(\cdot\) and both involutions by \(\mathfrak{inv}\). A groupoid morphism \(\vartheta:\Xi\to\Upsilon\) maps \(\Xi^{(0)}\) into \(\Upsilon^{(0)}\)._ Proof.: Let \(\eta\in\Xi^{(0)}\), then \((\eta,\eta)\in\Xi^{(2)}\), since \(\eta=\mathfrak{inv}(\eta)\). 
Now \(\vartheta(\eta)=\vartheta(\eta\cdot\eta)=\vartheta(\eta)\cdot\vartheta(\eta)\). Moreover, by the previous lemma we have \[\mathfrak{inv}(\vartheta(\eta))=\vartheta(\mathfrak{inv}(\eta))=\vartheta(\eta),\] hence \(\vartheta(\eta)=\vartheta(\eta)\cdot\vartheta(\eta)=\mathfrak{inv}(\vartheta( \eta))\), thus \(\vartheta(\eta)\in\Upsilon^{(0)}\). Now we present our main theorem. Although we state it for the topological case, it is easily generalized for objects and skew-products on different categories. **Theorem 3.11**.: _Let \(X\) be a locally compact Hausdorff topological space and \(G\) a locally compact Hausdorff topological group. Give all function spaces, including \(\mathbb{A}_{X}\), the compact-open topology. There is a bijective correspondence between skew-product dynamical systems \((X,G,Y,\sigma)\), where the action \(\sigma\) is transitive, and continuous groupoid morphisms \(Z:G\!\times\!G\to\mathbb{A}_{X}\), where \(G\!\times\!G\) has the left translations groupoid structure._ Proof.: Let \((X,G,Y,\sigma)\) be a skew-product dynamical system with \(\sigma\) transitive. The admissibility condition implies that \(Y_{G}\subset\mathbb{A}_{X}\). On the other hand, as \(\sigma\) is transitive, basic theory of group actions states that we can identify \(Y\) with the quotient group \(G/\mathrm{stab}(\sigma)\), where \[\mathrm{stab}(\sigma)=\left\{g\in G:\sigma(g,y)=y,\,\forall\,y\in Y\right\}.\] Hence, we write \(Y=\left\{\psi_{\overline{g}}:g\in G\right\}\), where \(\overline{g}\) denotes the class of \(g\) in the quotient \(G/\mathrm{stab}(\sigma)\). Moreover, the left action \(\sigma:G\!\times\!Y\to Y\) is rewritten as the left action \(\tilde{\sigma}:G\!\times\!Y\to Y\) given by \(\tilde{\sigma}(h,\psi_{\overline{g}})=\psi_{\overline{hg}}\), that is, it is identified with the action of left translations of \(G\) on the quotient \(G/\mathrm{stab}(\sigma)\). Now, we can define the continuous map \(Z:G\!\times\!G\to\mathbb{A}_{X}\) given by \[Z(g,h)=\widetilde{\mathfrak{e}}_{G}(\psi_{\overline{g}},h)=\psi_{\overline{g }}(h,\cdot),\] and we have \[Z(g,kh)=Z(hg,k)\circ Z(g,h) \Leftrightarrow \left[Z(g,kh)\right](x)=\left[Z(hg,k)\circ Z(g,h)\right](x),\quad \forall\,x\in X\] \[\Leftrightarrow \left[\widetilde{\mathfrak{e}}_{G}(\psi_{\overline{g}},kh)\right] (x)=\widetilde{\mathfrak{e}}_{G}(\psi_{\overline{hg}},k)\left[\left[ \widetilde{\mathfrak{e}}_{G}(\psi_{\overline{g}},h)\right](x)\right],\quad \forall\,x\in X\] \[\Leftrightarrow \psi_{\overline{g}}(kh,x)=\psi_{\overline{hg}}\left(k,\psi_{ \overline{g}}(h,x)\right),\quad\forall\,x\in X\] \[\Leftrightarrow \psi_{\overline{g}}(kh,x)=\left[\tilde{\sigma}(h,\psi_{\overline {g}})\right]\left(k,\psi_{\overline{g}}(h,x)\right),\quad\forall\,x\in X,\] and as the last condition is guaranteed by the third axiom of skew-products (see Definition 2.2), \(Z\) is indeed a groupoid morphism. Conversely, given a continuous groupoid morphism \(Z:G\!\times\!G\to\mathbb{A}_{X}\), for each \(g\in G\) we define \(\psi_{g}:G\!\times\!X\to X\) given by \(\psi_{g}(h,x)=\left[Z(g,h)\right](x)\). Then, defining \(Y:=\left\{\psi_{g}:g\in G\right\}\), Lemma 3.10 implies that \(Y\) is admissible. On the other hand, defining the action \(\sigma:G\times Y\to Y\) given by \(\sigma(h,\psi_{g})=\psi_{hg}\), or equivalently \[\left[\sigma(h,\psi_{g})\right](k,x)=\left[Z(hg,k)\right](x),\] we obtain, by the same previous argument, that \((X,G,Y,\sigma)\) is a skew-product dynamical system, where the action \(\sigma\) is clearly transitive. 
**Remark 3.12**.: The previous theorem illustrates that all the generalizations of dynamical systems we can obtain from skew-products, are also covered by this kind of groupoid morphisms. However, the virtue of the latter is that they more clearly represent the algebraic structure behind these dynamics, just as group morphisms represent the algebraic structure of group actions. Now we will give further properties and applications for this kind of groupoid morphisms. In the following, we fix a group \(G\), an element \(X\) on some category \(\mathscr{C}\) and will be interested on groupoid morphisms \(Z:G\!\times\!G\to\mathbb{A}_{X}\), where \(G\!\times\!G\) has the left translations groupoid structure. **Notation 3.13**.: For a groupoid morphism \(Z:G\!\times\!G\to\mathbb{A}_{X}\) we set \(Z^{\mathrm{inv}}:G\!\times\!G\to\mathbb{A}_{X}\), the map given by \(Z^{\mathrm{inv}}(g,h)=\left[Z(g,h)\right]^{-1}\). We do not use \(Z^{-1}\) in order to avoid confusion with a possible inverse function. **Lemma 3.14**.: _Set \(Z:G\!\times\!G\to\mathbb{A}_{X}\) a groupoid morphism. If \(e\in G\) is the group unit, then for every \(g,h\in G\) it is verified_ \[Z(g,e)=\mathrm{Id}\quad\text{and}\quad Z^{\mathrm{inv}}(g,h)=Z(hg,h^{-1}).\] Proof.: The first equality follows immediately from Lemma 3.10. For the second it is enough to see that \[\mathrm{Id}=Z(g,e)=Z(g,h^{-1}h)=Z(hg,h^{-1})Z(g,h).\] **Remark 3.15**.: The previous lemma shows that the invertibility of \(Z(g,h)\) depends heavily on the invertibility of \(h\) in \(G\). We could define a similar theory replacing the group by a semigroup, but in that case we lose the invertibility of \(Z(g,h)\) and the properties we can deduce from it. **Proposition 3.16**.: _let \(Z:G\!\times\!G\to\mathbb{A}_{X}\) be a groupoid morphism. Let \(\gamma:G\to\mathbb{A}_{X}\) be a group morphism such that \(\gamma(hk)=\gamma(kh)\) for every \(h,k\in G\) and_ \[\gamma(k)Z(g,h)=Z(g,h)\gamma(k),\qquad\forall\,g,k,h\in G,\] _then \(W:G\!\times\!G\to\mathbb{A}_{X}\) given by \(W(g,h)=Z(g,h)\gamma(h)\) is a groupoid morphism_ Proof.: It is enough to see that \[W(g,hk)=Z(g,hk)\gamma(hk)=Z(g,hk)\gamma(kh) =Z(hg,k)Z(g,h)\gamma(k)\gamma(h)\] \[=Z(hg,k)\gamma(k)Z(g,h)\gamma(h)\] \[=W(hg,k)W(g,h).\] **Example 3.17**.: Consider once more on \(\mathbb{R}\!\times\!\mathbb{R}\) with the translations groupoid structure. Consider \(X=\mathbb{R}^{d}\) as a Banach space, thus \(\mathbb{B}_{X}\) is the Banach algebra of linear continuous operators and \(\mathbb{A}_{X}\) is the topological group of all of its homeomorphic isomorphisms, _i.e._\(GL_{d}(\mathbb{R})\). Consider a real linear nonautonomous differential equation \(\dot{x}=A(t)x(t)\). Consider \(\Phi:\mathbb{R}\!\times\!\mathbb{R}\to\mathbb{A}_{X}\) the transition matrix associated to this equation. We can see, as in Example 3.8, that \(Z:\mathbb{R}\!\times\!\mathbb{R}\to\mathbb{A}_{X}\) given by \(Z(r,t)=\Phi(r,t+r)\) defines a groupoid morphism. Choose \(\lambda\in\mathbb{R}\) and define \(\gamma:\mathbb{R}\to\mathbb{A}_{X}\) given by \(\gamma(t)=e^{-\lambda t}\cdot\mathrm{Id}\). It is easy to see that it is a group morphism which verifies the conditions of the previous proposition. Hence, \(W:\mathbb{R}\times\mathbb{R}\to\mathbb{A}_{X}\) given by \(W(r,t)=Z(r,t)\gamma(t)\) is a groupoid morphism. 
Moreover, in this case \(W\) is once more the groupoid morphism associated to a linear differential equation, since it is obtained in the same fashion as \(Z\), but regarding the shifted linear nonautonomous differential equation \(\dot{x}=\left[A(t)-\lambda\cdot\mathrm{Id}\right]x(t)\). Before studying more properties for this kind of groupoid morphisms, we will introduce a specific type of them in order to highlight the difference between this construction and classic dynamical systems. Later, this will also show the difference between autonomous and nonautonomous equations. This motivates the following definition: **Definition 3.18**.: _Given a groupoid morphism \(Z:G\!\times\!G\to\mathbb{A}_{X}\), we say it is **autonomous** if \(Z(g,h)=Z(k,h)\) for every \(g,h,k\in G\)._ **Proposition 3.19**.: _A groupoid morphism \(Z:G\!\times\!G\to\mathbb{A}_{X}\) is autonomous if and only if there is a group morphism \(\gamma:G\to\mathbb{A}_{X}\) such that \(Z(g,h)=\gamma(h)\)._ Proof.: We saw on Example 3.6 that if \(\gamma:G\to\mathbb{A}_{X}\) is a group morphism and we define \(Z:G\!\times\!G\to\mathbb{A}_{X}\) by \(Z(g,h)=\gamma(h)\) we obtain a groupoid morphism, which is autonomous by construction. Conversely, if \(Z\) is an autonomous groupoid morphism, define \(\gamma:G\to\mathbb{A}_{X}\) by \(\gamma(g)=Z(e,g)\). It is easy to see that \(\gamma\) is a group morphism, since \[\gamma(kh)=Z(e,kh)=Z(h,k)Z(e,h)=Z(e,k)Z(e,h)=\gamma(k)\gamma(h).\] Note once more, as in Remark 3.12, how this proposition illustrates that this kind of groupoid morphisms generalize the notion of dynamical systems on any given category (as for instance, those given by autonomous differential equations, hence the name), since those are always given by a group morphism \(\gamma:G\to\mathbb{A}_{X}\), but clearly there are much more groupoid morphisms which are not autonomous. The following proposition shows the direct relation of groupoid morphisms when the group is \(\mathbb{Z}\) and nonautonomous difference equations. **Proposition 3.20**.: _Set \(X\) a Banach space, thus \(\mathbb{B}_{X}\) is the Banach algebra of linear continuous operators and \(\mathbb{A}_{X}\) is the topological group of its homeomorphic isomorphisms. There is a bijective correspondence between groupoid morphisms \(Z:\mathbb{Z}\times\mathbb{Z}\to\mathbb{A}_{X}\) and nonautonomous linear difference equations \(x(n+1)=A(n)x(n)\), with \(\mathbb{Z}\ni n\mapsto A(n)\in\mathbb{A}_{X}\), where the correspondence is given by_ \[Z(n,m)=\left\{\begin{array}{ll}A(n+m-1)A(n+m-2)\cdots A(n)&\text{ if }\quad m>0\\ \text{Id }&\text{if }\quad m=0\\ \\ A^{-1}(n+m)A^{-1}(n+m+1)\cdots A^{-1}(n-1)&\text{ if }\quad m<0.\end{array}\right.\] Proof.: By defining \(Z\) as in the statement of the proposition, starting from the function \(n\mapsto A(n)\in\mathbb{A}_{X}\), clearly we obtain a groupoid morphism. Conversely, if we start with a groupoid morphism, it is enough to define \(A(n)=Z(n,1)\) and we obtain the desired equation. In the next section we will show a continuous analogous to the previous proposition. An easy generalization of the preceding proposition is the next result (for which we do not give a proof, since it follows the same steps as before). **Proposition 3.21**.: _Let \(G\) be a discrete group with \(n\) generators \(\{\xi_{1},\dots,\xi_{n}\}\) and \(X\) an object on some category \(\mathscr{C}\). 
A multivariable difference equation of \(G\) on \(X\) is an equation of the form_ \[x(\eta\xi_{i})=A_{i}(\eta)x(\eta),\] _where each \(\xi_{i}\) has an associated map \(A_{i}:G\to\mathbb{A}_{X}\). A solution to this equation is a map \(x:G\to X\). There is a bijective correspondence between multivariable difference equations of \(G\) on \(X\) and groupoid morphisms \(Z:G\times G\to\mathbb{A}_{X}\), which is given by_ \[A_{i}(\eta)=Z\left(\eta,\xi_{i}\right).\] ## 4. Differentiable groupoid morphisms on Banach spaces In this section we study the differentiability of the type of groupoid morphisms presented in the previous section and analyze their relation to differential equations. Throughout this section we fix a Banach space \(X\) over the field \(\mathbb{K}\), which may be \(\mathbb{R}\) or \(\mathbb{C}\). We set * \(\mathbb{L}_{X}:=\mathbb{L}(X)\) the collection of all linear operators \(T:X\to X\) * \(\mathbb{B}_{X}:=\mathbb{B}(X)\) the Banach algebra of all continuous elements of \(\mathbb{L}_{X}\), given the operator norm * \(\mathbb{A}_{X}:=\mathbb{A}(X)\) the group of all invertible elements of \(\mathbb{B}_{X}\) whose inverse is also in \(\mathbb{B}_{X}\), as a topological subspace of \(\mathbb{B}_{X}\). It is a well known fact that if \(X\) has finite dimension \(d\), then \(\mathbb{L}_{X}=\mathbb{B}_{X}\) and \(\mathbb{A}_{X}\cong GL_{d}(\mathbb{K})\). The groupoid morphisms we will study in this section all have domain in the left translations groupoid \(\mathbb{K}\times\mathbb{K}\), but it is easy to see that our results generalize readily to groupoids given by Lie groups. We use Banach spaces in this section because we want \(\mathbb{B}_{X}\) to have the structure of a Banach algebra, so that we are able to study derivatives of groupoid morphisms in the sense of Gateaux; however, these constructions apply just as well in other categories that admit such a notion of derivative. It is worth noting that for groups it is well known that continuous morphisms between Lie groups are immediately smooth [6, Problem 20-11]; in the groupoid case, however, there are some differences, which we study in this section. To begin, consider the following notation: **Notation 4.1**.: Consider \(\varphi:\mathbb{K}\times\mathbb{K}\to\mathbb{B}_{X}\) and \(\psi:\mathbb{K}\to\mathbb{B}_{X}\). We set the following notation for these limits (independently of whether they exist or not): * \(\partial_{1}\varphi(r,t)=\lim_{h\to 0}\frac{\varphi(r+h,t)-\varphi(r,t)}{h}\), * \(\partial_{2}\varphi(r,t)=\lim_{h\to 0}\frac{\varphi(r,t+h)-\varphi(r,t)}{h}\), * \(\frac{d}{du}\left[\psi(u)\right]=\lim_{h\to 0}\frac{\psi(u+h)-\psi(u)}{h}\). Notation iii) will be useful when we want to apply derivatives to functions obtained as compositions of other functions, or when we want to emphasize the _variable_ with respect to which we differentiate, while notations i) and ii) are meant to emphasize the _position_ of the argument with respect to which we differentiate. **Lemma 4.2**.: _Let \(Z:\mathbb{K}\times\mathbb{K}\to\mathbb{A}_{X}\) be a continuous groupoid morphism. 
If for every \(t\in\mathbb{K}\) the map \(r\mapsto Z(r,t)\) is derivable at \(r=r_{0}\), for some \(r_{0}\in\mathbb{K}\), then the map \(r\mapsto Z^{\mathrm{inv}}(r,t)\) is derivable at \(r=r_{0}\) and_ \[\partial_{1}Z^{\mathrm{inv}}(r_{0},t)=-Z^{\mathrm{inv}}(r_{0},t)\Big{[} \partial_{1}Z(r_{0},t)\Big{]}Z^{\mathrm{inv}}(r_{0},t).\] Proof.: By the following \[Z(r,t)Z^{\mathrm{inv}}(r,t)=\mathrm{Id} \Rightarrow \frac{d}{dr}\left[Z(r,t)Z^{\mathrm{inv}}(r,t)\right]=0\] \[\Leftrightarrow \Big{[}\partial_{1}Z(r,t)\Big{]}\ Z^{\mathrm{inv}}(r,t)+Z(r,t) \Big{[}\partial_{1}Z^{\mathrm{inv}}(r,t)\Big{]}=0\] \[\Leftrightarrow \partial_{1}Z^{\mathrm{inv}}(r,t)=-Z^{\mathrm{inv}}(r,t)\Big{[} \partial_{1}Z(r,t)\Big{]}Z^{\mathrm{inv}}(r,t),\] it is easy to see that the derivatives \(\partial_{1}\) at \(r=r_{0}\) exist simultaneously for both \(r\mapsto Z(r,t)\) and \(r\mapsto Z^{\mathrm{inv}}(r,t)\). The following result is easily proved with the same demonstration. **Lemma 4.3**.: _Let \(Z:\mathbb{K}\times\mathbb{K}\to\mathbb{A}_{X}\) be a continuous groupoid morphism. If for every \(r\in\mathbb{K}\) the map \(t\mapsto Z(r,t)\) is derivable at \(t_{0}\), for some \(t_{0}\in\mathbb{K}\), then the map \(t\mapsto Z^{\mathrm{inv}}(r,t)\) is derivable at \(t=t_{0}\) and_ \[\partial_{2}Z^{\mathrm{inv}}(r,t_{0})=-Z^{\mathrm{inv}}(r,t_{0})\Big{[} \partial_{2}Z(r,t_{0})\Big{]}Z^{\mathrm{inv}}(r,t_{0}).\] **Lemma 4.4**.: _Let \(Z:\mathbb{K}\times\mathbb{K}\to\mathbb{A}_{X}\) be a continuous groupoid morphism. Suppose that for every \(t\in\mathbb{K}\) the map \(r\mapsto Z(r,t)\) is derivable at \(r=0\). Then for every \(t\in\mathbb{K}\) the function \(r\mapsto Z(r,t)\) is derivable and_ \[\partial_{1}Z(r,t)=\Big{[}\partial_{1}Z(0,t+r)\Big{]}Z(r,-r)-Z(r,t)\Big{[} \partial_{1}Z(0,r)\Big{]}Z(r,-r).\] Proof.: We suppose the following limit exists \[\lim_{h\to 0}\frac{Z(0+h,t)-Z(0,t)}{h}.\] We have \[\frac{Z(r+h,t)-Z(r,t)}{h} = \frac{Z(h,t+r)Z^{\mathrm{inv}}(h,r)-Z(0,r+t)Z^{\mathrm{inv}}(0,r)} {h}\] \[= \frac{Z(h,t+r)-Z(0,r+t)}{h}Z^{\mathrm{inv}}(h,r)\] \[+Z(0,r+t)\frac{Z^{\mathrm{inv}}(h,r)-Z^{\mathrm{inv}}(0,r)}{h}.\] Both addends at the right hand side have limit when \(h\to 0\) by hypothesis (and Lemma 4.2), thus the left hand side as limit as well, hence \(r\mapsto Z(r,t)\) is derivable at every point for every fixed \(t\in\mathbb{K}\) and \[\partial_{1}Z(r,t) = \Big{[}\partial_{1}Z(0,t+r)\Big{]}Z^{\mathrm{inv}}(0,r)+Z(0,r+t) \Big{[}\partial_{1}Z^{\mathrm{inv}}(0,r)\Big{]}\] \[= \Big{[}\partial_{1}Z(0,t+r)\Big{]}Z(r,-r)-Z(0,r+t)Z(r,-r)\Big{[} \partial_{1}Z(0,r)\Big{]}Z(r,-r)\] \[= \Big{[}\partial_{1}Z(0,t+r)\Big{]}Z(r,-r)-Z(r,t)\Big{[} \partial_{1}Z(0,r)\Big{]}Z(r,-r).\] Analogously we have the following result. **Lemma 4.5**.: _Let \(Z:\mathbb{K}\times\mathbb{K}\to\mathbb{A}_{X}\) be a continuous groupoid morphism. Suppose that for every \(r\in\mathbb{K}\) the function \(t\mapsto Z(r,t)\) is derivable at \(t=0\). Then for every \(r\in\mathbb{K}\) the function \(t\mapsto Z(r,t)\) is derivable and_ \[\partial_{2}Z(r,t)=\Big{[}\partial_{2}Z(r+t,0)\Big{]}Z(r,t).\] Proof.: Suppose that for every \(r\in\mathbb{K}\) the following limit exists \[\lim_{h\to 0}\frac{Z(r,h)-Z(r,0)}{h}.\] We have \[\frac{Z(r,t+h)-Z(r,t)}{h} = \frac{Z(r+t,h)Z(r,t)-Z(r,t)}{h}\] \[= \frac{Z(r+t,h)-\mathrm{Id}}{h}Z(r,t)\] \[= \frac{Z(r+t,h)-Z(r+t,0)}{h}Z(r,t).\] By taking limits \(h\to 0\) we obtain the desired identity. Now we show results which show how to deduce derivability respect to a coordinate when we have information about the other. 
**Lemma 4.6**.: _Let \(Z:\mathbb{K}\times\mathbb{K}\to\mathbb{A}_{X}\) be a continuous groupoid morphism such that for every \(r\in\mathbb{K}\) the map \(t\mapsto Z(r,t)\) is derivable. Then, the map \(r\mapsto Z(r,t)\) is derivable for every \(t\in\mathbb{K}\) and_ \[\partial_{1}Z(r,t)=\partial_{2}Z(r,t)-Z(r,t)\Big{[}\partial_{2}Z(r,0)\Big{]}.\] Proof.: Suppose that for every \(r,t\in\mathbb{K}\) the following limit exists \[\lim_{h\to 0}\frac{Z(r,t+h)-Z(r,t)}{h}.\] Note that \[\frac{Z(r,t+h)-Z(r,t)}{h} = \frac{Z(r+h,t)Z(r,h)-Z(r,t)}{h}\] \[= \frac{Z(r+h,t)Z(r,h)-Z(r,t)Z(r,h)}{h}+\frac{Z(r,t)Z(r,h)-Z(r,t)} {h}\] \[= \frac{Z(r+h,t)-Z(r,t)}{h}Z(r,h)+Z(r,t)\frac{Z(r,h)-Z(r,0)}{h}.\] from where, reorganizing terms and taking limits we obtain \[\lim_{h\to 0}\frac{Z(r+h,t)-Z(r,t)}{h} = \Big{[}\partial_{2}Z(r,t)\Big{]}Z(r,0)-Z(r,t)\Big{[}\partial_{2} Z(r,0)\Big{]}\] \[= \partial_{2}Z(r,t)-Z(r,t)\Big{[}\partial_{2}Z(r,0)\Big{]}.\] We would like to give a reciprocal result to Lemma 4.6, but we have not yet been able to prove or find a counterexample for this. **Lemma 4.7**.: _Let \(Z:\mathbb{K}\times\mathbb{K}\to\mathbb{A}_{X}\) a continuous groupoid morphism. If for every \(r\in\mathbb{K}\) the map \(t\mapsto Z(r,t)\) is derivable, then \(Z\) is differentiable as a two-variable function._ Proof.: From Lemma 4.6 we know that as for every \(r\in\mathbb{K}\) the map \(t\mapsto Z(r,t)\) is derivable, then \(r\mapsto Z(r,t)\) is derivable for every \(t\in\mathbb{R}\). Hence, for \(h_{1},h_{2}\in\mathbb{K}\), with \(h_{2}\neq 0\) and \(h_{1}+h_{2}\neq 0\), we have \[\frac{Z(r+h_{1},t+h_{2})-Z(r,t)}{|h_{1}|+|h_{2}|}= \frac{Z(r+h_{1}+h_{2},t)-Z(r,t)}{|h_{1}|+|h_{2}|}Z(r+h_{1},h_{2})\] \[+Z(r,t)\frac{Z(r+h_{1},h_{2})-\mathrm{Id}}{|h_{1}|+|h_{2}|}\] \[= \frac{h_{1}+h_{2}}{|h_{1}|+|h_{2}|}\frac{Z(r+h_{1}+h_{2},t)-Z(r,t )}{h_{1}+h_{2}}Z(r+h_{1},h_{2})\] \[+Z(r,t)\frac{Z(r+h_{1},h_{2})-Z(r+h_{1},0)}{h_{2}}\frac{h_{2}}{|h _{1}|+|h_{2}|}.\] All elements at the right hand side on the last equality have limit when \((h_{1},h_{2})\to 0\), since \(Z\) is derivable respect to each of its variables. If either \(h_{2}=0\) or \(h_{1}+h_{2}=0\), some terms nullify at the second equality. As a summary of the previous lemmas we state the following: **Corollary 4.8**.: _For a continuous groupoid morphism \(Z:\mathbb{K}\!\times\!\mathbb{K}\to\mathbb{A}_{X}\), the following statements are equivalent:_ * \(t\mapsto Z(r,t)\) _is derivable at_ \(t=0\)_, for every_ \(r\in\mathbb{K}\)_._ * \(t\mapsto Z(r,t)\) _is derivable for every_ \(r\in\mathbb{K}\)_._ * \(Z\) _is differentiable._ _Moreover, each one of them implies the following statements, which are equivalent:_ * \(r\mapsto Z(r,t)\) _is derivable at_ \(r=0\)_, for every_ \(t\in\mathbb{K}\)_._ * \(r\mapsto Z(r,t)\) _is derivable for every_ \(t\in\mathbb{K}\)_._ In the following we study the relation of groupoid morphisms and nonautonomous differential equations. As we want to consider more general spaces than \(\mathbb{R}^{d}\), let us consider the following definition: **Definition 4.9**.: _A linear nonautonomous differential equation on \(X\) is an equation of the form_ \[\frac{dx}{dt}=A(t)x(t),\] _where \(t\mapsto A(t)\in\mathbb{L}_{X}\), and its solutions are differentiable maps \(x:\mathbb{K}\to X\)._ Even in the autonomous case (_i.e._, \(t\mapsto A(t)\) is constant) the existence of solutions of these kind of equations can be a hard problem when \(X\) is an infinite dimensional space. 
On the other hand, a usual generalization is to search for solutions that are only partially defined, for instance with domain \([0,\infty)\subset\mathbb{R}\). Moreover, on infinite dimensional spaces it may be interesting to study such equations where \(A\) is not globally defined, but only on a dense subspace, in which case it can be quite difficult to find solutions defined on a compact interval like \([0,t]\), in contrast to the solutions defined on the whole group \(\mathbb{K}\) as we propose. For more details we refer the reader to [8, Chapter 4]. On the other hand, the nonautonomous case presents even greater difficulties. In finite dimension, it is well known that it is enough to ask the function \(t\mapsto A(t)\) to be locally integrable in order to guarantee the existence and uniqueness of solutions, _i.e._, the existence of an evolution matrix. On infinite dimensional spaces, the problem of the existence of globally defined solutions is much harder. A partial result states that if \(t\mapsto A(t)\in\mathbb{B}_{X}\) is continuous under the uniform operator norm, then we have the existence and uniqueness of solutions defined on a bounded interval [8, Theorem 5.1.1]. In general, the problem of existence and uniqueness of globally defined solutions has not been solved; we only know specific conditions under which we have this, such as hyperbolicity or parabolicity (for more details we refer the reader to [8, Chapter 5]). We dedicate the end of this section to give a (partial) answer to this problem using the structure of groupoid morphisms. **Proposition 4.10**.: _Let \(Z:\mathbb{K}\!\times\!\mathbb{K}\to\mathbb{A}_{X}\) be a continuous groupoid morphism. Suppose the map \(t\mapsto Z(r,t)\) is derivable for every \(r\in\mathbb{K}\). Define \(A:\mathbb{K}\to\mathbb{L}_{X}\) and the operator \(\Psi:\mathbb{K}^{2}\to\mathbb{A}_{X}\) by_ \[A(u):=\partial_{2}Z(u,0),\qquad\Psi(u,v):=Z(v,u-v),\] _then the following are verified:_ * _i)_ \(\Psi(u,v)\Psi(v,w)=\Psi(u,w)\) _for every_ \(u,v,w\in\mathbb{K}\)_,_ * _ii)_ \(\frac{d\Psi}{du}(u,v)=A(u)\Psi(u,v)\)_,_ * _iii)_ \(\frac{d\Psi}{dv}(u,v)=-\Psi(u,v)A(v)\)_,_ * _iv) for every_ \(\xi\in X\)_, the map_ \(\psi_{v,\xi}:\mathbb{K}\to X\)_, given by_ \(\psi_{v,\xi}(u)=\left[\Psi(u,v)\right](\xi)\)_, verifies_ \(\psi_{v,\xi}(v)=\xi\) _and is a solution to the equation_ \[\frac{dx}{du}=A(u)x(u). \tag{4.1}\] Proof.: It is easy to see that each \(A(u)\) is a linear transformation of \(X\) (however, we cannot ensure in general that it is continuous). Similarly, we cannot in general state that \(u\mapsto A(u)\) is continuous (even when \(\mathbb{L}_{X}\) has a topology that extends that of \(\mathbb{B}_{X}\)). To verify _i)_ it is enough to see: \[\Psi(u,v)\Psi(v,w)=Z(v,u-v)Z(w,v-w)=Z\left(w,(v-w)+(u-v)\right)=Z(w,u-w)=\Psi(u,w).\] On the other hand, in Lemma 4.5 we proved the identity \[\partial_{2}Z(r,t)=\Big{[}\partial_{2}Z(r+t,0)\Big{]}Z(r,t),\] hence \[\frac{d\Psi}{du}(u,v)=\frac{d}{du}\left[Z(v,u-v)\right]=\partial_{2}Z(v,u-v)= \Big{[}\partial_{2}Z(v+u-v,0)\Big{]}Z(v,u-v)=A(u)\Psi(u,v),\] from which _ii)_ follows. Then, trivially \(\psi_{v,\xi}\) is a solution to (4.1) and \[\psi_{v,\xi}(v)=\big{[}\Psi(v,v)\big{]}\left(\xi\right)=\big{[}Z(v,v-v)\big{]} \left(\xi\right)=\mathrm{Id}_{X}(\xi)=\xi,\] thus verifying _iv)_. 
Finally, note that \[\Psi(u,v)=Z(v,u-v)=Z(0,u)Z(v,-v)=Z(0,u)Z^{\mathrm{inv}}(0,v),\] hence, using the identity from Lemma 4.3 we obtain: \[\frac{d\Psi}{dv}(u,v)=Z(0,u)\frac{d}{dv}\left[Z^{\mathrm{inv}}(0, v)\right] =Z(0,u)\Big{[}\partial_{2}Z^{\mathrm{inv}}(0,v)\Big{]}\] \[=-Z(0,u)Z^{\mathrm{inv}}(0,v)\Big{[}\partial_{2}Z(0,v)\Big{]}Z^ {\mathrm{inv}}(0,v)\] \[=-\Psi(u,v)A(v),\] where the last equality follows from the identity of Lemma 4.5 and the definition of \(A\), thus proving _iii)_. The existence of a function with the properties of \(\Psi\) in the previous proposition is exactly what is needed to describe globally defined solutions, for every initial condition, to a linear differential equation. It is easy to see that such functions are a generalization of the well-known transition matrices in finite dimension. To formalize this notion we state the following definition, adapted from [8, Definition 5.1.3]. **Definition 4.11**.: _A map \(\Psi:\mathbb{K}\times\mathbb{K}\to\mathbb{B}_{X}\) is an **evolution operator** (or evolution system) if the following conditions are verified_ * \(\Psi(r,r)=\mathrm{Id}\)_,_ \(\Psi(r,t)\Psi(t,s)=\Psi(r,s)\) _for every_ \(r,s,t\in\mathbb{K}\)_,_ * \((r,t)\to\Psi(r,t)\in\mathbb{B}_{X}\) _is strongly continuous, i.e.,_ \((r,t)\to\big{[}\Psi(r,t)\big{]}\left(x\right)\in X\) _is continuous for every_ \(x\in X\)_._ _If, furthermore, there is a map \(t\mapsto A(t)\in\mathbb{L}_{X}\) such that_ * \(\partial_{1}\Psi(r,t)=A(r)\Psi(r,t)\) _and_ \(\partial_{2}\Psi(r,t)=-\Psi(r,t)A(t)\)_,_ _we say that \(\Psi\) is the **evolution operator associated** to the differential equation \(\dot{x}=A(t)x(t)\)._ In [8, Theorem 4.1.3] it is proved that a real autonomous linear differential equation \(\dot{x}=Ax\) on a Banach space has a solution defined on \([0,\infty)\) and uniquely defined for every initial condition on \(X\) if and only if \(A\) is a linear operator, defined on a dense subspace of \(X\), which is obtained as the strong derivative at the origin of a semigroup morphism \(T:[0,\infty)\to\mathbb{B}_{X}\). Inspired by this, we present the following definition and theorem. **Definition 4.12**.: _For a differentiable groupoid morphism \(Z:\mathbb{K}\times\mathbb{K}\to\mathbb{A}_{X}\), its **infinitesimal generator** is the function \(A:\mathbb{K}\to\mathbb{L}_{X}\) given by \(A(t)=\partial_{2}Z(t,0)\)._ Note that the infinitesimal generator of a groupoid morphism corresponds to the derivative with respect to the second coordinate evaluated on the unit space \(\left(\mathbb{K}\times\mathbb{K}\right)^{(0)}\) of the groupoid. **Theorem 4.13**.: _A linear nonautonomous differential equation on a Banach space \(X\)_ \[\dot{x}(t)=A(t)x(t), \tag{4.2}\] _has an evolution operator associated to it (i.e., a unique and globally defined solution for every initial condition in \(X\)) if and only if \(A\) is the infinitesimal generator of a differentiable groupoid morphism \(Z:\mathbb{K}\times\mathbb{K}\to\mathbb{A}_{X}\)._ Proof.: Proposition 4.10 states that if \(A(t)=\partial_{2}Z(t,0)\) for some groupoid morphism, then there is an evolution operator associated to (4.2). On the other hand, if (4.2) has an associated evolution operator, say \(\Psi\), then set \(Z:\mathbb{K}\times\mathbb{K}\to\mathbb{A}_{X}\) by \(Z(r,t)=\Psi(t+r,r)\). It is easy to see that such \(Z\) is a differentiable groupoid morphism and furthermore, by Lemma 4.5, we conclude \(A(t)=\partial_{2}Z(t,0)\).
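As a simple illustration of Definition 4.12 and Theorem 4.13, consider the scalar case \(X=\mathbb{R}\), \(\mathbb{K}=\mathbb{R}\), and a continuous function \(t\mapsto a(t)\in\mathbb{R}\). The equation \(\dot{x}=a(t)x(t)\) admits the evolution operator \(\Psi(u,v)=\exp\big{(}\int_{v}^{u}a(s)\,ds\big{)}\), and the associated map \(Z(r,t)=\Psi(t+r,r)=\exp\big{(}\int_{r}^{t+r}a(s)\,ds\big{)}\) is a differentiable groupoid morphism, since \[Z(r,t+s)=\exp\Big{(}\int_{s+r}^{t+s+r}a(\tau)\,d\tau\Big{)}\exp\Big{(}\int_{r}^{s+r}a(\tau)\,d\tau\Big{)}=Z(s+r,t)\,Z(r,s),\] and its infinitesimal generator is \(\partial_{2}Z(t,0)=a(t)\), recovering the original equation.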
2309.10967
Concentration Dependence of Elastic and Viscoelastic Properties of Aqueous Solutions of Ficoll and Bovine Serum Albumin by Brillouin Light Scattering Spectroscopy
The cellular environment is crowded with macromolecules of different shapes and sizes. The effect of this macromolecular crowding has been studied in a variety of synthetic crowding environments: two popular examples are the compact colloid-like Ficoll macromolecule, and the globular protein bovine serum albumin (BSA). Recent studies have indicated a significant component of bound or surface-associated water in these crowders reduces the available free volume. In this work, Brillouin light scattering experiments were performed on aqueous solutions of Ficoll 70 and Ficoll 400 with concentrations ranging from 1 wt% to 35 wt% and BSA with concentrations of 1 wt% to 27 wt%. From the dependence of spectral peak parameters on polymer concentration, we determined fundamental solution properties: hypersound velocity, adiabatic bulk modulus and compressibility, apparent viscosity, and hypersound attenuation. Existing theory that ignores intermolecular interactions can only capture the observed linear trends in the frequency shift up to a threshold concentration, beyond which a quadratic term accounting for intermolecular interactions is necessary. This likely indicates a transition from the dilute to semi-dilute regime. In the Ficoll solutions (but not BSA) we see evidence for a central mode, with a characteristic relaxation time of 20 ps, that we attribute to exchange of the bound water.
Stephen J. Spencer, Venketesh Thrithamara Ranganathan, Anand Yethiraj, G. Todd Andrews
2023-09-19T23:40:30Z
http://arxiv.org/abs/2309.10967v1
Concentration Dependence of Elastic and Viscoelastic Properties of Aqueous Solutions of Ficoll and Bovine Serum Albumin by Brillouin Light Scattering Spectroscopy ###### Abstract The cellular environment is crowded with macromolecules of different shapes and sizes. The effect of this macromolecular crowding has been studied in a variety of synthetic crowding environments: two popular examples are the compact colloid-like Ficoll macromolecule, and the globular protein bovine serum albumin (BSA). Recent studies have indicated a significant component of bound or surface-associated water in these crowders reduces the available free volume. In this work, Brillouin light scattering experiments were performed on aqueous solutions of Ficoll 70 and Ficoll 400 with concentrations ranging from 1 wt% to 35 wt% and BSA with concentrations of 1 wt% to 27 wt%. From the dependence of spectral peak parameters on polymer concentration, we determined fundamental solution properties: hypersound velocity, adiabatic bulk modulus and compressibility, apparent viscosity, and hypersound attenuation. Existing theory that ignores intermolecular interactions can only capture the observed linear trends in the frequency shift up to a threshold concentration, beyond which a quadratic term accounting for intermolecular interactions is necessary. This likely indicates a transition from the dilute to semi-dilute regime. In the Ficoll solutions (but not BSA) we see evidence for a central mode, with a characteristic relaxation time of 20 ps, that we attribute to exchange of the bound water. pacs: Valid PACS appear here ## I Introduction Physical systems consisting of liquid water and macromolecules are ubiquitous. The fluid medium inside a biological cell is an aqueous solution that consists of macromolecules of different sizes and shapes which occupy a significant volume (typically assumed to be 30 - 40% of the cell) [1]. It has been understood for more than two decades that the crowded macromolecular environment can affect biochemical reactions within the cell [2]. Any volume other than the water volume is inaccessible to other molecules, and this excluded volume affects molecular structure, motions and chemical kinetics. In addition, for compact molecules with internal bound water, the accessible water volume will be less than the total water volume. Thus, the simplest picture of macromolecular crowding is entropic: the macromolecules and the inaccessible water reduce the free volume and increase the excluded volume. While there is an increasing realization of the role of other non-specific interactions [3], experimental model systems have focused on aqueous solutions with a simple crowder such as a polysaccharide (_e.g._, the compact Ficoll macromolecule or the chain-like dextran) or a globular protein (_e.g._, bovine serum albumin) [4; 5; 6]. Structure and dynamics in these crowders has been reported extensively [7; 8; 9; 10; 11]. More realistic, heterogeneous crowding media, such as bacterial cell lysate, have been employed as well [12; 13]. In recent work, Ranganathan _et al._[14] have provided quantitative evidence, via pulsed field gradient nuclear magnetic resonance spectroscopy, for water that is bound and thus inaccessible to other molecules, implying that the true excluded volume is larger than what is usually inferred. Brillouin light scattering spectroscopy has been recognized as a niche technique for probing the mechanical and viscous properties in heterogeneous biomaterials [15]. 
Brillouin scattering experiments on aqueous macromolecular solutions and hydrogels typically report the dependence of spectral peak parameters and derived elastic and viscoelastic properties on solute concentration [16; 17; 18]. These parameters and properties are usually measured over a large concentration range and include Brillouin peak frequency shift and linewidth, hypersound velocity and attenuation, apparent viscosity, and various elastic and viscoelastic moduli. Attempts at using existing theoretical models to describe the concentration dependence of these quantities, however, have met with limited success. For example, Brillouin studies of so-called "aqueous biorelevant solutions" [16] reveal a common dependence of peak frequency shift on solute concentration for those solutions containing macromolecules (lysozyme, bovine serum albumin, and gelatin) up to 40 wt%, implying that this behaviour is largely independent of the nature of the solute. Application of the Reuss effective medium model for a two-component system resulted in an equation for the shift as a function of concentration that agreed well with most experimental data. A theoretical expression relating peak linewidth to concentration that incorporated this equation, however, was unable to fully reproduce the observed trends. Moreover, the sparsity and relatively large concentration interval (\(\sim 10\) wt%) between adjacent data points means that possible discontinuities in the observed trends, such as might occur in proximity to the polymer overlap concentration, were not accounted for in the model. In related Brillouin scatter ing work on collagen hydrogels [19] the storage modulus of collagen was extracted by fitting an expression for the concentration dependence of the effective storage modulus from the Voigt model to a high-hydration subset of the full experimental dataset for the concentration dependence of the gel storage modulus. It was stated that the two-component Voigt model gave a better fit than the Reuss model used in previous works [19]. No model was advanced to describe the data trend over the entire measured concentration range. Moreover, Brillouin scattering studies of cross-linked polyvinyl alcohol hydrogels [20] found only crude qualitative agreement between the observed dependence of Brillouin peak frequency shift and linewidth on gel network volume fraction and that calculated from a theory incorporating frictional damping and coupling between elastic waves in polymer network and fluid [21]. In this paper we report on Brillouin light scattering studies of aqueous solutions of polymers Ficoll 70, Ficoll 400, and Bovine Serum Albumin (BSA). Solution elastic and viscoelastic properties were determined over the dilute and semi-dilute ranges from the dependence of spectral peak parameters on solute (polymer) concentration. The observed trends in these properties are not consistent with existing theory but instead were found to be well-described by expressions derived from a new model relating hypersound frequency and solute concentration. The sensitivity of the Brillouin scattering technique to changes in structure and water-macromolecule dynamics also allowed the polymer overlap concentration and the relaxation time associated with the hydration of Ficoll molecules to be determined. 
The extent of the hydration, manifested in the unexpectedly low overlap concentration, provides independent confirmation of recent results that suggest that the effective volume fraction occupied by hydrated macromolecules in solution is much larger than expected for the bare unhydrated variety. In characterizing the viscoelastic properties of commonly-used experimental model systems over a wide solute concentration range and advancing a model that incorporates interparticle interaction, this study provides important new insight into the physics of macromolecular crowding and biomacromolecular systems in general. ## II Experimental details ### Sample Preparation Solutions of Ficoll-70 (\(m=70\) kDa) and Ficoll-400 (\(m=400\) kDa) were prepared under identical conditions by dissolving Ficoll powder in 99.9% pure D\({}_{2}\)O at room temperature in small glass vials. A Scientific Industries Vortex Genie was used to promote initial mixing of D\({}_{2}\)O and Ficoll, and subsequent homogenization was performed using a Fisherbrand Homogenizer 850 with a rod diameter of 7 mm and speed of 11,000 RPM. The homogenization sequence, which was performed five times per sample, consisted of three minutes of mixing followed by one minute of settling. The resulting solutions were clear and colourless and had concentrations ranging from 1 wt% to 35 wt%, with a noticeable increase in viscosity for those at the upper end of this range. BSA solutions with concentrations of 1% w/w to 27% w/w were prepared by dissolving BSA powder in 0.1 M phosphate buffer solution with pH of 7.0 at room temperature and subjecting them to the same mixing and homogenization procedure as used for the Ficoll solutions. Solutions with concentrations \(<20\) % w/w were clear and colourless while those with concentrations higher than this value were somewhat cloudy. ### Brillouin Light Scattering Spectroscopy #### ii.2.1 Apparatus Brillouin light scattering experiments were performed under ambient conditions using a 180\({}^{\circ}\) backscattering geometry. The incident light source was a Nd:YVO\({}_{4}\) solid state laser with an emission wavelength of 532 nm and output power of 1.66 W. To minimize Fresnel reflection losses, a half-wave plate was used to rotate the plane of polarization from vertical (\(s\)-polarized) to horizontal (\(p\)-polarized). Neutral density filters placed in the beam path were used to reduce the power level at the sample to \(\sim 100\) mW. Light was focused on samples with a 5 cm lens of \(f\)-number 2.8. Scattered light was collected and collimated by the same lens and subsequently focused by a 40 cm focal length lens onto the 450 \(\mu\)m entrance pinhole of a six-pass tandem Fabry-Perot interferometer for spectral analysis. The interferometer had a free spectral range of 15 GHz and a finesse of \(\sim 100\). A schematic of this experimental setup can be found in Refs. [22; 23]. #### ii.2.2 Quantities Derived from Spectra Elastic and viscoelastic properties of the Ficoll solutions were deduced from Brillouin spectra. Hypersound velocity was determined using the well-known Brillouin equation applied to the case of a backscattering geometry, \[v=\frac{f\lambda}{2n}, \tag{1}\] where \(f\) is the measured Brillouin peak frequency shift, \(\lambda\) is the incident light wavelength, \(n=n(x)\) is the concentration-dependent solution refractive index, and \(x\) is the concentration in weight percent. 
The latter was obtained for each Ficoll 70 and Ficoll 400 solution using \(\partial n/\partial x\) relationships provided in the literature [24]; however, this was found to be constant at \(n=1.33\) over the full concentration range. This, however, was not the case for solutions of BSA. At higher concentrations of BSA, solutions became noticeably cloudy. As such, refractive indices for BSA solutions were calculated using the relationship \(\partial n/\partial C=0.190\) mL/g (with C expressed as g/mL), found by Tumolo et al. [25]. Knowledge of the hypersound velocity allowed the adiabatic bulk modulus to be found from \[B=\rho v^{2}, \tag{2}\] where \(\rho\) is the mass density of the solution. The density of the solution showed little variation over the full concentration range, and was approximated as a constant 1110 kg/m\({}^{3}\) for the purposes of fitting. The adiabatic compressibility, \(\kappa\), was also determined using the fact that \(\kappa=1/B\). The apparent viscosity, \(\eta\), and hypersound attenuation, \(\alpha\), in the solution were deduced from Brillouin spectral data via \[\eta=\frac{4}{3}\eta_{s}+\eta_{b}=\frac{\rho v^{2}\Gamma_{B}}{4\pi^{2}f^{2}}= \frac{\rho\lambda^{2}\Gamma_{B}}{16\pi^{2}n^{2}} \tag{3}\] and \[\alpha=\frac{\pi\Gamma_{B}}{v}, \tag{4}\] respectively [26], where \(\Gamma_{B}\) is the Brillouin peak full width at half-maximum (FWHM), \(\eta_{s}\) and \(\eta_{b}\) are the shear and bulk viscosities, and the other quantities are as already defined. ## III Results and Discussion ### Spectra #### iii.1.1 General Features and Mode Assignment Figure 1 shows a series of Brillouin spectra collected from the Ficoll 70 solutions and BSA solutions. Spectra of the Ficoll 400 solutions are similar to Ficoll 70 spectra in all respects (see Figure S1 in the Supplementary Material). A single set of Brillouin peaks was observed in all spectra, with a frequency shift ranging from \(\sim 6.8\) GHz to \(\sim 8.4\) GHz. Although not obvious from the spectra shown in Figure 1, there is also a broad, weak peak at the center of the spectra obtained from Ficoll 70 and Ficoll 400 solutions with solute concentrations \(\geq 20\%\). The presence of this peak was inferred from the fact that the baseline intensity in the region between the central elastic peak and the Brillouin doublet in spectra of high concentration solutions was noticeably higher than that in spectra of the low concentration solutions and also higher than that on the high frequency shift side of the Brillouin doublet. An example of this for Ficoll 70 solutions with 3% and 30% concentration is shown in Figure 2A. In contrast, no central peak was discernible in spectra of the BSA solutions. The Brillouin doublet and the peak in the center of the spectra of high concentration Ficoll solutions have different origins. The former is assigned to the usual longitudinal acoustic mode propagating through the solution based on the similarity of its frequency shift to that of the corresponding mode in water [27]. The central peak was attributed to a diffusive relaxation mode based on its zero frequency shift and the fact that the width of the peak showed no significant change with changing concentration [28]. These properties are consistent with other Brillouin and Rayleigh scattering studies which have observed this relaxation mode in macromolecular solutions [17; 18; 29; 30]. Figure 1: Normalized Brillouin spectra collected from solutions of (A) Ficoll 70 and (B) BSA of various concentrations (wt%). L represents a longitudinal bulk mode. 
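As an illustration of how the quantities defined in Equations 1-4 follow from the measured peak parameters, a minimal numerical sketch is given below; the input values are representative of the ranges encountered in this work but are placeholders rather than measured data.

```python
import numpy as np

# Illustrative inputs (representative of the ranges in this work, not measured values)
f = 7.5e9        # Brillouin peak frequency shift (Hz)
Gamma_B = 1.0e9  # Brillouin peak FWHM (Hz), instrumental contribution already removed
lam = 532e-9     # incident laser wavelength (m)
n = 1.33         # solution refractive index
rho = 1110.0     # solution mass density (kg/m^3)

v = f * lam / (2 * n)                                  # Eq. (1): hypersound velocity (m/s)
B = rho * v**2                                         # Eq. (2): adiabatic bulk modulus (Pa)
kappa = 1.0 / B                                        # adiabatic compressibility (1/Pa)
eta = rho * lam**2 * Gamma_B / (16 * np.pi**2 * n**2)  # Eq. (3): apparent viscosity (Pa s)
alpha = np.pi * Gamma_B / v                            # Eq. (4): hypersound attenuation (1/m)

print(f"v = {v:.0f} m/s, B = {B/1e9:.2f} GPa, "
      f"eta = {eta*1e3:.2f} mPa s, alpha = {alpha:.2e} 1/m")
```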
#### iii.1.2 Extraction of Peak Parameters Longitudinal acoustic mode peak frequency shift and linewidth were obtained by fitting Lorentzian functions to the Brillouin peaks, with the latter being processed prior to plotting by subtraction of the 0.3 GHz instrumental contribution to the best-fit linewidth. To obtain an estimate of the central mode linewidth, \(\Gamma_{C}\), it was first necessary to remove data from regions of the spectrum containing other peaks so as to minimize their impact on any subsequent fit. This included the region containing the central elastic peak and other data contained within the central Fabry-Perot interferometer control window, \(\pm 2\) GHz from the center of the spectrum, and that containing the two longitudinal mode peaks, which was typically a \(\sim 6\) GHz range around the longitudinal peak. The remaining data was fitted to a Lorentzian function (see Fig. 2A), revealing a central mode that is very weak and exceptionally wide, with a FWHM of \(\sim 14\) GHz. As can be seen by the high degree of overlap between the experimental data and the dotted curve in Fig. 2A, the addition of this Lorentzian to the best-fit Lorentzians for the longitudinal mode peaks results in a function that well represents the original Brillouin spectrum (without the central elastic peak). ### Longitudinal Acoustic Mode: Elastic and Viscoelastic Properties #### iii.2.1 Dependence of Brillouin Peak Parameters on Solute Concentration Figure 3 shows longitudinal acoustic mode peak frequency shift and linewidth versus concentration for the Ficoll 70, Ficoll 400, and BSA solutions. For all solutions, both quantities increase monotonically with increasing concentration, with the linewidth being much more sensitive than the shift to changes in concentration. While the frequency shift for each solution increases by \(\sim 20\%\) over the range probed, the peak FWHM for the Ficoll and BSA solutions increases by a factor of \(\sim 4\) and \(\sim 2\), respectively. For a given concentration, the frequency shift for the Ficoll 400 solution is slightly larger than that for the Ficoll 70 solution. At low concentrations, the peak linewidths obtained from the Ficoll 70 and Ficoll 400 solutions are nearly equal, while for concentrations in excess of \(\sim 15\) wt% the linewidths begin to diverge, with that for Ficoll 400 being greater than that for Ficoll 70 over this range. This subtle change in the relationship between concentration and frequency or linewidth may represent the overlap concentration at which the system physics changes as it transitions from the dilute to the semi-dilute regime. The solute concentration dependence of the Brillouin peak frequency shift of some so-called biorelevant aqueous solutions has been well fit for concentrations up to \(x\sim 40\) wt% by the relation [16] \[f_{R}(x)=\frac{f_{w}}{\sqrt{1-x+xv_{w}^{2}/v_{s}^{2}}}=\frac{f_{w}}{\sqrt{1+ \alpha x}}, \tag{5}\] where \(\alpha\coloneqq v_{w}^{2}/v_{s}^{2}-1\). Also, the concentration dependence of linewidth has been well fit up to \(x\sim 20\) wt% by \[\Gamma_{R}^{B}(x)=Af_{R}(x)^{2}+Bx, \tag{6}\] where \(A\) and \(B\) are constants and \(f_{w}\), \(v_{w}\), and \(v_{s}\) are the Brillouin peak frequency shift for water, and the hypersound velocities of water and the solute, respectively. 
The basis of Equation 5 is the two-component Reuss model for which the effective elastic modulus \(M\) of the solution is given by \[\frac{1}{M}=\frac{\rho\mu_{s}}{\rho_{s}}\left[\frac{1}{M_{s}}-\frac{1}{M_{w}} \right]+\frac{1}{M_{w}}, \tag{7}\] where \(\mu_{s}=m_{s}/(m_{s}+m_{w})\) is the solute mass fraction and \(M_{s}\) and \(M_{w}\) are the elastic moduli of the solute and water, respectively. If the density of the solution \(\rho\) equals that of the solute \(\rho_{s}\), then Equation 7 simplifies to \[\frac{1}{M}=\frac{\mu_{s}}{M_{s}}+\frac{1-\mu_{s}}{M_{w}}. \tag{8}\] Equation 5 can be obtained from Equations 1 and 8 with the approximation of fixed density and refractive index. Fitting Equation 5 to the \(\{f,x\}\) data for the Ficoll and BSA solutions with \(v_{w}^{2}/v_{s}^{2}\) as an adjustable parameter, however, yielded best-fit relations \(f(x)\) that show only mediocre agreement with experiment, overestimating \(f\) at low concentrations and underestimating it at higher concentrations over the range probed. Substitution of these "best-fit" expressions for \(f(x)\) into Equation 6 gives similar quality fits to the \(\{\Gamma_{B},x\}\) data shown in Figure 3. Figure 2: (A) Brillouin spectrum of an aqueous solution of Ficoll 70 with a solute concentration of 30%. Solid lines - Best-fit Lorentzian functions for central peak and Brillouin peaks. Dotted line - sum of central peak and Brillouin peak best-fit Lorentzians. (B) Anti-Stokes Brillouin peaks for aqueous Ficoll solutions with solute concentrations of 3% and 30%. The 30% concentration peak is shifted horizontally from 7.96 GHz to 6.86 GHz so that the peaks overlap to highlight the slight asymmetry and significantly higher baseline intensity of the peak for the 30% concentration solution compared to that for the 3% solution on the low frequency shift side. In addition, a fit of Equation 5 to only the low concentration data yields an excellent fit for \(x\leq 10\%\) but the resulting function does not describe the higher concentration data (see dotted curve in Figure 3). This sub-optimal agreement between theory and experiment is likely due to the model not properly accounting for molecular crowding arising from hydration of Ficoll or BSA as the solution concentration increases, the onset of which occurs at the transition from the dilute to semi-dilute regime. This crowding leads to increased polymer-polymer interaction in the solution. As such, the hypersound frequency can no longer be described by Equation 5; there are also contributions from volumetric and entropic changes due to polymer-polymer interactions, specifically the onset of contact and the possible formation of loosely packed regions of solute molecules. The failure of the above model to accurately reproduce the concentration dependence of \(f\) and \(\Gamma_{B}\) for the Ficoll and BSA solutions, coupled with the lack of other appropriate theoretical models, led us to propose a new model to describe the current experimental data. This model assumes that \(f\) and \(\Gamma_{B}\) both increase smoothly with concentration according to \[f(x)=f_{R}(x)+A_{1}x^{2}, \tag{9}\] where \(f_{R}\) is given by Equation 5, and \[\Gamma_{B}(x)=Af(x)^{2}+Bx+A_{2}x^{2}, \tag{10}\] where the \(A_{i}\) are fit parameters and \(f(x)\) is given by Equation 9. Furthermore, the second order term in Equation 10, \(A_{2}x^{2}\), is a phenomenological extension to van't Hoff's law, as explained below [31, 32]. 
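A minimal sketch of how Equation 9 can be fit to shift-versus-concentration data with a standard nonlinear least-squares routine is shown below; the parameter values and the synthetic "data" are illustrative placeholders only, not the measurements reported in this work, and Equation 10 can be fit in the same way once the best-fit \(f(x)\) is in hand.

```python
import numpy as np
from scipy.optimize import curve_fit

# Eq. (9): f(x) = f_w / sqrt(1 + alpha*x) + A1*x^2, with x in wt%.
def f_model(x, f_w, alpha, A1):
    return f_w / np.sqrt(1.0 + alpha * x) + A1 * x**2

# Synthetic, illustrative "measurements" generated from the model plus noise
rng = np.random.default_rng(0)
x = np.linspace(1, 35, 12)                    # concentrations (wt%)
f_obs = f_model(x, 6.8, -0.004, 4e-4) + rng.normal(0, 0.02, x.size)  # GHz

popt, pcov = curve_fit(f_model, x, f_obs, p0=[6.8, -0.004, 1e-4])
f_w_fit, alpha_fit, A1_fit = popt
print(f"f_w = {f_w_fit:.3f} GHz, alpha = {alpha_fit:.4f}, A1 = {A1_fit:.2e} GHz/wt%^2")
```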
From this, solute molecules in a dilute solution may be treated as an ideal gas. As such, the linewidth is proportional to solution density, as expressed in the equation \[\Gamma_{B}(x)\propto 1+c_{1}\rho+c_{2}\rho^{2}+c_{3}\rho^{3}+\cdots. \tag{11}\] At low concentrations, corresponding to lower density, the higher order terms are insignificant. At higher concentrations, however, these higher order terms begin to dominate [32]. The second order term of the virial equation is attributed to interactions between solute molecules, the contribution of which becomes more important at higher solute concentrations. Figure 3 shows best fits of Equations 9 and 10 to the \(\{f,x\}\) and \(\{\Gamma_{B},x\}\) data for the Ficoll and BSA solutions. The fits reproduce the trends of the experimental data very well. For reference, the best-fit equations are given in Table 2. Although not obvious, there is also what could be a subtle change in behaviour which is visible in both the frequency shift and linewidth, and which is localized around \(\sim 15\) wt% concentration for both Ficoll solutions (denoted by the arrow in the inset in Figure 3, which shows the change in behaviour of the Ficoll 70 peak width). The subtle change in behaviour of frequency and linewidth around \(\sim\)15% solute concentration is also observed in the BSA data. This change in behaviour may also be indicative of the overlap concentration, and the transition from the dilute to semi-dilute regime. Figure 3: Brillouin peak frequency shift (A) and linewidth (B) as a function of concentration for aqueous solutions of Ficoll 70, Ficoll 400, and BSA. Solid lines represent best fits of \(f(x)=f_{R}(x)+A_{1}x^{2}\) and \(\Gamma_{B}(x)=Af(x)^{2}+Bx+A_{2}x^{2}\). Dashed lines represent the frequency relationship provided by Equation 5 for Ficoll 70 and BSA. Inset in (B) is the linewidth of Ficoll 70. #### iii.2.2 Hypersound Velocity Figure 4 (A) shows the evolution of hypersound velocity with concentration for the Ficoll 70, Ficoll 400, and BSA solutions. There is a consistent increase in velocity with increasing solute concentration for all solutions, as expected from the increase in Brillouin peak frequency shift with increasing concentration. As with the frequency data, the hypersound velocity in Ficoll 400 solutions is systematically higher than in Ficoll 70 solutions. Furthermore, while the fact that the velocity of BSA solutions is larger than that of Ficoll solutions can be attributed to the different solvent, the change in velocity with respect to concentration was much lower for BSA than for both Ficoll solutions. Figure 4 (A) also shows curves for velocity as a function of concentration which were derived by substituting the best-fit equation for frequency as a function of concentration into Equation 1. In the dilute regime, the hypersound velocity of macromolecular solutions can be expressed in a manner similar to Equation 5, as hypersound velocity is linearly proportional to phonon frequency [16]. However, as the concentration increases beyond 10%, we begin to see a significant increase in packing of solute molecules in solution due to crowding effects. This is demonstrated by the deviation of Brillouin peak frequency and linewidth from previous theory. It is also important to note that previous studies have shown that solute hydration leads to the volume fraction of solute being much larger than expected for bare (no bound water) solute molecules at similar weight concentrations [14]. 
This concept is further supported by Brillouin scattering results in this work, as the onset of polymer-polymer interactions was seen at concentrations as low as 10% by weight. This increase in macromolecular packing leads to an increase in interactions between solute molecules [33]. This is the transition between the dilute and the semi-dilute limits. By using the \(f_{R}(x)\) expression of Equation 9, the hypersound velocity for Ficoll 70, Ficoll 400, and BSA was calculated. Values for these velocities are shown in Table 1. Velocities calculated for solutes used in this work ranged between \(\sim 2300\) m/s and \(\sim 2900\) m/s, which is comparable to results from previous studies on macromolecular solutions [16]. #### iii.2.3 Bulk Modulus & Adiabatic Compressibility Figure 4 (B) shows that the bulk modulus for the Ficoll 70, Ficoll 400, and BSA solutions increases with increasing solute concentration. This trend was expected not only from the relationship between bulk modulus and hypersound velocity given by Equation 2, but also from an intuitive perspective: with an increase in concentration, a greater proportion of the available volume is occupied by the solute. This leads to the solution being less compressible and therefore to a higher bulk modulus. For low concentrations \(B\) increases approximately linearly with \(x\) as predicted by Equation 7 in the limit of small \(x\). While there is no obvious change in trend, at \(x\sim 15\) wt% the slope of \(B(x)\) is noticeably larger and continues to increase with increasing concentration. Following the reasoning in Sec. III.B.1, this is caused by the solution transitioning from dilute to semi-dilute. The increased interaction between solute molecules (_e.g._, entanglement) results in a decrease in compressibility and, consequently, an increase in bulk modulus. This deviation of the solution compressibility from a volumetric average of the compressibilities of the constituents (_i.e._, the Reuss model) can be attributed to entropic and volume contributions due to these interactions as discussed in Sec. III.B.1 [33]. #### iii.2.4 Apparent Viscosity The apparent viscosity of the Ficoll and BSA solutions for a range of concentrations in the dilute and semi-dilute regimes is shown in Figure 4 (C). The \(\eta(x)\) curves in this figure were obtained from Equation 3 using \(\Gamma_{B}(x)\) given by Equation 6 with the best-fit \(f(x)\) from Equation 9. As can be seen, the apparent viscosity values for the two Ficoll solutions are roughly equal at low concentrations but begin to diverge from each other at \(x\sim 15\) wt%, behaviour similar to that observed for relative viscosity in rheology studies of aqueous Ficoll 70 and Ficoll 400 solutions [14]. This change in behaviour occurs approximately at a concentration corresponding to the dilute-to-semi-dilute transition. Within this region, the contribution to viscosity from Ficoll becomes more dominant than that of D\({}_{2}\)O. It is important to note that previous rheology studies have shown that the intrinsic viscosities of Ficoll 70 and Ficoll 400 differ by \(\sim 12\%\) [14]. Furthermore, the range of apparent viscosities presented in this work is significantly different from the range of shear viscosities shown in previous rheology studies. While the very low concentration viscosities are comparable, the shear viscosities presented in previous rheology work are larger than the apparent viscosities derived from Brillouin scattering by a factor of \(\sim 100\) [14]. 
At low concentrations, the apparent viscosity of BSA solutions is slightly larger than that of either Ficoll solution. The curve of Equation 3 for BSA, however, is considerably less steep than that of either Ficoll solution, demonstrating a lower intrinsic viscosity for BSA. #### III.B.5 Hypersound Attenuation Figure 4 (D) shows hypersound attenuation for all solutions at various solute concentrations, determined from Equation 4. The curves that appear in this figure were obtained in a manner analogous to those for apparent viscosity. The attenuation increases monotonically with increasing concentration for all solutions. As was the case for apparent viscosity, hypersound attenuation in the Ficoll 70 solution is approximately equal to that in the Ficoll 400 solution at low concentrations. As concentration increases, the solutions transition from the dilute to the semi-dilute regime and the attenuation values for the two solutions diverge. This is an indication that hypersound attenuation is much more strongly correlated with D\({}_{2}\)O than with Ficoll in the dilute limit. In the semi-dilute limit, the dependence on Ficoll concentration becomes more important. Furthermore, similar to viscosity, at low concentrations, where attenuation is more closely related to the solvent, the attenuation of BSA solutions is greater than that of either Ficoll solution. Once again, however, the concentration dependence for the BSA solutions is less sharp than that for Ficoll. \begin{table} \begin{tabular}{c c} \hline \hline Solute & Velocity (m/s) \\ \hline Ficoll 70 & 2320 \\ Ficoll 400 & 2850 \\ BSA & 2580 \\ \hline \hline \end{tabular} \end{table} Table 1: Hypersound velocities for the Ficoll 70, Ficoll 400, and Bovine Serum Albumin (BSA) solutes, calculated using the equation \({v_{w}}^{2}/{v_{s}}^{2}-1=\alpha\), where \(\alpha\) is the fit parameter in the denominator of \(f_{R}(x)\) from Equation 5. ### Central Mode: Relaxation #### III.C.1 Origin Figure 2 shows the central peak present in spectra of high concentration Ficoll solutions. Such peaks were not observed in BSA spectra. This peak was observed in the spectra of solutions with \(x\geq 20\%\), from which it was noted that its intensity increases with increasing concentration for both Ficoll 70 and Ficoll 400 solutions. The width of this central peak, however, shows no systematic change with concentration. This central peak was attributed to a relaxation mode due to D\({}_{2}\)O molecules within the hydration shells of Ficoll, a phenomenon which has been observed in Brillouin and Rayleigh scattering experiments on other high concentration macromolecular solutions [17; 18; 29; 36]. This phenomenon occurs when D\({}_{2}\)O molecules briefly bind to the hydration shell of the Ficoll molecules before returning to the bulk solvent. This relaxation time is, therefore, also considered the residence time for solvent molecules within the hydration shell [37]. #### III.C.2 Relaxation Time Relaxation times associated with the central peaks were determined from the linewidths using the relationship \(\tau=1/(\pi\Gamma_{C})\) [38]. These times were found to be \(22\pm 2\) ps and \(18\pm 2\) ps for the Ficoll 70 and Ficoll 400 solutions, respectively, and are representative of the duration that D\({}_{2}\)O molecules spend within the hydration shells of Ficoll before returning to the bulk solution. Table 4 compares these times to hydration relaxation times for other macromolecular solutions.
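As a quick check on the scale of this central feature, the relation \(\tau=1/(\pi\Gamma_{C})\) can be inverted; the linewidth value below is simply back-calculated from the 22 ps result quoted above and is not an independently reported number:

\[\Gamma_{C}=\frac{1}{\pi\tau}\approx\frac{1}{\pi\,(22\times 10^{-12}\ \text{s})}\approx 1.4\times 10^{10}\ \text{Hz}\approx 14\ \text{GHz}.\]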
As Table 4 shows, the relaxation times for both varieties of Ficoll solution are lower than those for aqueous solutions of DNA or hyaluronic acid [18; 29]. This is a reasonable result because relaxation time is directly proportional to polymer chain length, and Ficoll is spherical in shape [14] while DNA and hyaluronic acid are long chain polymers [28]. It should also be noted that Ranganathan et al. [14] found that the NMR relaxation rates had reached their saturation value by a concentration of 35%, with Ficoll 400 having higher rates than Ficoll 70. This is consistent with the smaller hydration timescale observed here. Figure 4: Hypersound velocity (A), solution bulk modulus (B), apparent viscosity (C), and hypersound attenuation (D) as a function of concentration in aqueous solutions of Ficoll 70, Ficoll 400, and BSA. Solid lines are curves based on the frequency fits from Figure 3 and Equations 1, 2, 3, and 4, respectively. ### System Properties and Dynamics Table 2 contains a summary of the dependence of hypersound frequency and Brillouin peak linewidth on concentration given by Equations 9 and 10. All elastic and viscoelastic properties explored in the present study increase monotonically with increasing solute concentration. These increases can be attributed to an increase in the packing of solute molecules within the solution. For aqueous solutions of Ficoll or BSA, there are two components which impact the volume fraction. The first is the natural addition of solute molecules to the solution as concentration increases. The second component to consider is the hydration of solute molecules. This process is seen to primarily occur in higher concentration solutions, but makes a large contribution to the volume fractions of Ficoll 70 and Ficoll 400. A change in volume fraction due to hydration also occurs in BSA, but to a lesser degree than in Ficoll. As such, there is a much more dramatic increase in the volume fraction of these macromolecules compared to solutions where there is no hydration of solute molecules. Because of this, the Ficoll solutions used in this study range from dilute solutions to nearly maximum packing of randomly distributed spheres, and the BSA solutions also experience a very high degree of packing at the highest concentrations. From the Brillouin scattering results, it is apparent that BSA experiences less packing compared to Ficoll. Table 2 shows fit equations for frequency versus solute concentration. The second order term for BSA, corresponding to polymer-polymer interactions, is \(\sim\)20% lower than that of either type of Ficoll. This increase in volume occupied by globular molecules results in a reduction in solution compressibility and a corresponding increase in bulk modulus. Furthermore, the increase in solute volume fraction makes the aqueous solution more viscous, and the packing of spheres increases the attenuation of sound within the solution. Turning to the effects of hydration, above 15% concentration a relaxation mode was observed in the Brillouin spectra of both Ficoll 70 and Ficoll 400 solutions. This relaxation was caused by hydration of Ficoll, specifically by an exchange of D\({}_{2}\)O between the bulk solvent and the hydration shell of Ficoll. The appearance of this relaxation due to hydration further validates the larger solute volume fraction due to hydration, compared to bare solute at similar mass concentrations, by demonstrating a fundamental change in the mechanics of the solution as the concentration surpasses the overlap concentration.
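To make the fitted expressions summarized in Table 2 below concrete, the short script that follows evaluates the Ficoll 70-D\({}_{2}\)O best-fit equations over the measured concentration range. The numerical coefficients are taken directly from Table 2; the units assumed here (GHz for \(f\) and \(\Gamma_{B}\), wt% for \(x\)) and the use of Python/NumPy are illustrative assumptions only and are not part of the original analysis.

```python
import numpy as np

# Best-fit forms from Table 2 for Ficoll 70 in D2O:
#   f(x)       = f_w / sqrt(1 - alpha*x) + A1*x**2    (Eq. 9, with f_R(x) = f_w/sqrt(1 - alpha*x))
#   Gamma_B(x) = B*x + A2*x**2 + A*f(x)**2             (Eq. 10)
# x is the solute concentration; units are assumed to be wt% and GHz.
f_w, alpha, A1 = 6.80, 0.00653, 0.000412
B, A2, A = 0.0123, 0.000566, 0.0108

def f(x):
    """Brillouin peak frequency shift as a function of concentration."""
    return f_w / np.sqrt(1.0 - alpha * x) + A1 * x**2

def gamma_B(x):
    """Brillouin peak linewidth as a function of concentration."""
    return B * x + A2 * x**2 + A * f(x)**2

for x in np.linspace(0.0, 35.0, 8):
    print(f"x = {x:5.1f} wt%   f = {f(x):5.2f}   Gamma_B = {gamma_B(x):5.3f}")
```

Both curves increase monotonically with concentration, with the quadratic (interaction) terms becoming noticeable above roughly 10-15 wt%, consistent with the discussion above.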
\begin{table} \begin{tabular}{c c c} \hline \hline Solution & Macromolecule Structure & Relaxation Time (ps) \\ \hline 35 wt\% Ficoll 70 - Present Work & Globular & 22 \(\pm\) 2 \\ 35 wt\% Ficoll 400 - Present Work & Globular & 18 \(\pm\) 2 \\ DNA [18] & Chain & 40 \\ Hyaluronic Acid [29] & Chain & 50 \\ Polyethylene glycol 600 [30] & Chain & 60 \\ \hline \hline \end{tabular} \end{table} Table 4: Hydration relaxation time for aqueous Ficoll solutions (present work) and other aqueous macromolecular solutions. \begin{table} \begin{tabular}{c c c c} \hline \hline Quantity & Base Equation & Solution & Solution-Specific Equation \\ \hline Hypersound Frequency & \multirow{3}{*}{\(f(x)=f_{R}(x)+A_{1}x^{2}\)} & Ficoll 70-D\({}_{2}\)O & \(f(x)=(6.80/\sqrt{1-0.00653x})+0.000412x^{2}\) \\ & & Ficoll 400-D\({}_{2}\)O & \(f(x)=(6.82/\sqrt{1-0.00771x})+0.000314x^{2}\) \\ & & BSA & \(f(x)=(7.45/\sqrt{1-0.00665x})+0.000253x^{2}\) \\ \hline Brillouin Linewidth & \multirow{3}{*}{\(\Gamma_{B}(x)=Af(x)^{2}+Bx+A_{2}x^{2}\)} & Ficoll 70-D\({}_{2}\)O & \(\Gamma_{B}(x)=0.0123x+0.000566x^{2}+0.0108f(x)^{2}\) \\ & & Ficoll 400-D\({}_{2}\)O & \(\Gamma_{B}(x)=0.00899x+0.000753x^{2}+0.0108f(x)^{2}\) \\ & & BSA & \(\Gamma_{B}(x)=0.00896x+0.000446x^{2}+0.0124f(x)^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Empirical equations describing the concentration dependence of the elastic and viscoelastic properties of aqueous solutions of Ficoll 70, Ficoll 400, and Bovine Serum Albumin (BSA). \begin{table} \begin{tabular}{c c c c c c} \hline \hline Solution & \multicolumn{5}{c}{Overlap Concentration (wt.\%)} \\ & PW-C & PW-Q & Ref [14] & Ref [34] & Ref [35] \\ \hline Ficoll 70 & \(\sim 15\) & \(\sim 10\) & 10-15 & - & 22.9 \\ Ficoll 400 & \(\sim 17\) & \(\sim 20\) & 10-15 & 5.33 & 13.5 \\ BSA & \(\sim 13\) & \(\sim 20\) & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 3: Estimated overlap concentration for aqueous solutions of Ficoll 70, Ficoll 400, and Bovine Serum Albumin (BSA) obtained in the present work and in previous studies. PW-C: Present Work - overlap concentration estimated from the visible change in behaviour observed in the plot of frequency shift vs. solute concentration and/or FWHM vs. solute concentration. PW-Q: Present Work - overlap concentration estimated from the divergence of the previous model resulting from removing the quadratic term from Eq. 9. ## IV Conclusion In the present study, Brillouin light scattering experiments were performed on solutions of Ficoll 70 and Ficoll 400 dissolved in D\({}_{2}\)O with concentrations ranging from 1 to 35 wt%, and BSA dissolved in phosphate buffer with concentrations ranging from 0% to 27%. Brillouin spectra for all such solutions exhibited a single Brillouin peak which was attributed to a longitudinal bulk mode. The frequency shifts and linewidths of these Brillouin peaks were used to calculate hypersound velocity, attenuation, and bulk modulus, all of which exhibited an increase with increasing concentration. For the solutions studied in this work, the relationship between hypersound frequency and solute concentration cannot accurately be described by models which have been previously presented for non-interacting macromolecular solutions. A model was derived to describe the change in hypersound frequency in macromolecular solutions which incorporates solute-solute interactions. Finally, a central peak was observed in high concentration spectra for both Ficoll 70 and 400 but was not observed in BSA spectra.
This peak was attributed to a relaxation associated with the hydration of Ficoll by D\({}_{2}\)O, i.e., to the finite occupation time of D\({}_{2}\)O within the hydration shell. The widths of these central peaks were used to calculate relaxation times of \(\sim 22\) ps and \(\sim 18\) ps for Ficoll 70 and Ficoll 400 solutions, respectively. This work provides physical insight into the interaction of macromolecules and water in crowded macromolecular environments, which are of immense importance in biological systems. Further, it provides quantitative spectroscopic signatures for bound or surface-associated water. The concentration dependence of aqueous biomacromolecular solutions, together with complementary work on temperature dependence, is important because of the wide range of solution concentrations found in naturally occurring biological systems. This work also further establishes Brillouin spectroscopy as a valuable probe of the elasticity and viscoelasticity of aqueous biomacromolecular systems. ###### Acknowledgements. AY and GTA acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (RGPIN-2019-04970 and RGPIN-2015-04306, respectively).
2307.16662
Graph Structure from Point Clouds: Geometric Attention is All You Need
The use of graph neural networks has produced significant advances in point cloud problems, such as those found in high energy physics. The question of how to produce a graph structure in these problems is usually treated as a matter of heuristics, employing fully connected graphs or K-nearest neighbors. In this work, we elevate this question to utmost importance as the Topology Problem. We propose an attention mechanism that allows a graph to be constructed in a learned space that handles geometrically the flow of relevance, providing one solution to the Topology Problem. We test this architecture, called GravNetNorm, on the task of top jet tagging, and show that it is competitive in tagging accuracy, and uses far fewer computational resources than all other comparable models.
Daniel Murnane
2023-07-31T13:44:22Z
http://arxiv.org/abs/2307.16662v1
# Graph Structure from Point Clouds: Geometric Attention is All You Need ###### Abstract The use of graph neural networks has produced significant advances in point cloud problems, such as those found in high energy physics. The question of how to produce a graph structure in these problems is usually treated as a matter of heuristics, employing fully connected graphs or K-nearest neighbors. In this work, we elevate this question to utmost importance as the Topology Problem. We propose an attention mechanism that allows a graph to be constructed in a learned space that handles geometrically the flow of relevance, providing one solution to the Topology Problem. We test this architecture, called GravNetNorm, on the task of top jet tagging, and show that it is competitive in tagging accuracy, and uses far fewer computational resources than all other comparable models. ## 1 Introduction Relational neural networks such as transformers and graph neural networks (GNNs) have pushed the limits of ML performance on many tasks, and the attention mechanism has been shown to be a key ingredient for achieving these state-of-the-art (SotA) results [1]. In natural language processing for example, attention-based transformers treat sentences as graphs, where words are represented by nodes and are "fully connected" (FC) - that is, all nodes are connected to all other nodes [2]. Attention-based GNNs have also been successfully employed in high energy physics (HEP) [3; 4; 5; 6; 7; 8; 9; 10; 11; 12], a domain where data is often represented by point clouds of objects in space. The choice of how to connect point-cloud nodes into a graph is often non-obvious. The FC topology scales poorly with the complexity of the problem, possibly being prohibited by hardware constraints. Additionally, attention in many models is handled in a separate stage from the construction of the graph (the "choice of topology"), and is usually obtained as a learned function of pairs of node features (as in [13]), which can be computationally expensive. In short, if not handled carefully, an attention mechanism applied to a fully-connected point cloud scales as \(O(N_{nodes}^{2})\). In this work, we seek to address both of these hurdles - the choice of topology and the cost of attention - with a single solution. By adapting an existing architecture called GravNet [14], we propose an attention mechanism that is entirely dependent on a learned embedding space, and in doing so construct the topology of the graph in that space, at each iteration of message passing. The resulting network is called GravNetNorm as it extends GravNet to handle a subtle shortcoming of the original implementation, where the relevance of neighboring nodes was diffused through a mixture of geometry and node features. This required the use of a K-nearest-neighbor graph construction to function well. Our updated model instead learns the appropriate neighborhood size node-by-node, and in doing so uses fewer computational resources and performs with better accuracy than the original GravNet. Additionally, we apply GravNetNorm to a classic point cloud problem - jet flavor tagging - and show it is competitive with SotA methods, while taking an order of magnitude less memory, and a factor of four less time. We propose several extensions to this model that may improve accuracy further, while still retaining the learned geometric attention that makes it desirable for point cloud applications. 
## 2 Geometric Attention and the Topology Problem ### Constructing a Graph Much work has been done in applying machine learning techniques to point cloud problems [15], and in particular attention models, typically for 3D points [16; 17; 18; 19]. We take as a case-study the problem of tagging jets of reconstructed particles as coming from either a top quark or a lighter hadronic particle [20]. In this case, as in most point cloud problems, we are given only a set of points (herein called "nodes"), each with a feature vector, but without any notion of inter-node connections or relationships (herein called "edges"). To apply a GNN to these problems, there are two limiting approaches. The first is to treat the nodes as unconnected - that is, as a set. The DeepSets architecture [21] has been used in jet tagging with, at the time, SotA results [22; 23]. The other limit is to treat the point cloud as fully connected, and this is the approach taken in transformer models, such as the Particle Transformer [24], which outperforms the set-limit approach in top tagging, although with significant computational overhead. A happy medium is struck by ParticleNet [25], a model that applies a GNN to neighborhoods of K=16 neighbors and achieves very good results1. Given these three working points (unconnected, fully-connected, and sparsely connected), we therefore suggest that including graph structure benefits a model's predictive power, but that most node-pair connections are not relevant to the prediction task. Footnote 1: Apples-to-apples comparisons are subtle, as training dataset size is a large factor in performance. See [24] for a comprehensive analysis. The attention mechanism addresses exactly this hypothesis. A multilayer perceptron (MLP), applied to pairs of nodes, learns which neighboring nodes carry relevant features and up-weights them in the message passing aggregation. The catch-22 is that nodes must be connected somehow in order to apply the weighted aggregation. The question of how to form edges is what we refer to as the Topology Problem: _Given a variable-sized set of nodes (a "point cloud") and a loss function, aside from the set of optima achieved by the learned GNN MLP weights, there is also a set of optima achieved by the topology of the attention-based message passing._ For a sparsely connected GNN, at a particular message passing step, it is non-obvious which nodes are most informational or relevant to other nodes. Many construction approaches, such as that used in ParticleNet, assume the best topology to be homophilic - that is, that nodes with similar latent features should be topologically close. However, this is an arbitrary constraint, and some message steps may benefit from connections with dissimilar nodes2. The solution is partly provided by having a second, independent latent space in which the graph is constructed. This is the mechanism adopted by GravNet and GarNet, two models proposed for GNN learning on point clouds. Footnote 2: See [26] for a review of heterophily in graph neural networks. In the elegant approach suggested by the authors of GravNet, two latent spaces are learned for each node update step. The first is the hidden features to be aggregated, \(h_{i}\). The second is an embedding space vector \(\vec{s}_{i}\in S\) to be used to calculate the KNN neighborhood \(K\) and attention weights \(A_{ij}\). Both latent spaces are learned by MLPs applied independently to the input features of each GravNet convolution layer.
The aggregated node features are thus given as \[h_{i}^{\prime}=\sum_{j\in K}A(d_{ij},h_{j})\cdot\hat{h}_{j},\qquad\text{where}\qquad A(d_{ij},h_{j})=|h_{j}|_{L1}e^{-Gd_{ij}^{2}},\qquad d_{ij}=|\vec{s}_{i}-\vec{s}_{j}|_{L2} \tag{1}\] where \(G\) is a hyperparameter that acts like a gravitational constant. We define normalized hidden vectors \(\hat{h}_{i}=h_{i}/|h_{i}|_{L1}\), using the L1 norm. ### Geometry as Attention: GravNetNorm We are motivated to refine the GravNet architecture by the Topology Problem: Is the attention given to each neighboring node completely captured by the embedding space, and thus is an optimal topology constructed? Intuitively, we look at how information, or relevance, flows from one node to the next in message passing3. In the original GravNet model, nodes are influenced proportionally to both the closeness of a neighbor \(d_{ij}\) and the _size_ of a neighbor \(|h_{j}|_{L1}\). This is sketched in Figure 1(a). The latter \(|h_{j}|_{L1}\) factor means that a distant neighbor may still have an oversized influence if it is an "important" node (whatever this may mean in the problem being considered). Thus, a graph constructed according to nearness in \(S\) will not necessarily reflect the attention function, leading to important connections possibly being missed, and a suboptimal solution to the Topology Problem. Flow of information as a function of both neighbor size and distance is well-defined in a FC graph, hence the excellent performance of transformers. However, if we require a sparse topology, we need to know which neighbors to connect. In the GravNet case, they will be the connections that maximize \(\frac{\text{size}}{\text{distance}}\) - an expensive calculation needing to be made across all pairs. Instead, if the weighting of information is only a function of distance, we only need to consider neighbors within a radius \(r\) in \(S\), which can be calculated efficiently and scales well with graph size. Footnote 3: One can formalize this intuition using Layerwise Relevance Propagation (LRP) analysis. An introduction to this is given in [27] and an application to GNNs developed in [28]. The full calculation of LRP in geometry-constrained attention will be provided in an upcoming study. The solution is simple: Normalize hidden features such that all nodes have a total size of 1, and therefore constrain the GNN to pass all relevance through the geometry of \(S\) alone. That is, we take \[A(d_{ij},h_{j})=\exp(-G\frac{d_{ij}^{2}}{r^{2}}) \tag{2}\] Although a seemingly minor alteration, this produces a most-minimal implementation of a geometry-constrained attention mechanism. We also introduce a factor \(\frac{1}{r^{2}}\) in the attention function. This new hyperparameter \(r\) appears in the following training procedure: Assuming now that all attention is constrained to the neighbourhood of each node in \(S\), we should train and evaluate our model using a topology built from that neighbourhood only. That is, we construct a radius graph in each message passing step, with radius \(r\). Once this \(r\) hyperparameter is set, e.g. to \(r=1\), the gravitational constant \(G\) can then be used to tune the sparsity of the topology. E.g., a choice of \(G=3\) means that nodes at distance \(r=1\) will be given an attention weight of around \(0.05\). For the problem considered here, this seems to be the choice of \(G\) above which performance plateaus. The effect of normalizing node sizes is sketched in Figure 1(b).
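A minimal, self-contained sketch of this update is given below. It is written in plain PyTorch with a dense pairwise-distance matrix purely for clarity; the actual implementation [29] builds a sparse radius graph (e.g., with PyTorch Cluster) at each message-passing step, and the MLPs that produce \(h\) and \(\vec{s}\), as well as the exclusion of self-messages, are simplifications or assumptions made here rather than details taken from the paper.

```python
import torch

def gravnetnorm_aggregate(h, s, G=3.0, r=1.0):
    """One GravNetNorm aggregation step (Eq. 2), dense toy version.

    h: (N, F) hidden node features; s: (N, D) learned embedding coordinates.
    """
    # L1-normalize the hidden features so every node has total "size" 1,
    # forcing all relevance to flow through the geometry of S alone.
    h_hat = h / (h.abs().sum(dim=1, keepdim=True) + 1e-12)

    # Pairwise Euclidean distances d_ij in the learned space S.
    d = torch.cdist(s, s)                                   # (N, N)

    # Geometric attention A(d_ij) = exp(-G * d_ij^2 / r^2), restricted to the
    # radius-r neighborhood (the sparse topology used for message passing).
    w = torch.exp(-G * d.pow(2) / r**2)
    w = torch.where(d <= r, w, torch.zeros_like(w))
    w.fill_diagonal_(0.0)                                   # drop self-messages (assumption)

    # Weighted sum of the normalized neighbor features.
    return w @ h_hat                                        # (N, F)

# Toy usage: 5 nodes, 4 hidden features, 2-D embedding space.
h = torch.randn(5, 4)
s = torch.randn(5, 2)
print(gravnetnorm_aggregate(h, s).shape)                    # torch.Size([5, 4])
```

For realistic jet-sized point clouds, the dense \(N\times N\) distance matrix would be replaced by a sparse radius-graph construction, which is what gives the approach the favourable scaling discussed below.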
Note that the embedding space \(S\) need not be normalized, so we continue to use Euclidean distance as the learned attention function. The details of the implementation and the training procedure are available in a public Github repository [29]. Figure 1: Sketch of the GravNet attention mechanisms. The original GravNet node update propagates features \(h\) proportionally to \(|h|/d\), such that a node is affected by nearby (in embedded space \(S\)) _and_ heavy nodes. GravNetNorm constrains information to flow only through a function of distance, and therefore the geometry fully captures the attention mechanism. Thus only _nearby_ nodes need to be considered in the node update function. ## 3 Results ### Top Tagging Problem The dataset used in this study is made available in [30], which contains a set of 1.4m training jets, and 400k each of validation and test jet samples. A jet contains up to 200 constituent reconstructed particle 4-vectors, which we take as nodes. A further 17 hand-engineered features are attached to each node, taken to match those described in [25]. The task of top tagging is to classify each jet as either originating from the decay of a top quark, or from the decay of a lighter quark or gluon. We thus treat this as a graph-level binary classification problem, where the GNN must output a classification score between 0 and 1 for each graph, which is used in a binary cross entropy loss function, with no positive weighting as the dataset is well-balanced. ### Physics Performance An initial study of the physics performance of the original GravNet and GravNetNorm is presented in table 1, along with several other high-performing deep neural networks4. Both the accuracy and area under the ROC curve (AUC) are given, as well as the background rejection rate \(\epsilon_{B}^{-1}\) (where \(\epsilon_{B}\) is the false positive rate) at a working point of \(30\%\) efficiency. Footnote 4: A note on ParticleNet performance: This is the published performance. We were not able to obtain this result. The training techniques used in that work could also be used to improve GravNetNorm performance. One can see that GravNetNorm outperforms all other models, except for ParticleNet. This shortcoming in performance can be attributed to several factors. The first is that layer sizes are heuristically taken from existing models, and may not be optimally suited to this new architecture. Additionally, in training, we note significant overfitting even on the full training set of 1.2 million jets and with a dropout of 0.2. Performance plateaus above this dropout rate. As such, we propose in an upcoming work to use a larger dataset such as that created in [24], to fully explore the predictive power of GravNetNorm. One can also see in the table that the original GravNet performs well, but not equivalently with the updated variant. Further improvements are being studied, and will be presented in a near-future work, to boost the physics performance of GravNetNorm. These include dividing the spatial vector to use as a multi-headed attention (a mechanism implicit in the ParticleNet architecture), and learning dynamically the _number_ of message passing steps each node requires, just as we do with the number of topological neighbors. These will both add expressiveness without losing the geometry-constrained attention mechanism. 
### Computational Performance Inference performance is here measured by both the peak memory usage (taken as a proxy for the kind of hardware limitation these models may impose), and the average jet inference time in microseconds. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Acc & AUC & \(\epsilon_{B}^{-1}|_{30\%}\) \\ \hline P-CNN & 0.936 & 0.9837 & 1174 \(\pm\) 58 \\ PFN & 0.932 & 0.9819 & 888 \(\pm\) 17 \\ Gravnet & 0.937 & 0.9844 & 1340 \(\pm\) 69 \\ ParticleNet & 0.940 & 0.9858 & 1615 \(\pm\) 93 \\ \hline GravnetNorm & 0.939 & 0.9850 & 1438 \(\pm\) 35 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of top tagging physics performance for a selection of DNNs [20, 31, 23]. The performance of the first three models is quoted from [25], and all results are averaged across five training runs. Variation across these runs is given for background rejection, while variation of accuracy and AUC is negligible. Other high-performing taggers ([32, 24]) are not compared here as they contain features orthogonal to geometric attention, such as equivariance. Future work will seek to combine these mechanisms. Presented in table 2, we see that GravNetNorm is by far the most computationally efficient. Despite having a comparable number of parameters to other DNNs, this model has two features that allow superior performance. The first is the geometric attention mechanism. Since attention is learned node-wise in embedded space, the embedding step (i.e. the forward pass from \(h_{i}\rightarrow\vec{s}_{i}\)) scales as \(O(N_{nodes})\). We see that both GravNet variants benefit from this. Compare this with the standard edge-wise attention, such as that employed in ParticleNet, which scales as \(O(N_{edges})\). The second feature is that the topology is completely learned, so neighborhoods are only as large as required for good performance5. This allows GravNetNorm to consume fewer resources than GravNet. In particular, a radius graph construction scales naively as \(O(kN_{nodes})\) (where k is the average neighborhood size), while a KNN construction requires neighbors to be sorted and scales naively as \(O(N_{nodes}^{2})\)[33]. The particular implementation used here is from Pytorch Cluster [34], but performance can be boosted further for large point clouds with dedicated radius-graph algorithms [35]. Footnote 5: It is indeed the case that the attention varies smoothly with the geometry, so some arbitrary choice of radius still needs to be made. However, we can quantify exactly the relevance of nodes outside this radius by \(e^{-G}\), which is less than 5% for \(G=3\) Additionally, K values are set arbitrarily by hand, but GravNetNorm learns to build neighborhoods of mean size [3, 8, 13] (in the top tagging case, in order of node update step), significantly improving the throughput of both the graph-building and aggregation operations. While hyperparameter tuning of K may improve a KNN-based model throughput - as there appear to be optimal choices of neighborhood size - this would still be a static value, rather than dynamic from point-to-point and event-to-event. ## 4 Conclusion In this work, we have explored a long-standing obstacle in the application of graph neural networks to point clouds, which we term the Topology Problem. We present one set of solutions to this, in the form of a geometry-constrained attention. 
In particular, we alter the pre-existing GravNet architecture to construct a minimal geometric attention model, and show how it intuitively leads to a topology that captures the node connections with highest attention. We have taken graph-level top tagging as an example use case; however, geometric attention could also be applied to node-level or edge-level prediction tasks, and we will present results on those tasks in upcoming work. We show that our GravNetNorm variation is competitive in tagging accuracy with other state-of-the-art taggers, while requiring far fewer computational resources. As this is the "most-minimal" geometric attention model, future work will present techniques to combine geometric attention with other SotA architectures to further boost tagging accuracy. The codebase is available on Github [29]. \begin{table} \begin{tabular}{l l l l} \hline \hline Model & \# Parameters & Max. memory (GB) & Time (\(\mu s\) per jet) \\ \hline P-CNN & 348k & - & 110 \\ PFN & 82k & - & 120 \\ ParticleNet & 467k & 3.1 & 88 \\ Gravnet & 545k & 0.87 & 37 \\ \hline GravnetNorm & 545k & **0.23** & **22** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of memory and time requirements of top taggers. Best performances are given in bold. Performance is measured on an Nvidia 40 GB A100, with batch size 1000. Timings are given _per jet_, that is \(t_{jet}=t_{batch}/1000\). The first two model timings are quoted from [32]. ## 5 Impact Statement In this work, we propose several ideas that we hope will stimulate further discussion and research directions. These include: * **The Topology Problem** - an oft-overlooked issue that is usually solved ad hoc in graph neural network applications to point clouds. In reality, as high energy physics datasets grow in size and complexity, a careful analysis of how graph topology is constructed will be essential to scaling up production-ready models in collider and astroparticle experiments. * A **geometry-constrained attention operator**, as applied in an amended version of the GravNet architecture. This can be seen as a most-minimal construction of a GNN that propagates all relevance entirely through geometry, and may open the door to more sophisticated attention geometries. Regardless, the operation as presented here can be dropped into existing architectures to greatly improve computational efficiency. * Some suggestions for further exploration of geometry-constrained attention, including multi-headed attention and a learned number of message passing iterations. We do not expect this work to have any negative societal or ethical impacts. ## 6 Acknowledgements This work is supported by the US DoE's Office of Science, under contract # DE-AC02-05CH11231 (CompHEP Exa.TrkX) and the Exascale Computing Project (17-SC-20-SC). This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231. I am grateful to Paolo Calafiura for comments on this work, as well as Ryan Liu, Gage DeZoort and Tuan Pham for discussions.
2309.04906
Stability and Regularity for Double Wall Carbon Nanotubes Modeled as Timoshenko Beams with Thermoelastic Effects and Intermediate Damping
This research studies two systems composed of the Timoshenko beam model for double wall carbon nanotubes, coupled with the heat equation governed by Fourier's law. For the first system, the coupling is given by the speed of rotation of the vertical filament in the beam $\beta\psi_t$ from the first Timoshenko beam and the Laplacian of temperature $\delta\theta_{xx}$, where we also consider the fractional damping terms $\gamma_1(-\partial_{xx})^{\tau_1}\phi_t$, $\gamma_2(-\partial_{xx})^{\tau_2} y_t$ and $\gamma_3(-\partial_{xx})^{\tau_3} z_t$, where $(\tau_1, \tau_2, \tau_3) \in [0,1]^3$. For this first system we prove that the semigroup $S_1(t)$ associated with the system decays exponentially for all $(\tau_1 , \tau_2 , \tau_3 ) \in [0,1]^3$. The second system also has three fractional damping terms $\gamma_1(-\partial_{xx})^{\beta_1}\phi_t$, $\gamma_2(-\partial_{xx})^{\beta_2} y_t$ and $\gamma_3(-\partial_{xx})^{\beta_3} z_t$, with $(\beta_1, \beta_2, \beta_3) \in [0,1]^3$. Furthermore, the coupling between the heat equation and the Timoshenko beams of the double wall carbon nanotubes for the second system is given by the Laplacian of the rotation speed of the vertical filament in the beam $\beta\psi_{xxt}$ of the first Timoshenko beam and the Laplacian of the temperature $\delta\theta_{xx}$. For the second system, we prove the exponential decay of $S_2(t)$ for $(\beta_1, \beta_2, \beta_3) \in [0,1]^3$, show that $S_2(t)$ admits Gevrey classes $s>(\phi+1)/(2\phi)$ for $\phi=\min\{\beta_1,\beta_2,\beta_3\}, \forall (\beta_1,\beta_2,\beta_3)\in (0,1)^3$, and prove that $S_2(t)$ is analytic when the parameters $(\beta_1, \beta_2, \beta_3) \in [1/2,1]^3$. One of the motivations for this research was the work of Ramos et al. \cite{Ramos2023CNTs}, whose partial results are contained in our results for the first system with $(\tau_1, \tau_2, \tau_3) = (0, 0, 0)$.
Fredy M. Sobrado Suárez, Lesly D. Barbosa Sobrado, Gabriel L. Lacerda de Araujo, Filomena B. Rodrigues Mendes
2023-09-10T01:22:15Z
http://arxiv.org/abs/2309.04906v1
# Stability and Regularity for Double Wall Carbon Nanotubes Modeled as Timoshenko Beams with Thermoelastic Effects and Intermediate Damping ###### Abstract This research studies two systems composed of the Timoshenko beam model for double wall carbon nanotubes, coupled with the heat equation governed by Fourier's law. For the first system, the coupling is given by the speed of rotation of the vertical filament in the beam \(\beta\psi_{t}\) from the first Timoshenko beam and the Laplacian of temperature \(\delta\theta_{xx}\), where we also consider the fractional damping terms \(\gamma_{1}(-\partial_{xx})^{\tau_{1}}\phi_{t}\), \(\gamma_{2}(-\partial_{xx})^{\tau_{2}}y_{t}\) and \(\gamma_{3}(-\partial_{xx})^{\tau_{3}}z_{t}\), where \((\tau_{1},\tau_{2},\tau_{3})\in[0,1]^{3}\). For this first system we prove that the semigroup \(S_{1}(t)\) associated with the system decays exponentially for all \((\tau_{1},\tau_{2},\tau_{3})\in[0,1]^{3}\). The second system also has three fractional damping terms \(\gamma_{1}(-\partial_{xx})^{\beta_{1}}\phi_{t}\), \(\gamma_{2}(-\partial_{xx})^{\beta_{2}}y_{t}\) and \(\gamma_{3}(-\partial_{xx})^{\beta_{3}}z_{t}\), with \((\beta_{1},\beta_{2},\beta_{3})\in[0,1]^{3}\). Furthermore, the coupling between the heat equation and the Timoshenko beams of the double wall carbon nanotubes for the second system is given by the Laplacian of the rotation speed of the vertical filament in the beam \(\beta\psi_{xxt}\) of the first Timoshenko beam and the Laplacian of the temperature \(\delta\theta_{xx}\). For the second system, we prove the exponential decay of the associated semigroup \(S_{2}(t)\) for \((\beta_{1},\beta_{2},\beta_{3})\in[0,1]^{3}\) and also show that this semigroup admits Gevrey classes \(s>(\phi+1)/(2\phi)\) for \(\phi=\min\{\beta_{1},\beta_{2},\beta_{3}\},\forall(\beta_{1},\beta_{2},\beta_{3})\in(0,1)^{3}\), and we finish our investigation by proving that \(S_{2}(t)\) is analytic when the parameters \((\beta_{1},\beta_{2},\beta_{3})\in[1/2,1]^{3}\). One of the motivations for this research was the work recently published in 2023 by Ramos et al. [20], whose partial results are contained in our results for the first system with \((\tau_{1},\tau_{2},\tau_{3})=(0,0,0)\). Keywords: Asymptotic Behavior, Stability, Regularity, Analyticity, DWCNTs-Fourier System, Gevrey Class. ## 1 Introduction The discovery of the structures called carbon nanotubes (CNTs) occurred in 1987, and they were later officially disclosed to the scientific community in 1991 [11] as multi wall carbon nanotubes (MWCNTs); they were discovered experimentally during the search for a molecular structure called Fullerene. Fullerene is a closed carbon structure with a spherical shape (a geodesic dome) formed by 12 pentagons and 20 hexagons, whose formula is \(C_{60}\). Carbon nanotubes are cylindrical macromolecules composed of carbon atoms in a periodic hexagonal array with \(sp^{2}\) hybridization, similar to graphite [9]. They are like rolled-up sheets of graphene, and their walls can be as thin as a single carbon atom. They receive this name due to their tubular morphology in nanometric dimensions (\(1\,nm=10^{-9}\,m\)).
According to Shen and Brozena [24], CNTs are classified in three ways: single wall carbon nanotubes (SWCNTs), double wall carbon nanotubes (DWCNTs) and multi wall carbon nanotubes (MWCNTs), where the concentric cylinders interact with each other through the Van der Waals force. The authors also point out that DWCNTs are an emerging class of carbon nanostructures and represent the simplest way to study the physical effects of the coupling between the walls of carbon nanotubes. The discovery of this new structure at the molecular level contributed in the last decade to the advancement of nanotechnology. In [31], an analysis of the main properties of CNTs was presented; the study confirmed that CNTs have excellent mechanical, electronic and chemical properties: they are about ten times stronger and six times lighter than steel, they transmit electricity like a superconductor, and they are excellent conductors of heat. Due to their electronic and mechanical properties, superior to those of currently used materials, carbon nanotubes are candidates to be used in products and equipment that require nanoscale structures. In the future, CNTs should become the base material for nanoelectronics, nanodevices, and nanocomposites. The main problems that have to be overcome for this to happen are the difficulty of controlled experiments at the nanoscale, the high cost of molecular dynamics simulations, and the long running times of these simulations. A better understanding of continuum mechanics models, namely the Euler elastic beam model and the Timoshenko beam model used to study the mechanics of linear and nonlinear deformations, should help to make this possible. The Euler-Bernoulli beam model disregards the effects of shear and rotation, and according to [30, 31] the vibrations in carbon nanotubes occur at high frequencies, above \(1\,THz\). According to Yoon and others [34], the effects of rotational inertia and shear are significant in the study of terahertz frequencies (\(10^{12}\,Hz\)); hence Yoon [34] considers the Euler-Bernoulli model questionable when applied to CNTs. Therefore, the Timoshenko model is the most suitable. For double-walled nanotubes (DWCNTs) or concentric multi-walled nanotubes (MWCNTs), the most widely used continuum models in the literature assume that all nested tubes of MWCNTs remain coaxial during deformation and thus can be described by a single deflection model. However, this model cannot be used to describe the relative vibration between adjacent tubes of MWCNTs. In 2003, it was proposed in [31] that the nested concentric tubes be treated as individual beams, and that the deflections of all nested tubes be coupled through the van der Waals interaction force between two adjacent tubes [3, 4]. So, each of the inner and outer tubes is modeled as a beam. In the pioneering work on the carbon nanotube model by Yoon et al. [33], the authors proposed a coupled system of partial differential equations inspired by the Timoshenko beam model to model DWCNTs.
The model consists of the following equations \[\rho A_{1}\frac{\partial^{2}Y_{1}}{\partial t^{2}}-\kappa GA_{1} \bigg{(}\frac{\partial^{2}Y_{1}}{\partial x^{2}}-\frac{\partial\varphi_{1}}{ \partial x}\bigg{)}-P = 0,\] \[\rho I_{1}\frac{\partial^{2}\varphi_{1}}{\partial t^{2}}-EI_{1} \frac{\partial^{2}\varphi_{1}}{\partial x^{2}}-\kappa GA_{1}\bigg{(}\frac{ \partial Y_{1}}{\partial x}-\varphi_{1}\bigg{)} = 0,\] \[\rho A_{2}\frac{\partial^{2}Y_{2}}{\partial t^{2}}-\kappa GA_{2} \bigg{(}\frac{\partial^{2}Y_{2}}{\partial x^{2}}-\frac{\partial\varphi_{2}}{ \partial x}\bigg{)}+P = 0,\] \[\rho I_{2}\frac{\partial^{2}\varphi_{2}}{\partial t^{2}}-EI_{2} \frac{\partial^{2}\varphi_{2}}{\partial x^{2}}-\kappa GA_{2}\bigg{(}\frac{ \partial Y_{2}}{\partial x}-\varphi_{2}\bigg{)} = 0.\] Where \(Y_{i}\) and \(\varphi_{i}\) (\(i=1,2\)) represent respectively the total deflection and the inclination due to the bending of the nanotube \(i\) and the constants \(I_{i}\), \(A_{i}\) denote the moment of inertia and the cross-sectional area of the tube \(i\), respectively, and \(P\) is the Van der Waals force acting on the interaction between the two tubes per unit of axial length. Also according to [33], it can be seen that the deflections of the two tubes are coupled through the Van der Waals interaction \(P\) (see [29]) between the two tubes, and as the tubes inside and outside of a DWCNTs are originally concentric, the Van der Waals interaction is determined by the spacing between the layers. Therefore, for a small-amplitude linear vibration, the interaction pressure at any point between the two tubes linearly depends on the difference in their deflection curves at that point, that is, it depends on the term \[P=\jmath(Y_{2}-Y_{1}). \tag{1}\] In particular, the Van der Waals interaction coefficient \(\jmath\) for the interaction pressure per unit axial length can be estimated based on an effective interaction width of the tubes as found in [32; 21]. Thus, this model treats each of the nested and concentric nanotubes as individual Timoshenko beams interacting in the presence of Van der Waals forces (see Figure: (3)). Currently in the literature there are few investigations related to the study of asymptotic behavior and/or regularity for DWCNTs models, or for DWCNTs systems coupled with the heat equation governed by Fourier's law (DWCNTs-Fourier). The DWCNTs model was studied in 2015 in the thesis [17], where the author studied the asymptotic behavior of the model: \[\rho_{1}\varphi_{tt}-\kappa_{1}(\varphi_{x}-\psi)_{x}-\jmath(y- \varphi)+\alpha_{0}\varphi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{2}\] \[\rho_{2}\psi_{tt}-b_{1}\psi_{xx}-\kappa_{1}(\varphi_{x}-\psi)+ \alpha_{1}\psi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (3) \[\rho_{3}y_{tt}-\kappa_{2}(y_{x}-z)_{x}+\jmath(y-\varphi)+\alpha_ {2}y_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (4) \[\rho_{4}z_{tt}-b_{2}z_{xx}-\kappa_{2}(y_{x}-z)+\alpha_{3}z_{t}=0 \quad\text{in}\quad(0,l)\times(0,\infty), \tag{5}\] with the initial conditions \[\varphi(x,0)=\varphi_{0}(x),\quad\varphi_{t}(x,0)=\varphi_{1}(x),\quad\psi(x,0)=\psi_{0}(x),\quad\psi_{t}(x,0)=\phi_{1}(x)\quad\text{in}\quad x \in(0,l), \tag{6}\] \[y(x,0)=y_{0}(x),\quad y_{t}(x,0)=y_{1}(x),\quad z(x,0)=z_{0}(x),\quad z_{t}(x,0)=z_{1}(x)\quad\text{in}\quad x\in(0,l), \tag{7}\] and subject to boundary conditions \[\varphi(0,t)=\varphi(l,t)=\psi(0,t)=\psi(l,t)=0\quad\text{for} \quad\text{all}\quad t>0, \tag{8}\] \[y(0,t)=y(l,t)=z(0,t)=z(l,t)=0\quad\text{for}\quad\text{all} \quad t>0. 
\tag{9}\] Figure 3: 2D and 3D Representations of the Double Wall Carbon Nanotubes Model [20]. For the case that \(\alpha_{0}=0\) and \(\alpha_{i}>0\), for \(i=1,2,3\), in [17] the author demonstrated the lack of exponential decay of the semigroup \((S(t))_{t\geq 0}\) associated with the system (2)-(9) when \(\frac{\rho_{1}}{\kappa_{1}}\neq\frac{\rho_{2}}{b_{1}}\) and \(J\big{(}\frac{\rho_{3}}{b_{1}}-\frac{\rho_{4}}{\kappa_{1}}\big{)}\neq\frac{\kappa_{1}}{b_{1}}\), and also proved that if \(\chi=\frac{\kappa_{1}\rho_{2}-b_{1}\rho_{1}}{\kappa_{1}^{2}-\jmath\rho_{2}\kappa_{1}+\jmath\rho_{1}}=0\), then \((S(t))_{t\geq 0}\) is exponentially stable, while if \(\chi\neq 0\), \((S(t))_{t\geq 0}\) is not exponentially stable. Beyond that, if \(\chi\neq 0\), then \((S(t))_{t\geq 0}\) is polynomially stable with optimal rate \(o(t^{-\frac{1}{2}})\). In addition, in Chapter 4 of [17], the author validates, through numerical analysis using the finite difference method, the previously demonstrated results, in addition to presenting graphs of other cases, such as considering \(\alpha_{i}=0\) and \(\alpha_{i}>0\) for \(i=1,2,3,4\). Recently, in 2023, two new investigations emerged. One of them is the DWCNTs-Fourier system with friction dampers, see [20]; in this work the authors consider the problem of heat conduction in carbon nanotubes modeled as Timoshenko beams, inspired by the work of Yoon et al. [Comp. Part B: Eng. 35 (2004) 87-93]. The system is given by \[\rho_{1}\varphi_{tt}-\kappa_{1}(\varphi_{x}-\psi)_{x}-J(y-\varphi)+\gamma_{1}\varphi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{10}\] \[\rho_{2}\psi_{tt}-b_{1}\psi_{xx}-\kappa_{1}(\varphi_{x}-\psi)+\delta\theta_{xx}=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (11) \[\rho_{3}y_{tt}-\kappa_{2}(y_{x}-z)_{x}+J(y-\varphi)+\gamma_{2}y_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (12) \[\rho_{4}z_{tt}-b_{2}z_{xx}-\kappa_{2}(y_{x}-z)+\gamma_{3}z_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (13) \[\rho_{5}\theta_{t}-K\theta_{xx}+\beta\psi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{14}\] subject to boundary conditions (8), (9) and \[\theta(0,t)=\theta(l,t)=0\qquad\text{for all}\quad t>0. \tag{15}\] Note that the system (8)-(15) presents three friction dissipators (weak damping): \(\gamma_{1}\varphi_{t},\gamma_{2}y_{t}\) and \(\gamma_{3}z_{t}\). The authors apply the semigroup theory of linear operators to demonstrate the exponential stabilization of the semigroup \(S(t)\) associated with the system (8)-(15), and their results are independent of the relationship between the coefficients. Furthermore, they analyze the fully discrete problem using a finite difference scheme, given by a space-time discretization that combines explicit and implicit integration methods. The authors also show the construction of a numerical energy and simulations that validate the theoretical results of exponential decay and convergence rates.
By the year 2023, [22] investigated the one-dimensional equations for double wall carbon nanotubes modeled by a coupled Timoshenko elastic beam system with nonlinear, arbitrarily localized damping: \[\rho_{1}\varphi_{tt}-\kappa_{1}(\varphi_{x}-\psi)_{x}-J(y-\varphi)+\alpha_{1}(x)g_{1}(\varphi_{t})=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{16}\] \[\rho_{2}\psi_{tt}-b_{1}\psi_{xx}-\kappa_{1}(\varphi_{x}-\psi)+\alpha_{2}(x)g_{2}(\psi_{t})=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (17) \[\rho_{3}y_{tt}-\kappa_{2}(y_{x}-z)_{x}+J(y-\varphi)+\alpha_{3}(x)g_{3}(y_{t})=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (18) \[\rho_{4}z_{tt}-b_{2}z_{xx}-\kappa_{2}(y_{x}-z)+\alpha_{4}(x)g_{4}(z_{t})=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{19}\] where the localizing functions \(\alpha_{i}(x)\) are supposed to be smooth and nonnegative, while the nonlinear functions \(g_{i},i=1,\cdots,4\), are continuous and monotonically increasing. The system (16)-(19) is subject to the Dirichlet boundary conditions (8) and (9). In [22], the authors showed that damping placed on an arbitrarily small support, not quantized at the origin, leads to uniform (time asymptotic) decay rates for the energy function of the system. In the same direction as this last paper, we would like to mention the work of Shubov and Rojas-Arenaza [27], where they considered the system (16)-(19) with \(\alpha_{i}(x)=1,g_{i}(s)=s,i=1,\cdots,4\), initial conditions (6)-(7), and subject to boundary conditions of the type: \[\left\{\begin{array}{cc}\kappa_{1}(\varphi_{x}-\psi)(l,t)=-\rho_{2}\gamma_{1}\varphi_{t}(l,t)&t\geq 0,\\ b_{1}\psi_{x}(l,t)=-\rho_{2}\gamma_{2}\psi_{t}(l,t),&t\geq 0,\\ \kappa_{2}(y_{x}-z)(l,t)=\rho_{4}\gamma_{3}y_{t}(l,t),&t\geq 0,\\ b_{2}z_{x}(l,t)=-\rho_{4}\gamma_{4}z_{t}(l,t),&t\geq 0.\end{array}\right. \tag{20}\] They first proved that the energy associated with the system, with boundary conditions (20), is decreasing if \(j=0\). They then proved that the semigroup generator is an unbounded, non-self-adjoint operator with a compact resolvent. The two systems that we study in this research are models of carbon nanotubes coupled with the heat equation given by Fourier's law. The difference between these two systems is in the coupling of the DWCNTs model and the heat equation. The first system is a generalization of the model presented in [20]: we consider the three fractional dampings \(\gamma_{1}(-\partial_{xx})^{\tau_{1}}\varphi_{t}\), \(\gamma_{2}(-\partial_{xx})^{\tau_{2}}y_{t}\) and \(\gamma_{3}(-\partial_{xx})^{\tau_{3}}z_{t}\), for the parameters \(\tau_{i},i=1,2,3\), varying in the interval \([0,1]\). We note that when \((\tau_{1},\tau_{2},\tau_{3})=(0,0,0)\) the system is the one studied in [20]. The second system studied in this work is given by: \[\rho_{1}\varphi_{tt}-\kappa_{1}(\varphi_{x}-\psi)_{x}-J(y-\varphi)+\gamma_{1}(-\partial_{xx})^{\beta_{1}}\varphi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{21}\] \[\rho_{2}\psi_{tt}-b_{1}\psi_{xx}-\kappa_{1}(\varphi_{x}-\psi)+\delta\theta_{xx}=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (22) \[\rho_{3}y_{tt}-\kappa_{2}(y_{x}-z)_{x}+J(y-\varphi)+\gamma_{2}(-\partial_{xx})^{\beta_{2}}y_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (23) \[\rho_{4}z_{tt}-b_{2}z_{xx}-\kappa_{2}(y_{x}-z)+\gamma_{3}(-\partial_{xx})^{\beta_{3}}z_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty),\] (24) \[\rho_{5}\theta_{t}-K\theta_{xx}-\delta\psi_{xxt}=0\quad\text{in}\quad(0,l)\times(0,\infty).
\tag{25}\] We study the system (21)-(25) subject to the boundary conditions \[\varphi(0,t)=\varphi(l,t)=\psi(0,t)=\psi(l,t)=0\quad\text{for}\quad\text{all}\quad t>0, \tag{26}\] \[y(0,t)=y(l,t)=z(0,t)=z(l,t)=0\quad\text{for}\quad\text{all}\quad t>0,\] (27) \[\theta(0,t)=\theta(l,t)=0\quad\text{for}\quad\text{all}\quad t>0. \tag{28}\] The initial conditions are given by \[\varphi(x,0)=\varphi_{0}(x),\;\varphi_{t}(x,0)=\varphi_{1}(x),\;\psi(x,0)=\psi_{0}(x),\quad\text{for}\;x\in(0,l), \tag{29}\] \[\psi_{t}(x,0)=\psi_{1}(x),\;y(x,0)=y_{0}(x),\;y_{t}(x,0)=y_{1}(x),\quad\text{for}\;x\in(0,l),\] (30) \[z(x,0)=z_{0}(x),\;z_{t}(x,0)=z_{1}(x),\;\theta(x,0)=\theta_{0}(x),\quad\text{for}\;x\in(0,l). \tag{31}\] Note that the difference between these systems is in the coupling term in the heat equation. For the first system, the coupling is \(\beta\psi_{t}\), which involves a derivative of order zero with respect to the spatial variable. In the second system, the coupling term is given by \(\delta\psi_{txx}\), which involves a second order derivative with respect to the spatial variable \(x\). The coupling considered in the second system is the most common one; this type of coupling is known as strong coupling. In our research it helps to show the existence of Gevrey classes and also to demonstrate the analyticity of the semigroup \(S_{2}(t)\) associated with the second system. During the development of this investigation, we were able to observe that the zero-order spatial derivative in the coupling term of the first system was decisive in preventing us from obtaining the estimates \(|\lambda|^{\phi}\|v\|^{2}\leq C\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}\) and \(|\lambda|^{\phi}\|A^{\frac{1}{2}}\psi\|^{2}\leq C\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}\) for \(0<\phi\leq 1\), which made it impossible to obtain regularity results for the first system. During the last decades, various investigations have focused on the study of the asymptotic behavior and regularity of the Timoshenko beam system, the thermoviscoelastic Timoshenko system with diffusion effect, and also Timoshenko beam systems coupled with heat equations through Fourier's law, Cattaneo's law and thermoelasticity of type III. Exponential decay and regularity results for these systems are mostly obtained in the presence of dissipative terms, at least in the equations that do not refer to heat or do not have heat coupling terms. We will cite some of these works below. In 2005, Raposo et al. [19] studied the Timoshenko system with two frictional dissipations \(\varphi_{t}\) and \(\psi_{t}\), and proved that the semigroup associated with the system decays exponentially. For the same Timoshenko system, when the stress-strain constitutive law is of Kelvin-Voigt type, given by \[S=\kappa(\varphi_{x}+\psi)+\gamma_{1}(\varphi_{x}+\psi)_{t}\qquad\text{and}\qquad M=b\psi_{x}+\psi_{xt},\] Malacarne and Rivera in [14] show that \(S(t)\) is analytic if and only if the viscoelastic damping is present in both the shear stress and the bending moment. Otherwise, the corresponding semigroup is not exponentially stable, no matter the choice of the coefficients. They also showed that the solution decays polynomially to zero as \(t^{-1/2}\), no matter where the viscoelastic mechanism is effective, and that the rate is optimal whenever the initial data are taken on the domain of the infinitesimal operator.
In 2023, Suarez [26] studied the regularity of the model given in [19], substituting the two weak dampings \(\varphi_{t}\) and \(\psi_{t}\) with the fractional dampings \((-\partial_{xx})^{\tau}\varphi_{t}\) and \((-\partial_{xx})^{\sigma}\psi_{t}\), where the parameters \(\tau,\sigma\in[0,1]\), and proved the existence of Gevrey classes \(s>\frac{r+1}{2r}\), for \(r=\min\{\tau,\sigma\},\quad\forall(\tau,\sigma)\in(0,1)^{2}\), of the semigroup \(S(t)\) associated with the system, as well as the analyticity of \(S(t)\) when the two parameters \(\tau\) and \(\sigma\) vary in the interval \([1/2,1]\). In 2021, M. Elhindi and T. EL Arwadi [7] studied the Timoshenko beam model with thermal, mass diffusion and viscoelastic effects: \[\left\{\begin{array}{l}\rho_{1}\varphi_{tt}-\kappa(\varphi_{x}-\psi)_{x}-\gamma_{1}(\varphi_{x}+\psi)_{xt}=0,\\ \rho_{2}\psi_{tt}-\alpha\psi_{xx}-\gamma_{2}\psi_{xxt}+\kappa(\varphi_{x}+\psi)+\gamma_{1}(\varphi_{x}+\psi)_{t}-\xi_{1}\theta_{x}-\xi_{2}P_{x}=0,\\ c\theta_{t}+dP_{t}-\kappa\theta_{xx}-\xi_{1}\psi_{tx}=0,\\ d\theta_{t}+rP_{t}-hP_{xx}-\xi_{2}\psi_{tx}=0.\end{array}\right. \tag{32}\] Using semigroup theory, they proved that the considered problem is well posed with Dirichlet boundary conditions. Exponential decay is obtained by constructing a Lyapunov functional. Finally, a numerical study based on the \(P_{1}\) finite element approximation for the spatial discretization and the implicit Euler scheme for the temporal discretization is carried out, in which the stability of the scheme is studied and an error analysis and some numerical simulations are presented. In 2023, Mendes et al. [15] presented a study of the regularity of two thermoelastic beam systems defined by the Timoshenko beam model coupled with the heat conduction of the Green-Naghdi theory of type III; the two mathematical models are differentiated by their coupling terms, which arise as a consequence of the constitutive laws initially considered. The systems presented in this work have three fractional dampings: \((-\partial_{xx})^{\tau}\phi_{t},(-\partial_{xx})^{\sigma}\psi_{t}\) and \((-\partial_{xx})^{\xi}\theta_{t}\), where \(\phi,\psi\) and \(\theta\) are the transverse displacement, the rotation angle and the empirical temperature of the beam, respectively, and the parameters \((\tau,\sigma,\xi)\in[0,1]^{3}\). The main contribution of this article is to show that the corresponding semigroup \(S_{i}(t)=e^{\mathcal{B}_{i}t}\), with \(i=1,2\), is of Gevrey class \(s>(r+1)/(2r)\) for \(r=\min\{\tau,\sigma,\xi\}\ \forall(\tau,\sigma,\xi)\in(0,1)^{3}\). It is also shown that \(S_{1}(t)=e^{\mathcal{B}_{1}t}\) is analytic in the region \(RA_{1}:=\{(\tau,\sigma,\xi)\in[1/2,1]^{3}\}\) and \(S_{2}(t)=e^{\mathcal{B}_{2}t}\) is analytic in the region \(RA_{2}:=\{(\tau,\sigma,\xi)\in[1/2,1]^{3}:\tau=\xi\}\). Some articles published in the last decade that study the asymptotic behavior and regularity of coupled systems and/or fractional dissipations can be consulted in [1, 5, 10, 16, 23]. The paper is organized as follows. In Section 2, we study the well-posedness and exponential decay of the system (33)-(43) through semigroup theory.
In Section 3, we study the well-posedness, exponential decay, existence of Gevrey classes and analyticity of the system (80)-(84) with initial conditions (29)-(31); for all these results we again use semigroup theory, the good properties of the fractional operator \(A^{r}:=(-\partial_{xx})^{r}\) for \(r\in\mathbb{R}\), a proper decomposition of the functions \(u,s,w\) and the Interpolation Theorem 16.

## 2 System 01

In this section we study the well-posedness and the exponential decay of the first system; for both results semigroup theory is used. The first system is given by:

\[\rho_{1}\varphi_{tt}-\kappa_{1}(\varphi_{x}-\psi)_{x}-j(y-\varphi)+\gamma_{1}(-\partial_{xx})^{\tau_{1}}\varphi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{33}\]
\[\rho_{2}\psi_{tt}-b_{1}\psi_{xx}-\kappa_{1}(\varphi_{x}-\psi)+\delta\theta_{xx}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{34}\]
\[\rho_{3}y_{tt}-\kappa_{2}(y_{x}-z)_{x}+j(y-\varphi)+\gamma_{2}(-\partial_{xx})^{\tau_{2}}y_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{35}\]
\[\rho_{4}z_{tt}-b_{2}z_{xx}-\kappa_{2}(y_{x}-z)+\gamma_{3}(-\partial_{xx})^{\tau_{3}}z_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{36}\]
\[\rho_{5}\theta_{t}-K\theta_{xx}+\beta\psi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{37}\]

subject to boundary conditions

\[\varphi(0,t)=\varphi(l,t)=\psi(0,t)=\psi(l,t)=0\quad\text{for all}\quad t>0, \tag{38}\]
\[y(0,t)=y(l,t)=z(0,t)=z(l,t)=0\quad\text{for all}\quad t>0, \tag{39}\]
\[\theta(0,t)=\theta(l,t)=0\quad\text{for all}\quad t>0. \tag{40}\]

The initial conditions are given by

\[\varphi(x,0)=\varphi_{0}(x),\;\varphi_{t}(x,0)=\varphi_{1}(x),\;\psi(x,0)=\psi_{0}(x),\quad\text{for }x\in(0,l), \tag{41}\]
\[\psi_{t}(x,0)=\psi_{1}(x),\;y(x,0)=y_{0}(x),\;y_{t}(x,0)=y_{1}(x),\quad\text{for }x\in(0,l), \tag{42}\]
\[z(x,0)=z_{0}(x),\;z_{t}(x,0)=z_{1}(x),\;\theta(x,0)=\theta_{0}(x),\quad\text{for }x\in(0,l). \tag{43}\]

Let us define the operator \(A\colon\mathfrak{D}(A)=H^{2}(0,l)\cap H^{1}_{0}(0,l)\to L^{2}(0,l)\) by \(A:=-\partial_{xx}\). Using this operator \(A\), the system (33)-(43) can be written in the following form

\[\rho_{1}\varphi_{tt}+\kappa_{1}A\varphi+\kappa_{1}\psi_{x}-j(y-\varphi)+\gamma_{1}A^{\tau_{1}}\varphi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{44}\]
\[\rho_{2}\psi_{tt}+b_{1}A\psi-\kappa_{1}(\varphi_{x}-\psi)-\delta A\theta=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{45}\]
\[\rho_{3}y_{tt}+\kappa_{2}Ay+\kappa_{2}z_{x}+j(y-\varphi)+\gamma_{2}A^{\tau_{2}}y_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{46}\]
\[\rho_{4}z_{tt}+b_{2}Az-\kappa_{2}(y_{x}-z)+\gamma_{3}A^{\tau_{3}}z_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{47}\]
\[\rho_{5}\theta_{t}+KA\theta+\beta\psi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{48}\]

with the initial conditions (41)-(43).

**Remark 1**: _It is known that the operator \(A:=-\partial_{xx}\) is strictly positive and self-adjoint, has a compact inverse and hence a compact resolvent. Moreover, the operator \(A^{\sigma}\) is self-adjoint and positive for all \(\sigma\in\mathbb{R}\), bounded for \(\sigma\leq 0\), and the embedding_

\[\mathfrak{D}(A^{\sigma_{1}})\hookrightarrow\mathfrak{D}(A^{\sigma_{2}})\]

_is continuous for \(\sigma_{1}>\sigma_{2}\)._
_Here, the norm in \(\mathfrak{D}(A^{\sigma})\) is given by \(\|u\|_{\mathfrak{D}(A^{\sigma})}:=\|A^{\sigma}u\|\), \(u\in\mathfrak{D}(A^{\sigma})\), where \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\) denote the inner product and the norm of the complex Hilbert space \(\mathfrak{D}(A^{0})=L^{2}(0,l)\). Some of the spaces most used in this work are \(\mathfrak{D}(A^{\frac{1}{2}})=H^{1}_{0}(0,l)\) and \(\mathfrak{D}(A^{-\frac{1}{2}})=H^{-1}(0,l)\)._

### Well-posedness of the System 01

Next we rewrite our system (41)-(48) in abstract Cauchy form in order to apply semigroup theory. Taking \(\varphi_{t}=u\), \(\psi_{t}=v\), \(y_{t}=s\) and \(z_{t}=w\), the initial boundary value problem (38)-(48) can be reduced to the following abstract initial value problem for a first-order evolution equation

\[\frac{d}{dt}U(t)=\mathbb{B}_{i}U(t),\quad U(0)=U_{0}, \tag{49}\]

where \(U(t)=(\varphi,u,\psi,v,y,s,z,w,\theta)^{T}\), \(U_{0}=(\varphi_{0},\varphi_{1},\psi_{0},\psi_{1},y_{0},y_{1},z_{0},z_{1},\theta_{0})^{T}\), \(i=1,2\), and the operator \(\mathbb{B}_{1}\colon\mathfrak{D}(\mathbb{B}_{1})\subset\mathbb{H}_{1}\rightarrow\mathbb{H}_{1}\) is given by

\[\mathbb{B}_{1}U:=\left(\begin{array}{c}u\\ -\dfrac{\kappa_{1}}{\rho_{1}}A\varphi-\dfrac{\kappa_{1}}{\rho_{1}}\psi_{x}+\dfrac{j}{\rho_{1}}(y-\varphi)-\dfrac{\gamma_{1}}{\rho_{1}}A^{\tau_{1}}u\\ v\\ -\dfrac{b_{1}}{\rho_{2}}A\psi+\dfrac{\kappa_{1}}{\rho_{2}}(\varphi_{x}-\psi)+\dfrac{\delta}{\rho_{2}}A\theta\\ s\\ -\dfrac{\kappa_{2}}{\rho_{3}}Ay-\dfrac{\kappa_{2}}{\rho_{3}}z_{x}-\dfrac{j}{\rho_{3}}(y-\varphi)-\dfrac{\gamma_{2}}{\rho_{3}}A^{\tau_{2}}s\\ w\\ -\dfrac{b_{2}}{\rho_{4}}Az+\dfrac{\kappa_{2}}{\rho_{4}}(y_{x}-z)-\dfrac{\gamma_{3}}{\rho_{4}}A^{\tau_{3}}w\\ -\dfrac{K}{\rho_{5}}A\theta-\dfrac{\beta}{\rho_{5}}v\end{array}\right). \tag{50}\]

Taking the duality product of equation (44) with \(\varphi_{t}\), (45) with \(\psi_{t}\), (46) with \(y_{t}\), (47) with \(z_{t}\) and (48) with \(\frac{\delta}{\beta}A\theta\), taking advantage of the self-adjointness of the powers of the operator \(A\), and using the boundary condition \(z(0,t)=z(l,t)=0\), we have \(\|z_{x}\|^{2}=\langle z_{x},z_{x}\rangle=\int_{0}^{l}z_{x}\overline{z_{x}}dx=\int Az\overline{z}dx+z_{x}\overline{z}|_{0}^{l}=\langle Az,z\rangle=\|A^{\frac{1}{2}}z\|^{2}\), similarly we have \(\|\psi_{x}\|^{2}=\|A^{\frac{1}{2}}\psi\|^{2}\) and
\(\|\theta_{x}\|^{2}=\|A^{\frac{1}{2}}\theta\|^{2}\). For every solution of the system (38)-(48) the total energy \(\mathfrak{E}_{1}\colon\mathbb{R}^{+}\to\mathbb{R}^{+}\) at time \(t\) is given by

\[\mathfrak{E}_{1}(t)=\frac{1}{2}\bigg[\rho_{1}\|\varphi_{t}\|^{2}+\rho_{2}\|\psi_{t}\|^{2}+\rho_{3}\|y_{t}\|^{2}+\rho_{4}\|z_{t}\|^{2}+b_{1}\|A^{\frac{1}{2}}\psi\|^{2}+b_{2}\|A^{\frac{1}{2}}z\|^{2}\\ +\kappa_{1}\|\varphi_{x}-\psi\|^{2}+\kappa_{2}\|y_{x}-z\|^{2}+j\|y-\varphi\|^{2}+\frac{\rho_{5}\delta}{\beta}\|A^{\frac{1}{2}}\theta\|^{2}\bigg], \tag{51}\]

and satisfies

\[\frac{d}{dt}\mathfrak{E}_{1}(t)=-\gamma_{1}\|A^{\frac{\tau_{1}}{2}}\varphi_{t}\|^{2}-\gamma_{2}\|A^{\frac{\tau_{2}}{2}}y_{t}\|^{2}-\gamma_{3}\|A^{\frac{\tau_{3}}{2}}z_{t}\|^{2}-\frac{\delta K}{\beta}\|A\theta\|^{2}. \tag{52}\]

This operator will be defined in a suitable subspace of the phase space

\[\mathbb{H}_{1}:=[\mathfrak{D}(A^{\frac{1}{2}})\times\mathfrak{D}(A^{0})]^{4}\times\mathfrak{D}(A^{\frac{1}{2}}),\]

which is a Hilbert space with the inner product

\[\langle U_{1},U_{2}\rangle_{\mathbb{H}_{1}} := \rho_{1}\langle u_{1},u_{2}\rangle+\rho_{2}\langle v_{1},v_{2}\rangle+\rho_{3}\langle s_{1},s_{2}\rangle+\rho_{4}\langle w_{1},w_{2}\rangle+b_{1}\langle\psi_{1,x},\psi_{2,x}\rangle\\ +b_{2}\langle z_{1,x},z_{2,x}\rangle+\kappa_{1}\langle\varphi_{1,x}-\psi_{1},\varphi_{2,x}-\psi_{2}\rangle+\kappa_{2}\langle y_{1,x}-z_{1},y_{2,x}-z_{2}\rangle\\ +\jmath\langle y_{1}-\varphi_{1},y_{2}-\varphi_{2}\rangle+\frac{\rho_{5}\delta}{\beta}\langle\theta_{1,x},\theta_{2,x}\rangle, \tag{53}\]

for \(U_{i}=(\varphi_{i},u_{i},\psi_{i},v_{i},y_{i},s_{i},z_{i},w_{i},\theta_{i})^{T}\in\mathbb{H}_{1}\), \(i=1,2\), and induced norm

\[\|U\|_{\mathbb{H}_{1}}^{2}:=\rho_{1}\|u\|^{2}+\rho_{2}\|v\|^{2}+\rho_{3}\|s\|^{2}+\rho_{4}\|w\|^{2}+b_{1}\|A^{\frac{1}{2}}\psi\|^{2}+b_{2}\|A^{\frac{1}{2}}z\|^{2}\\ +\kappa_{1}\|\varphi_{x}-\psi\|^{2}+\kappa_{2}\|y_{x}-z\|^{2}+j\|y-\varphi\|^{2}+\frac{\rho_{5}\delta}{\beta}\|A^{\frac{1}{2}}\theta\|^{2}. \tag{54}\]

In these conditions, we define the domain of \(\mathbb{B}_{1}\) as

\[\mathfrak{D}(\mathbb{B}_{1}):=\Big\{U\in\mathbb{H}_{1}\colon(u,v,s,w)\in[\mathfrak{D}(A^{\frac{1}{2}})]^{4},\theta\in\mathfrak{D}(A^{\frac{1}{2}})\cap H^{3}(0,l)\quad\text{and}\\ (\varphi,\psi,y,z)\in(\mathfrak{D}(A)\cap\mathfrak{D}(A^{\tau_{1}}))\times\mathfrak{D}(A)\times(\mathfrak{D}(A)\cap\mathfrak{D}(A^{\tau_{2}}))\times(\mathfrak{D}(A)\cap\mathfrak{D}(A^{\tau_{3}}))\Big\}. \tag{55}\]

It is easy to verify that

\[\operatorname{Re}\langle\mathbb{B}_{1}U,U\rangle_{\mathbb{H}_{1}}=-\gamma_{1}\|A^{\frac{\tau_{1}}{2}}u\|^{2}-\gamma_{2}\|A^{\frac{\tau_{2}}{2}}s\|^{2}-\gamma_{3}\|A^{\frac{\tau_{3}}{2}}w\|^{2}-\frac{\delta K}{\beta}\|A\theta\|^{2}\leq 0. \tag{56}\]

To show that the operator \(\mathbb{B}_{1}\) is the generator of a \(C_{0}\)-semigroup, we invoke a result from Liu-Zheng [13].

**Theorem 2** (see Theorem 1.2.4 in [13]): _Let \(\mathbb{B}\) be a linear operator with domain \(\mathfrak{D}(\mathbb{B})\) dense in a Hilbert space \(\mathbb{H}\). If \(\mathbb{B}\) is dissipative and \(0\in\rho(\mathbb{B})\), the resolvent set of \(\mathbb{B}\), then \(\mathbb{B}\) is the generator of a \(C_{0}\)-semigroup of contractions on \(\mathbb{H}\)._

**Proof**: See Lemma 2.1 [20].
\(\Box\)

As a consequence of the previous Theorem 2, we obtain

**Theorem 3**: _Given \(U_{0}\in\mathbb{H}\) there exists a unique weak solution \(U\) to the problem (49) satisfying_

\[U\in C([0,+\infty),\mathbb{H}).\]

_Furthermore, if \(U_{0}\in\mathfrak{D}(\mathbb{B}^{k}),\;k\in\mathbb{N}\), then the solution \(U\) of (49) satisfies_

\[U\in\bigcap_{j=0}^{k}C^{k-j}([0,+\infty),\mathfrak{D}(\mathbb{B}^{j})).\]

**Theorem 4** (Hille-Yosida): _A linear (unbounded) operator \(\mathbb{B}\) is the infinitesimal generator of a \(C_{0}\)-semigroup of contractions \(S(t)\), \(t\geq 0\), if and only if \((i)\) \(\mathbb{B}\) is closed and \(\overline{\mathfrak{D}(\mathbb{B})}=\mathbb{H}\), \((ii)\) the resolvent set \(\rho(\mathbb{B})\) of \(\mathbb{B}\) contains \(\mathbb{R}^{+}\) and for every \(\lambda>0\),_

\[\|(\lambda I-\mathbb{B})^{-1}\|_{\mathcal{L}(\mathbb{H})}\leq\frac{1}{\lambda}.\]

**Proof**: See [18]. \(\Box\)

### Exponential Decay of System 01, for \((\tau_{1},\tau_{2},\tau_{3})\in[0,1]^{3}\)

In this section, we study the asymptotic behavior of the semigroup of the system (41)-(48). We will use the following spectral characterization of exponential stability of semigroups due to Gearhart [8] (Theorem 1.3.2 in the book of Liu-Zheng [13]).

**Theorem 5** (see [13]): _Let \(S(t)=e^{t\mathbb{B}}\) be a \(C_{0}\)-semigroup of contractions on a Hilbert space \(\mathbb{H}\). Then \(S(t)\) is exponentially stable if and only if_

\[\rho(\mathbb{B})\supseteq\{i\lambda;\lambda\in\mathbb{R}\}\equiv i\mathbb{R} \tag{56}\]

_and_

\[\limsup_{|\lambda|\to\infty}\|(i\lambda I-\mathbb{B})^{-1}\|_{\mathcal{L}(\mathbb{H})}<\infty \tag{57}\]

_hold._

**Remark 6**: _Note that, to show condition (57) for System 01, that is, for (41)-(48), it is enough to show the following: let \(\delta>0\); there exists a constant \(C_{\delta}>0\) such that the solutions of the system (41)-(48) for \(|\lambda|>\delta\) satisfy the inequality_

\[\|U\|_{\mathbb{H}_{1}}\leq C_{\delta}\|F\|_{\mathbb{H}_{1}}\qquad\text{for}\quad 0\leq\tau_{1},\tau_{2},\tau_{3}\leq 1. \tag{58}\]

To use Theorem 5, we will try to obtain some estimates for

\[U=(\varphi,u,\psi,v,y,s,z,w,\theta)^{T}\in\mathfrak{D}(\mathbb{B}_{1})\mbox{ and }F=(f^{1},f^{2},f^{3},f^{4},f^{5},f^{6},f^{7},f^{8},f^{9})^{T}\in\mathbb{H}_{1},\]

such that \((i\lambda I-\mathbb{B}_{1})U=F\), where \(\lambda\in\mathbb{R}\).
This system, written in components, reads

\[i\lambda\varphi-u = f^{1}\quad\text{in}\quad\mathfrak{D}(A^{\frac{1}{2}}) \tag{59}\]
\[i\lambda u+\frac{\kappa_{1}}{\rho_{1}}A\varphi+\frac{\kappa_{1}}{\rho_{1}}\psi_{x}-\frac{j}{\rho_{1}}(y-\varphi)+\frac{\gamma_{1}}{\rho_{1}}A^{\tau_{1}}u = f^{2}\quad\text{in}\quad\mathfrak{D}(A^{0}) \tag{60}\]
\[i\lambda\psi-v = f^{3}\quad\text{in}\quad\mathfrak{D}(A^{\frac{1}{2}}) \tag{61}\]
\[i\lambda v+\frac{b_{1}}{\rho_{2}}A\psi-\frac{\kappa_{1}}{\rho_{2}}(\varphi_{x}-\psi)-\frac{\delta}{\rho_{2}}A\theta = f^{4}\quad\text{in}\quad\mathfrak{D}(A^{0}) \tag{62}\]
\[i\lambda y-s = f^{5}\quad\text{in}\quad\mathfrak{D}(A^{\frac{1}{2}}) \tag{63}\]
\[i\lambda s+\frac{\kappa_{2}}{\rho_{3}}Ay+\frac{\kappa_{2}}{\rho_{3}}z_{x}+\frac{j}{\rho_{3}}(y-\varphi)+\frac{\gamma_{2}}{\rho_{3}}A^{\tau_{2}}s = f^{6}\quad\text{in}\quad\mathfrak{D}(A^{0}) \tag{64}\]
\[i\lambda z-w = f^{7}\quad\text{in}\quad\mathfrak{D}(A^{\frac{1}{2}}) \tag{65}\]
\[i\lambda w+\frac{b_{2}}{\rho_{4}}Az-\frac{\kappa_{2}}{\rho_{4}}(y_{x}-z)+\frac{\gamma_{3}}{\rho_{4}}A^{\tau_{3}}w = f^{8}\quad\text{in}\quad\mathfrak{D}(A^{0}) \tag{66}\]
\[i\lambda\theta+\frac{K}{\rho_{5}}A\theta+\frac{\beta}{\rho_{5}}v = f^{9}\quad\text{in}\quad\mathfrak{D}(A^{\frac{1}{2}}). \tag{67}\]

From the dissipativity relation (56), we have the first estimate

\[|\gamma_{1}\|A^{\frac{\tau_{1}}{2}}u\|^{2}+\gamma_{2}\|A^{\frac{\tau_{2}}{2}}s\|^{2}+\gamma_{3}\|A^{\frac{\tau_{3}}{2}}w\|^{2}+\frac{\delta K}{\beta}\|A\theta\|^{2}|\\ =|-\mathrm{Re}\langle\mathbb{B}_{1}U,U\rangle|=|\mathrm{Re}\{\langle i\lambda U-F,U\rangle\}|\\ \leq|\langle F,U\rangle|\leq\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}.\]

Therefore

\[\gamma_{1}\|A^{\frac{\tau_{1}}{2}}u\|^{2}+\gamma_{2}\|A^{\frac{\tau_{2}}{2}}s\|^{2}+\gamma_{3}\|A^{\frac{\tau_{3}}{2}}w\|^{2}+\frac{\delta K}{\beta}\|A\theta\|^{2}\leq\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}. \tag{68}\]

Next, we show some lemmas that will lead us to the proof of the main theorem of this section.
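Before stating them, we record the elementary estimate that is meant each time the Cauchy-Schwarz and Young inequalities are applied below (a standard fact, included here only for the reader's convenience): for \(a,b\in L^{2}(0,l)\) and every \(\varepsilon>0\),

\[|\langle a,b\rangle|\leq\|a\|\,\|b\|\leq\frac{1}{4\varepsilon}\|a\|^{2}+\varepsilon\|b\|^{2}.\]

The terms carrying the factor \(\varepsilon\) are then either absorbed into the left-hand side of the inequality under consideration or bounded by \(\varepsilon\|U\|_{\mathbb{H}_{1}}^{2}\), while the remaining terms are controlled through (68) and the norms \(\|F\|_{\mathbb{H}_{1}}\) and \(\|U\|_{\mathbb{H}_{1}}\).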
**Lemma 7**: _For every \(\varepsilon>0\) there exists \(C_{\varepsilon}>0\), independent of \(\lambda\), such that the solutions of the system (41)-(48) for \((\tau_{1},\tau_{2},\tau_{3})\in[0,1]^{3}\) satisfy_

\[\|v\|^{2}\leq C_{\varepsilon}\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}+\varepsilon\|U\|_{\mathbb{H}_{1}}^{2}. \tag{69}\]

**Proof**: Applying the duality product between (67) and \(v\) and using (62), we have

\[\frac{\beta}{\rho_{5}}\|v\|^{2}=\langle\theta,i\lambda v\rangle-\frac{K}{\rho_{5}}\langle A\theta,v\rangle+\langle f^{9},v\rangle\]
\[=\langle\theta,-\frac{b_{1}}{\rho_{2}}A\psi+\frac{\kappa_{1}}{\rho_{2}}(\varphi_{x}-\psi)+\frac{\delta}{\rho_{2}}A\theta+f^{4}\rangle-\frac{K}{\rho_{5}}\langle A\theta,v\rangle+\langle f^{9},v\rangle\]
\[=-\frac{b_{1}}{\rho_{2}}\langle A^{\frac{1}{2}}\theta,A^{\frac{1}{2}}\psi\rangle+\frac{\kappa_{1}}{\rho_{2}}\langle\theta,(\varphi_{x}-\psi)\rangle+\frac{\delta}{\rho_{2}}\|A^{\frac{1}{2}}\theta\|^{2}+\langle\theta,f^{4}\rangle-\frac{K}{\rho_{5}}\langle A\theta,v\rangle+\langle f^{9},v\rangle.\]

Applying the Cauchy-Schwarz and Young inequalities and the continuous immersions \(\mathfrak{D}(A)\hookrightarrow\mathfrak{D}(A^{\frac{1}{2}})\hookrightarrow\mathfrak{D}(A^{0})\), for every \(\varepsilon>0\) there exists \(C_{\varepsilon}>0\) such that

\[\|v\|^{2}\leq C_{\varepsilon}\|A\theta\|^{2}+\varepsilon\{\|A^{\frac{1}{2}}\psi\|^{2}+\|\varphi_{x}-\psi\|^{2}+\|v\|^{2}\}+\|\theta\|\|f^{4}\|+\|f^{9}\|\|v\|.\]

The terms multiplied by \(\varepsilon\) are bounded by \(\varepsilon\|U\|_{\mathbb{H}_{1}}^{2}\), and the remaining terms are bounded, using the estimate (68), by \(C_{\varepsilon}\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}\); this finishes the proof of the lemma. \(\Box\)

**Lemma 8**: _Let \(\delta>0\). There exists \(C_{\delta}>0\) such that the solutions of the system (41)-(48) for \(|\lambda|>\delta\) and \((\tau_{1},\tau_{2},\tau_{3})\in[0,1]^{3}\) satisfy_

\[(i)\quad|\lambda|\|y-\varphi\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}, \tag{70}\]
\[(ii)\quad\kappa_{1}\|\varphi_{x}-\psi\|^{2}+b_{1}\|A^{\frac{1}{2}}\psi\|^{2}\leq\varepsilon\|U\|_{\mathbb{H}_{1}}^{2}+C_{\delta}\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}, \tag{71}\]
\[(iii)\quad\kappa_{2}\|y_{x}-z\|^{2}+b_{2}\|A^{\frac{1}{2}}z\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}. \tag{72}\]

**Proof**: \((i)\) Subtracting equation (59) from equation (63), we have

\[i\lambda(y-\varphi)-(s-u)=f^{5}-f^{1}.\]

Taking the duality product between this last equation and \(y-\varphi\), we arrive at

\[i\lambda\|y-\varphi\|^{2}=\langle s,y-\varphi\rangle-\langle u,y-\varphi\rangle+\langle f^{5},y-\varphi\rangle-\langle f^{1},y-\varphi\rangle. \tag{73}\]

Applying the Cauchy-Schwarz and Young inequalities and the norms \(\|F\|_{\mathbb{H}_{1}}\) and \(\|U\|_{\mathbb{H}_{1}}\), for every \(\varepsilon>0\) there exists \(C_{\varepsilon}>0\) such that

\[|\lambda|\|y-\varphi\|^{2}\leq C_{\varepsilon}\{\|s\|^{2}+\|u\|^{2}\}+\varepsilon\|y-\varphi\|^{2}+C_{\delta}\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}. \tag{74}\]

Since \(|\lambda|>\delta>1\), the term \(\varepsilon\|y-\varphi\|^{2}\) can be absorbed into the left-hand side, and \(\|s\|^{2}\) and \(\|u\|^{2}\) are bounded by \(C\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}\) thanks to the estimate (68) and the embeddings \(\mathfrak{D}(A^{\frac{\tau_{1}}{2}})\hookrightarrow\mathfrak{D}(A^{0})\) and \(\mathfrak{D}(A^{\frac{\tau_{2}}{2}})\hookrightarrow\mathfrak{D}(A^{0})\); this finishes the proof of item \((i)\).
\((ii)\) Performing the duality product of (60) for \(\rho_{1}\varphi\) and using (59), we obtain \[\kappa_{1}\langle(\varphi_{x}-\psi),\varphi_{x}\rangle=\rho_{1}\langle u,i\lambda\varphi\rangle+\jmath\langle(y-\varphi),u\rangle-\gamma_{1}\langle A^{ \tau_{1}}u,\varphi\rangle+\rho_{1}\langle f^{2},\varphi\rangle\\ =\rho_{1}\|u\|^{2}+\rho_{1}\langle u,f^{1}\rangle+\jmath\langle(y -\varphi),u\rangle-i\lambda\gamma_{1}\|A^{\frac{\tau_{1}}{2}}\varphi\|^{2}\\ +\gamma_{1}\langle A^{\frac{\tau_{1}}{2}}f^{1},A^{\frac{\tau_{1}} {2}}\varphi\rangle+\rho_{1}\langle f^{2},\varphi\rangle,\] now, performing the duality product of (62) for \(\rho_{2}\psi\) and using (61), we obtain \[\kappa_{1}\langle(\varphi_{x}-\psi),\psi\rangle=-\rho_{2}\|v\|^{2}-\rho_{2} \langle v,f^{3}\rangle+b_{1}\|A^{\frac{1}{2}}\psi\|^{2}-\delta\langle A\theta, \psi\rangle-\rho_{2}\langle f^{4},\psi\rangle,\] subtracting the last two equations, we have \[\kappa_{1}\|\varphi_{x}-\psi\|^{2}+b_{1}\|A^{\frac{1}{2}}\psi\|^{ 2}=\rho_{2}\|v\|^{2}+\rho_{1}\|u\|^{2}+\rho_{1}\langle u,f^{1}\rangle+\jmath \langle(y-\varphi),u\rangle-i\lambda\gamma_{1}\|A^{\frac{\tau_{1}}{2}}\varphi\| ^{2}\\ +\gamma_{1}\langle A^{\frac{\tau_{1}}{2}}f^{1},A^{\frac{\tau_{1}} {2}}\varphi\rangle+\rho_{1}\langle f^{2},\varphi\rangle+\rho_{2}\langle v,f^{3 }\rangle+\delta\langle A\theta,\psi\rangle+\rho_{2}\langle f^{4},\psi\rangle. \tag{75}\] Taking real part in (75), applying Cauchy-Schwarz and Young inequalities, estimates (68), Lemma 7 and (70) (item \((i)\) in this lemma), we finish proof of item \((ii)\). \((iii)\) Performing the duality product of (64) for \(\rho_{3}y\) and using (63), we obtain \[\kappa_{2}\langle(y_{x}-z),y_{x}\rangle=\rho_{3}\langle s,i\lambda y \rangle-\jmath\langle(y-\varphi),y\rangle-\gamma_{2}\langle A^{\tau_{2}}s,y \rangle+\rho_{3}\langle f^{6},y\rangle\\ =\rho_{3}\|s\|^{2}+\rho_{3}\langle s,f^{5}\rangle-\jmath\langle(y -\varphi),y\rangle-i\lambda\gamma_{2}\|A^{\frac{\tau_{1}}{2}}y\|^{2}\\ +\gamma_{2}\langle A^{\frac{\tau_{2}}{2}}f^{1},A^{\frac{\tau_{2}} {2}}y\rangle+\rho_{3}\langle f^{6},y\rangle\\ =\rho_{3}\|s\|^{2}+\rho_{3}\langle s,f^{5}\rangle-\frac{ij}{ \lambda}\langle(y-\varphi),f^{5}\rangle-\frac{ij}{\lambda}\langle(y-\varphi),s\rangle \\ -i\lambda\gamma_{2}\|A^{\frac{\tau_{2}}{2}}y\|^{2}+\gamma_{2} \langle A^{\frac{\tau_{2}}{2}}f^{1},A^{\frac{\tau_{2}}{2}}y\rangle+\rho_{3} \langle f^{6},y\rangle\] now, performing the duality product of (66) for \(\rho_{4}z\) and using (65), we obtain \[\kappa_{2}\langle(y_{x}-z),z\rangle=-\rho_{4}\|w\|^{2}-\rho_{4} \langle w,f^{7}\rangle+b_{2}\|A^{\frac{1}{2}}z\|^{2}+i\lambda\gamma_{3}\|A^{ \frac{\tau_{3}}{2}}z\|^{2}\\ -\gamma_{3}\langle A^{\frac{\tau_{3}}{2}}f^{7},A^{\frac{\tau_{3}} {2}}z\rangle-\rho_{4}\langle f^{8},z\rangle,\] subtracting the last two equations, we have \[\kappa_{2}\|y_{x}-z\|^{2}+b_{2}\|A^{\frac{1}{2}}z\|^{2}=\rho_{3} \|s\|^{2}+\rho_{4}\|w\|^{2}+\rho_{3}\langle s,f^{5}\rangle-\frac{ij}{\lambda} \langle(y-\varphi),f^{5}\rangle-\frac{ij}{\lambda}\langle(y-\varphi),s\rangle \\ -i\lambda\gamma_{2}\|A^{\frac{\tau_{2}}{2}}y\|^{2}+\gamma_{2} \langle A^{\frac{\tau_{2}}{2}}f^{1},A^{\frac{\tau_{2}}{2}}y\rangle+\rho_{3} \langle f^{6},y\rangle+\rho_{4}\langle w,f^{7}\rangle\\ -i\lambda\gamma_{3}\|A^{\frac{\tau_{3}}{2}}z\|^{2}+\gamma_{3} \langle A^{\frac{\tau_{3}}{2}}f^{7},A^{\frac{\tau_{3}}{2}}z\rangle+\rho_{4} \langle f^{8},z\rangle. 
\tag{76}\]

Taking the real part in (76) and applying the Cauchy-Schwarz and Young inequalities, the estimate (68), the norms \(\|F\|_{\mathbb{H}_{1}}\) and \(\|U\|_{\mathbb{H}_{1}}\) and item \((i)\) of this lemma, we finish the proof of item \((iii)\). \(\Box\)

**Theorem 9**: _The semigroup \(S_{1}(t)=e^{t\mathbb{B}_{1}}\) is exponentially stable as long as the parameters \((\tau_{1},\tau_{2},\tau_{3})\in[0,1]^{3}\)._

**Proof**: Let us first check condition (58), which implies (57). Using Lemmas 7 and 8 and then applying the estimate (68), we arrive at

\[\|U\|_{\mathbb{H}_{1}}^{2}\leq C_{\delta}\|F\|_{\mathbb{H}_{1}}\|U\|_{\mathbb{H}_{1}}\quad\text{for}\quad 0\leq\tau_{1},\tau_{2},\tau_{3}\leq 1. \tag{77}\]

Therefore condition (57) of Theorem 5 is verified for \((\tau_{1},\tau_{2},\tau_{3})\in[0,1]^{3}\). Next, we show condition (56).

**Lemma 10**: _Let \(\varrho(\mathbb{B}_{1})\) be the resolvent set of the operator \(\mathbb{B}_{1}\). Then_

\[i\mathbb{R}\subset\varrho(\mathbb{B}_{1}). \tag{78}\]

**Proof**: Since \(\mathbb{B}_{1}\) is the infinitesimal generator of a \(C_{0}\)-semigroup of contractions \(S_{1}(t)\), \(t\geq 0\), from Theorem 4, \(\mathbb{B}_{1}\) is a closed operator; moreover, since \(\mathfrak{D}(\mathbb{B}_{1})\) has compact embedding into the energy space \(\mathbb{H}_{1}\), the spectrum \(\sigma(\mathbb{B}_{1})\) contains only eigenvalues. Let us prove that \(i\mathbb{R}\subset\rho(\mathbb{B}_{1})\) by a contradiction argument, so we suppose that \(i\mathbb{R}\not\subset\rho(\mathbb{B}_{1})\). As \(0\in\rho(\mathbb{B}_{1})\) and \(\rho(\mathbb{B}_{1})\) is open, we consider the largest positive number \(\lambda_{0}\) such that \((-i\lambda_{0},i\lambda_{0})\subset\rho(\mathbb{B}_{1})\); then \(i\lambda_{0}\) or \(-i\lambda_{0}\) is an element of the spectrum \(\sigma(\mathbb{B}_{1})\). We suppose \(i\lambda_{0}\in\sigma(\mathbb{B}_{1})\) (if \(-i\lambda_{0}\in\sigma(\mathbb{B}_{1})\) the procedure is similar). Then, for \(0<\delta<\lambda_{0}\) there exist a sequence of real numbers \((\lambda_{n})\), with \(\delta\leq\lambda_{n}<\lambda_{0}\), \(\lambda_{n}\to\lambda_{0}\), and a sequence of vectors \(U_{n}=(\varphi_{n},u_{n},\psi_{n},v_{n},y_{n},s_{n},z_{n},w_{n},\theta_{n})^{T}\in\mathfrak{D}(\mathbb{B}_{1})\) with unit norm, such that

\[\|(i\lambda_{n}I-\mathbb{B}_{1})U_{n}\|_{\mathbb{H}_{1}}=\|F_{n}\|_{\mathbb{H}_{1}}\to 0,\]

as \(n\to\infty\). From the estimate (77), we have

\[\|U_{n}\|_{\mathbb{H}_{1}}^{2}=\rho_{1}\|u_{n}\|^{2}+\rho_{2}\|v_{n}\|^{2}+\rho_{3}\|s_{n}\|^{2}+\rho_{4}\|w_{n}\|^{2}+b_{1}\|A^{\frac{1}{2}}\psi_{n}\|^{2}+b_{2}\|A^{\frac{1}{2}}z_{n}\|^{2}\\ +\kappa_{1}\|\varphi_{n,x}-\psi_{n}\|^{2}+\kappa_{2}\|y_{n,x}-z_{n}\|^{2}+j\|y_{n}-\varphi_{n}\|^{2}+\frac{\rho_{5}\delta}{\beta}\|A^{\frac{1}{2}}\theta_{n}\|^{2}\\ \leq C_{\delta}\|F_{n}\|_{\mathbb{H}_{1}}\|U_{n}\|_{\mathbb{H}_{1}}=C_{\delta}\|F_{n}\|_{\mathbb{H}_{1}}\to 0. \tag{79}\]

Therefore \(\|U_{n}\|_{\mathbb{H}_{1}}\to 0\), which is absurd, since \(\|U_{n}\|_{\mathbb{H}_{1}}=1\) for all \(n\in\mathbb{N}\). Thus, \(i\mathbb{R}\subset\rho(\mathbb{B}_{1})\). This completes the proof of this lemma. \(\Box\)

Therefore the semigroup \(S_{1}(t)=e^{t\mathbb{B}_{1}}\) is exponentially stable for \((\tau_{1},\tau_{2},\tau_{3})\in[0,1]^{3}\), and we finish the proof of Theorem 9. \(\Box\)

## 3 System 02

In this section we present results on the asymptotic behavior (exponential decay) and regularity (determination of Gevrey classes and analyticity) of the second system of this research.
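Throughout this section, \(A:=-\partial_{xx}\) with Dirichlet boundary conditions is the same operator introduced in Section 2. For concreteness, we recall its classical spectral description (a standard fact, stated here only for the reader's convenience): on \((0,l)\) the eigenvalues and normalized eigenfunctions of \(A\) are

\[\mu_{n}=\left(\frac{n\pi}{l}\right)^{2},\qquad e_{n}(x)=\sqrt{\frac{2}{l}}\sin\left(\frac{n\pi x}{l}\right),\qquad n\in\mathbb{N},\]

so that the fractional powers used below act as

\[A^{\sigma}u=\sum_{n=1}^{\infty}\mu_{n}^{\sigma}\langle u,e_{n}\rangle e_{n},\qquad\mathfrak{D}(A^{\sigma})=\Big\{u\in L^{2}(0,l)\colon\sum_{n=1}^{\infty}\mu_{n}^{2\sigma}|\langle u,e_{n}\rangle|^{2}<\infty\Big\}.\]

In particular, \(\mathfrak{D}(A^{\frac{1}{2}})=H^{1}_{0}(0,l)\) and \(\|A^{\sigma_{2}}u\|\leq\mu_{1}^{\sigma_{2}-\sigma_{1}}\|A^{\sigma_{1}}u\|\) for \(\sigma_{2}\leq\sigma_{1}\), which is the quantitative form of the continuous embeddings used repeatedly in the estimates below.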
### Well-posedness of the System 02

Now, using the operator \(A:=-\partial_{xx}\), the system (21)-(31) can be written in the following form

\[\rho_{1}\varphi_{tt}+\kappa_{1}A\varphi+\kappa_{1}\psi_{x}-\jmath(y-\varphi)+\gamma_{1}A^{\beta_{1}}\varphi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{80}\]
\[\rho_{2}\psi_{tt}+b_{1}A\psi-\kappa_{1}(\varphi_{x}-\psi)-\delta A\theta=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{81}\]
\[\rho_{3}y_{tt}+\kappa_{2}Ay+\kappa_{2}z_{x}+\jmath(y-\varphi)+\gamma_{2}A^{\beta_{2}}y_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{82}\]
\[\rho_{4}z_{tt}+b_{2}Az-\kappa_{2}(y_{x}-z)+\gamma_{3}A^{\beta_{3}}z_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{83}\]
\[\rho_{5}\theta_{t}+KA\theta+\delta A\psi_{t}=0\quad\text{in}\quad(0,l)\times(0,\infty), \tag{84}\]

with the initial conditions (29)-(31). Taking the duality product of equation (80) with \(\varphi_{t}\), (81) with \(\psi_{t}\), (82) with \(y_{t}\), (83) with \(z_{t}\) and (84) with \(\theta\), taking advantage of the self-adjointness of the powers of the operator \(A\), and using the boundary condition \(z(0,t)=z(l,t)=0\), we have \(\|z_{x}\|^{2}=\langle z_{x},z_{x}\rangle=\int_{0}^{l}z_{x}\overline{z_{x}}dx=\int Az\overline{z}dx+z_{x}\overline{z}|_{0}^{l}=\langle Az,z\rangle=\|A^{\frac{1}{2}}z\|^{2}\); similarly, \(\|\psi_{x}\|^{2}=\|A^{\frac{1}{2}}\psi\|^{2}\) and \(\|\theta_{x}\|^{2}=\|A^{\frac{1}{2}}\theta\|^{2}\). For every solution of the system (80)-(84) the total energy \(\mathfrak{E}_{2}\colon\mathbb{R}^{+}\to\mathbb{R}^{+}\) is given by

\[\mathfrak{E}_{2}(t)=\frac{1}{2}\bigg[\rho_{1}\|\varphi_{t}\|^{2}+\rho_{2}\|\psi_{t}\|^{2}+\rho_{3}\|y_{t}\|^{2}+\rho_{4}\|z_{t}\|^{2}+b_{1}\|A^{\frac{1}{2}}\psi\|^{2}+b_{2}\|A^{\frac{1}{2}}z\|^{2}\\ +\kappa_{1}\|\varphi_{x}-\psi\|^{2}+\kappa_{2}\|y_{x}-z\|^{2}+\jmath\|y-\varphi\|^{2}+\rho_{5}\|\theta\|^{2}\bigg], \tag{85}\]

and satisfies

\[\frac{d}{dt}\mathfrak{E}_{2}(t)=-\gamma_{1}\|A^{\frac{\beta_{1}}{2}}\varphi_{t}\|^{2}-\gamma_{2}\|A^{\frac{\beta_{2}}{2}}y_{t}\|^{2}-\gamma_{3}\|A^{\frac{\beta_{3}}{2}}z_{t}\|^{2}-K\|A^{\frac{1}{2}}\theta\|^{2}. \tag{86}\]

Taking \(\varphi_{t}=u\), \(\psi_{t}=v\), \(y_{t}=s\) and \(z_{t}=w\), the initial boundary value problem (80)-(84) can be reduced to the following abstract initial value problem for a first-order evolution equation

\[\frac{d}{dt}U(t)=\mathbb{B}_{2}U(t),\quad U(0)=U_{0}, \tag{87}\]

where \(U(t)=(\varphi,u,\psi,v,y,s,z,w,\theta)^{T}\), \(U_{0}=(\varphi_{0},\varphi_{1},\psi_{0},\psi_{1},y_{0},y_{1},z_{0},z_{1},\theta_{0})^{T}\) and the operator \(\mathbb{B}_{2}\colon\mathfrak{D}(\mathbb{B}_{2})\subset\mathbb{H}_{2}\to\mathbb{H}_{2}\) is given by

\[\mathbb{B}_{2}U:=\left(\begin{array}{c}u\\ -\dfrac{\kappa_{1}}{\rho_{1}}A\varphi-\dfrac{\kappa_{1}}{\rho_{1}}\psi_{x}+\dfrac{\jmath}{\rho_{1}}(y-\varphi)-\dfrac{\gamma_{1}}{\rho_{1}}A^{\beta_{1}}u\\ v\\ -\dfrac{b_{1}}{\rho_{2}}A\psi+\dfrac{\kappa_{1}}{\rho_{2}}(\varphi_{x}-\psi)+\dfrac{\delta}{\rho_{2}}A\theta\\ s\\ -\dfrac{\kappa_{2}}{\rho_{3}}Ay-\dfrac{\kappa_{2}}{\rho_{3}}z_{x}-\dfrac{\jmath}{\rho_{3}}(y-\varphi)-\dfrac{\gamma_{2}}{\rho_{3}}A^{\beta_{2}}s\\ w\\ -\dfrac{b_{2}}{\rho_{4}}Az+\dfrac{\kappa_{2}}{\rho_{4}}(y_{x}-z)-\dfrac{\gamma_{3}}{\rho_{4}}A^{\beta_{3}}w\\ -\dfrac{K}{\rho_{5}}A\theta-\dfrac{\delta}{\rho_{5}}Av\end{array}\right). \tag{88}\]
This operator will be defined in a suitable subspace of the phase space

\[\mathbb{H}_{2}:=[\mathfrak{D}(A^{\frac{1}{2}})\times\mathfrak{D}(A^{0})]^{4}\times\mathfrak{D}(A^{0}),\]

which is a Hilbert space with the inner product

\[\langle U_{1},U_{2}\rangle_{\mathbb{H}_{2}} := \rho_{1}\langle u_{1},u_{2}\rangle+\rho_{2}\langle v_{1},v_{2}\rangle+\rho_{3}\langle s_{1},s_{2}\rangle+\rho_{4}\langle w_{1},w_{2}\rangle+b_{1}\langle\psi_{1,x},\psi_{2,x}\rangle\\ +b_{2}\langle z_{1,x},z_{2,x}\rangle+\kappa_{1}\langle\varphi_{1,x}-\psi_{1},\varphi_{2,x}-\psi_{2}\rangle+\kappa_{2}\langle y_{1,x}-z_{1},y_{2,x}-z_{2}\rangle\\ +\jmath\langle y_{1}-\varphi_{1},y_{2}-\varphi_{2}\rangle+\rho_{5}\langle\theta_{1},\theta_{2}\rangle, \tag{89}\]

for \(U_{i}=(\varphi_{i},u_{i},\psi_{i},v_{i},y_{i},s_{i},z_{i},w_{i},\theta_{i})^{T}\in\mathbb{H}_{2}\), \(i=1,2\), and induced norm

\[\|U\|_{\mathbb{H}_{2}}^{2}:=\rho_{1}\|u\|^{2}+\rho_{2}\|v\|^{2}+\rho_{3}\|s\|^{2}+\rho_{4}\|w\|^{2}+b_{1}\|A^{\frac{1}{2}}\psi\|^{2}+b_{2}\|A^{\frac{1}{2}}z\|^{2}\\ +\kappa_{1}\|\varphi_{x}-\psi\|^{2}+\kappa_{2}\|y_{x}-z\|^{2}+\jmath\|y-\varphi\|^{2}+\rho_{5}\|\theta\|^{2}. \tag{90}\]

In these conditions, we define the domain of \(\mathbb{B}_{2}\) as

\[\mathfrak{D}(\mathbb{B}_{2}):=\Big\{U\in\mathbb{H}_{2}\colon(u,v,s,w)\in[\mathfrak{D}(A^{\frac{1}{2}})]^{4},\theta\in\mathfrak{D}(A)\quad\text{and}\\ (\varphi,\psi,y,z)\in(\mathfrak{D}(A)\cap\mathfrak{D}(A^{\beta_{1}}))\times\mathfrak{D}(A)\times(\mathfrak{D}(A)\cap\mathfrak{D}(A^{\beta_{2}}))\times(\mathfrak{D}(A)\cap\mathfrak{D}(A^{\beta_{3}}))\Big\}. \tag{91}\]

To show that the operator \(\mathbb{B}_{2}\) is the generator of a \(C_{0}\)-semigroup, we invoke the result from Liu-Zheng [13] stated in Theorem 2. Clearly, \(\mathfrak{D}(\mathbb{B}_{2})\) is dense in \(\mathbb{H}_{2}\), and it is easy to see that \(\mathbb{B}_{2}\) is dissipative. In fact, for each \(U=(\varphi,u,\psi,v,y,s,z,w,\theta)^{T}\in\mathfrak{D}(\mathbb{B}_{2})\) we have

\[\mathrm{Re}\langle\mathbb{B}_{2}U,U\rangle_{\mathbb{H}_{2}}=-\gamma_{1}\|A^{\frac{\beta_{1}}{2}}u\|^{2}-\gamma_{2}\|A^{\frac{\beta_{2}}{2}}s\|^{2}-\gamma_{3}\|A^{\frac{\beta_{3}}{2}}w\|^{2}-K\|A^{\frac{1}{2}}\theta\|^{2}\leq 0. \tag{92}\]

Therefore, it is enough to show that \(0\in\rho(\mathbb{B}_{2})\) (the resolvent set of \(\mathbb{B}_{2}\)), hence we must show that \((0I-\mathbb{B}_{2})^{-1}\) exists and is bounded in \(\mathbb{H}_{2}\). To do that, let us take \(F=(f^{1},f^{2},f^{3},f^{4},f^{5},f^{6},f^{7},f^{8},f^{9})^{T}\in\mathbb{H}_{2}\) and look for a unique \(U=(\varphi,u,\psi,v,y,s,z,w,\theta)^{T}\in\mathfrak{D}(\mathbb{B}_{2})\) such that

\[-\mathbb{B}_{2}U=F,\qquad\text{in}\qquad\mathbb{H}_{2}. \tag{93}\]

Equivalently, we get \(-u=f^{1},-v=f^{3},-s=f^{5},-w=f^{7}\), \(A\theta=\frac{\delta}{K}Af^{3}+\frac{\rho_{5}}{K}f^{9}\) and the following equations

\[\kappa_{1}A\varphi+\kappa_{1}\psi_{x}-\jmath(y-\varphi) = \gamma_{1}A^{\beta_{1}}f^{1}+\rho_{1}f^{2},\quad\text{in}\quad D(A^{\frac{1}{2}}) \tag{93}\]
\[b_{1}A\psi-\kappa_{1}(\varphi_{x}-\psi) = \frac{\delta^{2}}{K}Af^{3}+\rho_{2}f^{4}+\frac{\delta\rho_{5}}{K}f^{9},\quad\text{in}\quad D(A^{\frac{1}{2}}) \tag{94}\]
\[\kappa_{2}Ay+\kappa_{2}z_{x}+\jmath(y-\varphi) = \gamma_{2}A^{\beta_{2}}f^{5}+\rho_{3}f^{6},\quad\text{in}\quad D(A^{\frac{1}{2}}) \tag{95}\]
\[b_{2}Az-\kappa_{2}(y_{x}-z) = \gamma_{3}A^{\beta_{3}}f^{7}+\rho_{4}f^{8},\quad\text{in}\quad D(A^{\frac{1}{2}}).
\tag{96}\] Perform the duality product of (93)-(96) with \(\varphi^{*},\psi^{*},y^{*}\) and \(z^{*}\) respectively, and adding, and using identities \(\langle A\varphi,\varphi^{*}\rangle=\langle\varphi_{x},\varphi_{x}^{*}\rangle\), \(\langle A\psi,\psi^{*}\rangle=\langle\psi_{x},\psi_{x}^{*}\rangle\),\(\langle Az,z^{*}\rangle=\langle z_{x},z_{x}^{*}\rangle\) and \(\langle Ay,y^{*}\rangle=\langle y_{x},y_{x}^{*}\rangle\), we obtain the equivalent variational problem: \[\mathfrak{B}((\varphi,\psi,y,z),(\varphi^{*},\psi^{*},y^{*},z^{*}))=\mathfrak{ L}(\varphi^{*},\psi^{*},y^{*},z^{*}), \tag{97}\] where \(\mathfrak{B}(\cdot,\cdot)\) is the sesquilinear form in \([D(A^{\frac{1}{2}})]^{4}\), given by \[\mathfrak{B}((\varphi,\psi,y,z),(\varphi^{*},\psi^{*},y^{*},z^{*})) = \kappa_{1}\langle\varphi_{x},\varphi_{x}^{*}\rangle-\kappa_{1} \langle\psi,\varphi_{x}^{*}\rangle+b_{1}\langle\psi_{x},\psi_{x}^{*}\rangle- \kappa_{1}\langle\varphi_{x}-\psi,\psi^{*}\rangle \tag{98}\] \[+\kappa_{2}\langle y_{x},y_{x}^{*}\rangle-\kappa_{2}\langle z,y_{ x}^{*}\rangle-\kappa_{2}\langle y_{x}-z,z^{*}\rangle\] \[-\jmath\langle y-\varphi,\varphi^{*}\rangle+\jmath\langle y- \varphi,y^{*}\rangle+b_{2}\langle z_{x},z_{x}^{*}\rangle\] \[= \kappa_{1}\langle\varphi_{x}-\psi,\varphi_{x}^{*}-\psi^{*}\rangle+ \kappa_{2}\langle y_{x}-z,y_{x}^{*}-z^{*}\rangle+b_{1}\langle\psi_{x},\psi_{x} ^{*}\rangle\] \[+b_{2}\langle z_{x},z_{x}^{*}\rangle+\jmath\langle y-\varphi,y^{* }-\varphi^{*}\rangle\] and \(\mathfrak{L}(\cdot,\cdot,\cdot,\cdot)\) is a continuous linear form in \([D(A^{\frac{1}{2}})]^{4}\), given by \[\mathfrak{L}(\varphi^{*},\psi^{*},y^{*},z^{*}) = \gamma_{1}\langle A^{\beta_{1}}f^{1},\varphi^{*}\rangle+\rho_{1} \langle f^{2},\varphi^{*}\rangle+\frac{\delta^{2}}{K}\langle Af^{3},\psi^{*} \rangle+\rho_{2}\langle f^{4},\psi^{*}\rangle+\frac{\delta\rho_{5}}{K}\langle f ^{9},\psi^{*}\rangle \tag{99}\] \[+\gamma_{2}\langle A^{\beta_{2}}f^{5},y^{*}\rangle+\rho_{3} \langle f^{6},y^{*}\rangle+\gamma_{3}\langle A^{\beta_{3}}f^{7},z^{*}\rangle+ \rho_{4}\langle f^{8},z^{*}\rangle.\] Since \[\mathfrak{B}((\varphi,\psi,y,z),(\varphi,\psi,y,z))=\kappa_{1}\|\varphi_{x}- \psi\|^{2}+\kappa_{2}\|y_{x}-z\|^{2}+b_{1}\|\psi_{x}\|^{2}+b_{2}\|z_{x}\|^{2}+ j\|y-\varphi\|^{2},\] the sesquilinear form \(\mathfrak{B}(\cdot,\cdot)\) is strongly coercive on \([D(A^{\frac{1}{2}})]^{4}\), and since (99) defines a continuous linear functional of \((\varphi^{*},\psi^{*},y^{*},z^{*})\), by Lax-Milgram's Theorem, problem (97) admits a unique solution \((\varphi,\psi,y,z)\in[D(A^{\frac{1}{2}})]^{4}\). By taking test functions in the form; \((\overline{\varphi},0,0,0),(0,\overline{\psi},0,0),(0,0,\overline{y},0)\) and \((0,0,0,\overline{z})\) with \(\overline{\varphi},\overline{\psi},\overline{y},\overline{z}\in\mathcal{D}(0,l)\) (espace of test functions), it is easy to see, that \((\varphi,\psi,y,z)\) satisfies equations (93)-(96) in the distributional sense. 
This also shows that \((\varphi,\psi,y,z)\in(\mathfrak{D}(A)\cap\mathfrak{D}(A^{\beta_{1}}))\times \mathfrak{D}(A)\times(\mathfrak{D}(A)\cap\mathfrak{D}(A^{\beta_{2}}))\times( \mathfrak{D}(A)\cap\mathfrak{D}(A^{\beta_{3}}))\) for all \((\beta_{1},\beta_{2},\beta_{3})\in[0,1]^{3}\), because \[\kappa_{1}A\varphi = -\kappa_{1}\psi_{x}+\jmath(y-\varphi)+\gamma_{1}A^{\beta_{1}}f^{ 1}+\rho_{1}f^{2}, \tag{100}\] \[b_{1}A\psi = \kappa_{1}(\varphi_{x}-\psi)+\frac{\delta^{2}}{K}Af^{3}+\rho_{2}f^ {4}+\frac{\delta\rho_{5}}{K}f^{9},\] (101) \[\kappa_{2}Ay = -\kappa_{2}z_{x}-\jmath(y-\varphi)\gamma_{2}A^{\beta_{2}}f^{5}+ \rho_{3}f^{6},\] (102) \[b_{2}Az = \kappa_{2}(y_{x}-z)+\gamma_{3}A^{\beta_{3}}f^{7}+\rho_{4}f^{8}. \tag{103}\] Since \(-u=f^{1}\in D(A^{\frac{1}{2}},-v=f^{3}\in D(A^{\frac{1}{2}},-s=f^{5}\in D(A^{ \frac{1}{2}}),-w=f^{7}\in D(A^{\frac{1}{2}},\,A\theta=\frac{\delta}{K}Af^{3}+ \frac{\rho_{5}}{K}f^{9}\in D(A^{\frac{1}{2}})\) we have proved that \((\varphi,u,\psi,v,y,s,z,w,\theta)^{T}\) belongs to \(\mathfrak{D}(\mathbb{B}_{2})\) and is a solutions of \(-\mathbb{B}_{2}U=F\) and it is not difficult to prove that \(\mathbb{B}_{2}^{-1}\) is a bounded operator \((\|U\|_{\mathbb{B}_{2}}^{2}=\|\mathbb{B}_{2}^{-1}F\|_{\mathbb{B}_{2}}^{2}\leq C \|F\|_{\mathbb{B}_{2}}^{2})\). Therefore, we conclude that \(0\in\rho(\mathbb{B}_{2})\), and this finish the proof of this Theorem 2. ### Exponential Decay of System 02, for \((\beta_{1},\beta_{2},\beta_{3})\in[0,1]^{3}\) In this section, we will study the asymptotic behavior of the semigroup \(S_{2}(t)=e^{t\mathbb{B}_{2}}\) of the system (80)-(84). **Remark 11**: _Note that to show the condition (57) it is enough to show that: Let \(\delta>0\). There exists a constant \(C_{\delta}>0\) such that the solutions of the system (80)-(84) for \(|\lambda|>\delta\), satisfy the inequality_ \[\|U\|_{\mathbb{H}_{2}}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\qquad\mathrm{for} \quad 0\leq\beta_{1},\beta_{2},\beta_{3}\leq 1. \tag{104}\] In order to use Theorem 5, we will try to obtain some estimates for: \[U=(\varphi,u,\psi,v,y,s,z,w,\theta)^{T}\in\mathfrak{D}(\mathbb{B}_{2})\ \mathrm{and}\ F=(f^{1},f^{2},f^{3},f^{4},f^{5},f^{6},f^{7},f^{8},f^{9})^{T}\in \mathbb{H}_{2},\] such that \((i\lambda I-\mathbb{B}_{2})U=F\), where \(\lambda\in\mathbb{R}\). 
This system, written in components, reads

\[i\lambda\varphi-u = f^{1}\quad\text{in}\quad\mathfrak{D}(A^{\frac{1}{2}}) \tag{105}\]
\[i\lambda u+\frac{\kappa_{1}}{\rho_{1}}A\varphi+\frac{\kappa_{1}}{\rho_{1}}\psi_{x}-\frac{\jmath}{\rho_{1}}(y-\varphi)+\frac{\gamma_{1}}{\rho_{1}}A^{\beta_{1}}u = f^{2}\quad\text{in}\quad\mathfrak{D}(A^{0}) \tag{106}\]
\[i\lambda\psi-v = f^{3}\quad\text{in}\quad\mathfrak{D}(A^{\frac{1}{2}}) \tag{107}\]
\[i\lambda v+\frac{b_{1}}{\rho_{2}}A\psi-\frac{\kappa_{1}}{\rho_{2}}(\varphi_{x}-\psi)-\frac{\delta}{\rho_{2}}A\theta = f^{4}\quad\text{in}\quad\mathfrak{D}(A^{0}) \tag{108}\]
\[i\lambda y-s = f^{5}\quad\text{in}\quad\mathfrak{D}(A^{\frac{1}{2}}) \tag{109}\]
\[i\lambda s+\frac{\kappa_{2}}{\rho_{3}}Ay+\frac{\kappa_{2}}{\rho_{3}}z_{x}+\frac{\jmath}{\rho_{3}}(y-\varphi)+\frac{\gamma_{2}}{\rho_{3}}A^{\beta_{2}}s = f^{6}\quad\text{in}\quad\mathfrak{D}(A^{0}) \tag{110}\]
\[i\lambda z-w = f^{7}\quad\text{in}\quad\mathfrak{D}(A^{\frac{1}{2}}) \tag{111}\]
\[i\lambda w+\frac{b_{2}}{\rho_{4}}Az-\frac{\kappa_{2}}{\rho_{4}}(y_{x}-z)+\frac{\gamma_{3}}{\rho_{4}}A^{\beta_{3}}w = f^{8}\quad\text{in}\quad\mathfrak{D}(A^{0}) \tag{112}\]
\[i\lambda\theta+\frac{K}{\rho_{5}}A\theta+\frac{\delta}{\rho_{5}}Av = f^{9}\quad\text{in}\quad\mathfrak{D}(A^{0}). \tag{113}\]

From the dissipativity relation (92), we have the first estimate

\[|\gamma_{1}\|A^{\frac{\beta_{1}}{2}}u\|^{2}+\gamma_{2}\|A^{\frac{\beta_{2}}{2}}s\|^{2}+\gamma_{3}\|A^{\frac{\beta_{3}}{2}}w\|^{2}+K\|A^{\frac{1}{2}}\theta\|^{2}|\\ =|-\mathrm{Re}\langle\mathbb{B}_{2}U,U\rangle|=|\mathrm{Re}\{\langle i\lambda U-F,U\rangle\}|\\ \leq|\langle F,U\rangle|\leq\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}.\]

Therefore

\[\gamma_{1}\|A^{\frac{\beta_{1}}{2}}u\|^{2}+\gamma_{2}\|A^{\frac{\beta_{2}}{2}}s\|^{2}+\gamma_{3}\|A^{\frac{\beta_{3}}{2}}w\|^{2}+K\|A^{\frac{1}{2}}\theta\|^{2}\leq\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}. \tag{114}\]

Next, we show some lemmas that will lead us to the proof of the main theorem of this section.

**Lemma 12**: _For every \(\varepsilon>0\) there exists \(C_{\varepsilon}>0\), independent of \(\lambda\), such that the solutions of the system (80)-(84) for \((\beta_{1},\beta_{2},\beta_{3})\in[0,1]^{3}\) satisfy_

\[\|v\|^{2}\leq C_{\varepsilon}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}+\varepsilon\|U\|_{\mathbb{H}_{2}}^{2}. \tag{115}\]

**Proof**: Applying the duality product between (113) and \(A^{-1}v\) and using (108), we have

\[\frac{\delta}{\rho_{5}}\|v\|^{2}=\langle A^{-1}\theta,i\lambda v\rangle-\frac{K}{\rho_{5}}\langle\theta,v\rangle+\langle f^{9},A^{-1}v\rangle\]
\[=\langle A^{-1}\theta,-\frac{b_{1}}{\rho_{2}}A\psi+\frac{\kappa_{1}}{\rho_{2}}(\varphi_{x}-\psi)+\frac{\delta}{\rho_{2}}A\theta+f^{4}\rangle-\frac{K}{\rho_{5}}\langle\theta,v\rangle+\langle f^{9},A^{-1}v\rangle\]
\[=-\frac{b_{1}}{\rho_{2}}\langle\theta,\psi\rangle+\frac{\kappa_{1}}{\rho_{2}}\langle A^{-1}\theta,(\varphi_{x}-\psi)\rangle+\frac{\delta}{\rho_{2}}\|\theta\|^{2}+\langle A^{-1}\theta,f^{4}\rangle-\frac{K}{\rho_{5}}\langle\theta,v\rangle+\langle f^{9},A^{-1}v\rangle.\]

Applying the Cauchy-Schwarz inequality, we have

\[\|v\|^{2}\leq C\{\|\theta\|\|\psi\|+\|A^{-1}\theta\|\|\varphi_{x}-\psi\|+\|\theta\|^{2}+\|\theta\|\|f^{4}\|+\|f^{9}\|\|A^{-1}v\|\}.\]

Finally, applying the Young inequality, the continuous immersions \(\mathfrak{D}(A^{\frac{1}{2}})\hookrightarrow\mathfrak{D}(A^{0})\hookrightarrow\mathfrak{D}(A^{-1})\) and the estimate (114), we finish the proof of this lemma. \(\Box\)

**Lemma 13**: _Let \(\delta>0\)._
_There exists \(C_{\delta}>0\) such that the solutions of the system (80)-(84) for \(|\lambda|>\delta\) and \((\beta_{1},\beta_{2},\beta_{3})\in[0,1]^{3}\) satisfy_

\[(i)\quad|\lambda|\|y-\varphi\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}, \tag{116}\]
\[(ii)\quad\kappa_{1}\|\varphi_{x}-\psi\|^{2}+b_{1}\|A^{\frac{1}{2}}\psi\|^{2}\leq\varepsilon\|U\|_{\mathbb{H}_{2}}^{2}+C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}, \tag{117}\]
\[(iii)\quad\kappa_{2}\|y_{x}-z\|^{2}+b_{2}\|A^{\frac{1}{2}}z\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}. \tag{118}\]

**Proof**: We omit the proof of this lemma because it is completely similar to the proof of Lemma 8 for System 01. \(\Box\)

**Theorem 14**: _The semigroup \(S_{2}(t)=e^{t\mathbb{B}_{2}}\) is exponentially stable as long as the parameters \((\beta_{1},\beta_{2},\beta_{3})\in[0,1]^{3}\)._

**Proof**: Let us first check condition (104), which implies (57). Using Lemmas 12 and 13 and then applying the estimate (114), we arrive at

\[\|U\|_{\mathbb{H}_{2}}^{2}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad\text{for}\quad 0\leq\beta_{1},\beta_{2},\beta_{3}\leq 1. \tag{119}\]

Therefore condition (57) of Theorem 5 is verified for \((\beta_{1},\beta_{2},\beta_{3})\in[0,1]^{3}\). Next, we state a lemma concerning condition (56) of Theorem 5; its proof will be omitted, as it is completely similar to the one given for the first system.

**Lemma 15**: _Let \(\varrho(\mathbb{B}_{2})\) be the resolvent set of the operator \(\mathbb{B}_{2}\). Then_

\[i\mathbb{R}\subset\varrho(\mathbb{B}_{2}). \tag{120}\]

**Proof**: The proof is similar to the proof of Lemma 10. \(\Box\)

Therefore, the semigroup \(S_{2}(t)=e^{t\mathbb{B}_{2}}\) is exponentially stable for \((\beta_{1},\beta_{2},\beta_{3})\in[0,1]^{3}\), and we finish the proof of Theorem 14. \(\Box\)

**Theorem 16** (Lions' Interpolation): _Let \(\alpha<\beta<\gamma\). Then there exists a constant \(L=L(\alpha,\beta,\gamma)\) such that_

\[\|A^{\beta}u\|\leq L\|A^{\alpha}u\|^{\frac{\gamma-\beta}{\gamma-\alpha}}\cdot\|A^{\gamma}u\|^{\frac{\beta-\alpha}{\gamma-\alpha}} \tag{121}\]

_for every \(u\in\mathfrak{D}(A^{\gamma})\)._

**Proof**: See Theorem 5.34 [6]. \(\Box\)

### Regularity of the semigroup \(S_{2}(t)=e^{t\mathbb{B}_{2}}\)

In this subsection we show that the semigroup \(S_{2}(t)\) is analytic for \((\beta_{1},\beta_{2},\beta_{3})\in[\frac{1}{2},1]^{3}\) and we determine Gevrey classes for \((\beta_{1},\beta_{2},\beta_{3})\in(0,1)^{3}\). Before that, we show some preliminary lemmas.

#### 3.3.1 Analyticity: System 02

The following theorem characterizes the analyticity of \(S_{2}(t)\), see [13]:

**Theorem 17** (see [13]): _Let \(S_{2}(t)=e^{\mathbb{B}_{2}t}\) be a \(C_{0}\)-semigroup of contractions on a Hilbert space._
Suppose that_ \[\rho(\mathbb{B}_{2})\supseteq\{i\lambda;\;\lambda\in\mathbb{R}\}\equiv i \mathbb{R}\] _Then \(S_{2}(t)\) is analytic if and only if_ \[\limsup_{|\lambda|\rightarrow\infty}\|\lambda(i\lambda I-\mathbb{B}_{2})^{-1} \|_{\mathcal{L}(\mathbb{H}_{2})}<\infty \tag{122}\] _holds._ **Remark 18**: To show the (122) condition, it suffices to show that, given \(\delta>0\) there exists a constant \(C_{\delta}>0\) such that the solutions of (105)-(113), for \(|\lambda|>\delta\) satisfy the inequality \[\|\lambda(i\lambda I-\mathbb{B}_{2})^{-1}F\|_{\mathbb{H}_{2}}^{2}\leq C_{ \delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\qquad\Longleftrightarrow \qquad|\lambda|\|U\|_{\mathbb{H}_{2}}^{2}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}} \|U\|_{\mathbb{H}_{2}}. \tag{123}\] **Lemma 19**: _Let \(\varepsilon>0\). There exists \(C_{\varepsilon}>0\) such that the solutions of the system (80)-(84), satisfy_ \[\|A^{\frac{1}{2}}v\|^{2}\leq C_{\varepsilon}\|F\|_{\mathbb{H}_{2}}\|U\|_{ \mathbb{H}_{2}}. \tag{124}\] **Proof**: Performing the duality product of (113) for \(v\) and using (108), we obtain \[\frac{\delta}{\rho_{5}}\|A^{\frac{1}{2}}v\|^{2} = \langle\theta,i\lambda v\rangle-\frac{K}{\rho_{5}}\langle A^{ \frac{1}{2}}\theta,A^{\frac{1}{2}}v\rangle+\langle f^{9},v\rangle\] \[= -\frac{b_{1}}{\rho_{2}}\langle A^{\frac{1}{2}}\theta,A^{\frac{1} {2}}\psi\rangle+\frac{\kappa_{1}}{\rho_{2}}\langle\theta,\varphi_{x}-\psi \rangle+\frac{\delta}{\rho_{2}}\|A^{\frac{1}{2}}\theta\|^{2}+\langle\theta,f^ {4}\rangle\] \[-\frac{K}{\rho_{5}}\langle A^{\frac{1}{2}}\theta,A^{\frac{1}{2}}v \rangle+\langle f^{9},v\rangle,\] using estimative (114) and applying Cauchy-Schwarz and Young inequalities, for \(\varepsilon>0\), exists \(C_{\varepsilon}>0\) independent of \(\lambda\), such that \[\|A^{\frac{1}{2}}v\|^{2}\leq C_{\varepsilon}\|F\|_{\mathbb{H}_{2}}\|U\|_{ \mathbb{H}_{2}}+\varepsilon\|A^{\frac{1}{2}}v\|^{2}.\] \(\Box\) **Lemma 20**: _Let \(\delta>0\). There exists \(C_{\delta}>0\) such that the solutions of the system (80)-(84) for \(|\lambda|>\delta\), satisfy_ \[(i) |\lambda|\|\theta\|^{2} \leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad \mathrm{for}\quad 0\leq\beta_{1},\beta_{2},\beta_{3}\leq 1, \tag{125}\] \[(ii) |\lambda|\|u\|^{2} \leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad \mathrm{for}\quad\frac{1}{2}\leq\beta_{1}\leq 1,\] (126) \[(iii) |\lambda|\|s\|^{2} \leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad \mathrm{for}\quad\frac{1}{2}\leq\beta_{2}\leq 1,\] (127) \[(iv) |\lambda|\|\varphi_{x}-\psi\|^{2} \leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad \mathrm{for}\quad\frac{1}{2}\leq\beta_{1}\leq 1,\] (128) \[(v) |\lambda|\|y_{x}-z\|^{2} \leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad \mathrm{for}\quad\frac{1}{2}\leq\beta_{2}\leq 1. \tag{129}\] **Proof**: \((i)\) Taking the duality product between (113) and \(\theta\), taking advantage of the self-adjointness of the powers of the operator \(A\), we arrive at: \[i\lambda\|\theta\|^{2} = -\frac{K}{\rho_{5}}\|A^{\frac{1}{2}}\theta\|^{2}-\frac{\delta}{ \rho_{5}}\langle A^{\frac{1}{2}}v,A^{\frac{1}{2}}\theta\rangle+\langle f^{9}, \theta\rangle.\] Finally, taking imaginary part, applying Young inequality, estimates (68) and (124) of Lemma 19, finish to proof this item. 
**Proof:**\((ii)\) Taking the duality product between (106) and \(\lambda A^{-\beta_{1}}u\), using (105) and (107), we arrive at: \[\frac{\gamma_{1}}{\rho_{1}}\lambda\|u\|^{2} = -i|\lambda|^{2}\|A^{-\frac{\beta_{1}}{2}}u\|^{2}-\frac{\kappa_{1} }{\rho_{1}}\langle\lambda\varphi,A^{1-\beta_{1}}u\rangle-\frac{\kappa_{1}}{ \rho_{1}}\langle\lambda\psi_{x},A^{-\beta_{1}}u\rangle\] \[+\frac{\jmath}{\rho_{1}}\langle\frac{\lambda}{\sqrt{|\lambda|}}(y- \varphi),\sqrt{|\lambda|}A^{-\beta_{1}}u\rangle+\langle f^{2},\lambda A^{- \beta_{1}}u\rangle\] \[= -i|\lambda|^{2}\|A^{-\frac{\beta_{1}}{2}}u\|^{2}+\frac{i\kappa_{1 }}{\rho_{1}}\|A^{\frac{1-\beta_{2}}{2}}u\|^{2}+\frac{i\kappa_{1}}{\rho_{1}} \langle A^{\frac{1}{2}}f^{1},A^{\frac{1}{2}-\beta_{1}}u\rangle-\frac{i\kappa_ {1}}{\rho_{1}}\langle v,A^{-\beta_{1}}u_{x}\rangle\] \[+\frac{i\kappa_{1}}{\rho_{1}}\langle f_{x}^{3},A^{-\beta_{1}}u \rangle+\frac{\jmath}{\rho_{1}}\langle\frac{\lambda}{\sqrt{|\lambda|}}(y- \varphi),\sqrt{|\lambda|}A^{-\beta_{1}}u\rangle-\frac{i\kappa_{1}}{\rho_{1}} \langle f^{2},A^{1-\beta_{1}}\varphi\rangle\] \[-\frac{i\kappa_{1}}{\rho_{1}}\langle f^{2},A^{-\beta_{1}}\psi_{ x}\rangle+\frac{ij}{\rho_{1}}\langle f^{2},A^{-\beta_{1}}(y-\varphi)\rangle- \frac{i\gamma_{1}}{\rho_{1}}\langle f^{2},u\rangle+i\|A^{-\frac{\beta_{1}}{2} }f^{2}\|^{2}.\] Taking real part and applying Cauchy-Schwarz and Young inequalities, for \(\varepsilon>0\), exists \(C_{\varepsilon}>0\), such that \[|\lambda|\|u\|^{2} \leq C\{\|A^{\frac{1}{2}}f^{1}\|\|A^{\frac{1}{2}-\beta_{1}}u\|+\|v\| \|A^{-\beta_{1}}u_{x}\|+\|f_{x}^{3}\|\|A^{-\beta_{1}}u\|\}+C_{\varepsilon}| \lambda|\|y-\varphi\|^{2}\] \[+\varepsilon|\lambda|\|A^{-\beta_{1}}u\|^{2}+C\{\|f^{2}\|\|A^{1- \beta_{1}}\varphi\|+\|f^{2}\|\|A^{-\beta_{1}}\psi_{x}\|\] \[+\|f^{2}\|\|A^{-\beta_{1}}(y-\varphi)\|+\|f^{2}\|\|u\|\},\] as from \(\frac{1}{2}\leq\beta_{1}\leq 1\), we have \(-\beta_{1}\leq 0\), \(1-\beta_{1}\leq\frac{1}{2}\), \(\frac{1}{2}-\beta_{1}\leq 0\) and \(-\frac{1}{2}\leq\frac{1-2\beta_{1}}{2}\leq 0\), then \(\mathfrak{D}(A^{\frac{1}{2}})\hookrightarrow\mathfrak{D}(A^{1-\beta_{1}})\) and \(\mathfrak{D}(A^{0})\hookrightarrow\mathfrak{D}(A^{\frac{1}{2}-\beta_{1}})\), furthermore, from the estimative (115) of Lemma 12 and \(\|A^{-\beta_{1}}u_{x}\|=\|A^{\frac{1-2\beta_{1}}{2}}u\|\), we finish to proof this item. 
**Proof:**\((iii)\) Taking the duality product between (110) and \(\lambda A^{-\beta_{2}}s\), using (109) and (110), we arrive at: \[\frac{\gamma_{2}}{\rho_{3}}\lambda\|s\|^{2} = -i|\lambda|^{2}\|A^{-\frac{\beta_{2}}{2}}s\|^{2}-\frac{\kappa_{2} }{\rho_{3}}\langle\lambda y,A^{1-\beta_{2}}s\rangle-\frac{\kappa_{2}}{\rho_{3} }\langle\lambda z_{x},A^{-\beta_{2}}s\rangle\] \[-\frac{\jmath}{\rho_{3}}\langle\frac{\lambda}{\sqrt{|\lambda|}}(y -\varphi),\sqrt{|\lambda|}A^{-\beta_{2}}s\rangle+\langle f^{6},\lambda A^{- \beta_{2}}s\rangle\] \[= -i|\lambda|^{2}\|A^{-\frac{\beta_{2}}{2}}s\|^{2}+\frac{i\kappa_{2 }}{\rho_{3}}\|A^{\frac{1-\beta_{2}}{2}}s\|^{2}+\frac{i\kappa_{2}}{\rho_{3}} \langle A^{\frac{1}{2}}f^{5},A^{\frac{1}{2}-\beta_{2}}s\rangle-\frac{i\kappa_ {2}}{\rho_{3}}\langle w,A^{-\beta_{2}}s_{x}\rangle\] \[+\frac{i\kappa_{2}}{\rho_{3}}\langle f_{x}^{7},A^{-\beta_{1}}s \rangle+\frac{\jmath}{\rho_{1}}\langle\frac{\lambda}{\sqrt{|\lambda|}}(y- \varphi),\sqrt{|\lambda|}A^{-\beta_{2}}s\rangle-\frac{i\kappa_{2}}{\rho_{3}} \langle f^{6},A^{1-\beta_{2}}y\rangle\] \[-\frac{i\kappa_{2}}{\rho_{3}}\langle f^{6},A^{-\beta_{2}}z_{x} \rangle-\frac{ij}{\rho_{3}}\langle f^{6},A^{-\beta_{2}}(y-\varphi)\rangle-\frac {i\gamma_{2}}{\rho_{3}}\langle f^{6},s\rangle+i\|A^{-\frac{\beta_{2}}{2}}f^{6} \|^{2}.\] Taking, real part, and applying Cauchy-Schwarz and Young inequalities, for \(\varepsilon>0\), exists \(C_{\varepsilon}>0\), such that \[|\lambda|\|s\|^{2} \leq C\{\|A^{\frac{1}{2}}f^{5}\|\|A^{\frac{1}{2}-\beta_{2}}s\|+\|w\| \|A^{-\beta_{2}}s_{x}\|+\|f_{x}^{7}\|\|A^{-\beta_{2}}s\|\}+C_{\varepsilon}| \lambda|\|y-\varphi\|^{2}\] \[+\varepsilon|\lambda|\|A^{-\beta_{2}}s\|^{2}+C\{\|f^{6}\|\|A^{1- \beta_{2}}y\|+\|f^{6}\|\|A^{-\beta_{2}}z_{x}\|\] \[+\|f^{6}\|\|A^{-\beta_{2}}(y-\varphi)\|+\|f^{6}\|\|s\|\},\] as from \(\frac{1}{2}\leq\beta_{2}\leq 1\), we have \(-\beta_{2}\leq 0\), \(1-\beta_{2}\leq\frac{1}{2}\), \(\frac{1}{2}-\beta_{2}\leq 0\) and \(-\frac{1}{2}\leq\frac{1-2\beta_{2}}{2}\leq 0\), then \(\mathfrak{D}(A^{\frac{1}{2}})\hookrightarrow\mathfrak{D}(A^{1-\beta_{2}})\) and \(\mathfrak{D}(A^{0})\hookrightarrow\mathfrak{D}(A^{\frac{1}{2}-\beta_{2}})\), furthermore, from the estimative (115) of Lemma 12 and \(\|A^{-\beta_{2}}z_{x}\|=\|A^{\frac{1-2\beta_{2}}{2}}z\|\), we finish to proof this item. 
**Proof:**\((iv)\) From (105), we have \(i\lambda\varphi_{x}-u_{x}=f_{x}^{1}\), subtracting from this result the equation (107), we have \[i\lambda(\varphi_{x}-\psi)-(u_{x}-v)=f_{x}^{1}-f^{3}, \tag{130}\] taking the duality product between (130) and \(\varphi_{x}-\psi\) and using (106), we arrive at: \[i\lambda\|\varphi_{x}-\psi\|^{2} = \langle(u_{x}-v),\varphi_{x}-\psi\rangle+\langle f_{x}^{1},(\varphi _{x}-\psi)\rangle-\langle f^{3},(\varphi_{x}-\psi)\rangle\] \[= -\langle u,(\varphi_{x}-\psi)_{x}\rangle-\langle v,\varphi_{x}- \psi\rangle+\langle f_{x}^{1},(\varphi_{x}-\psi)\rangle-\langle f^{3},(\varphi _{x}-\psi)\rangle\] \[= \frac{i\rho_{1}}{\kappa_{1}}\lambda\|u\|^{2}-\frac{J}{\kappa_{1} }\langle u,(y-\varphi)\rangle-\frac{\gamma_{1}}{\kappa_{1}}\|A^{\frac{\beta_{ 1}}{2}}u\|^{2}+\frac{\rho_{1}}{\kappa_{1}}\langle u,f^{2}\rangle\] \[-\langle v,(\varphi_{x}-\psi)\rangle+\langle f_{x}^{1},(\varphi _{x}-\psi)\rangle-\langle f^{3},(\varphi_{x}-\psi)\rangle,\] taking imaginary part and applying Cauchy-Schwarz and Young inequalities, and using (119), we have \[|\lambda|\|\varphi_{x}-\psi\|^{2}\leq C\{|\lambda|\|u\|^{2}+\|F\|_{\mathbb{H}_ {2}}\|U\|_{\mathbb{H}_{2}}\}\quad\mbox{for}\quad 0\leq\beta_{1},\beta_{2}, \beta_{3}\leq 1. \tag{131}\] Using (126) (item \((ii)\) this lemma) we finish proof of this item. **Proof:**\((v)\) On the other hand, similarly from (109), we have \(i\lambda y_{x}-s_{x}=f_{x}^{5}\), subtracting from this result the equation (111), we have \[i\lambda(y_{x}-z)-(s_{x}-w)=f_{x}^{5}-f^{7}. \tag{132}\] Taking the duality product between (132) and \(y_{x}-z\) and using (108), we arrive at: \[i\lambda\|y_{x}-z\|^{2} = \langle(s_{x}-w),y_{x}-z\rangle+\langle f_{x}^{5},y_{x}-z\rangle- \langle f^{7},y_{x}-z\rangle\] \[= -\langle s,(y_{x}-z)_{x}\rangle-\langle w,(y_{x}-z)\rangle+ \langle f_{x}^{5},y_{x}\rangle-\langle f_{x}^{5},z\rangle\] \[-\langle f^{7},y_{x}\rangle+\langle f^{7},z\rangle\] \[= \frac{i\rho_{3}\lambda}{\kappa_{2}}\|s\|^{2}-\frac{J}{\kappa_{2} }\langle s,(y-\varphi)\rangle-\frac{\gamma_{2}}{\kappa_{2}}\|A^{\frac{\gamma _{2}}{2}}s\|^{2}+\frac{\rho_{3}}{\kappa_{2}}\langle s,f^{6}\rangle\] \[-\langle w,(y_{x}-z)\rangle+\langle f_{x}^{5},y_{x}\rangle- \langle f_{x}^{5},z\rangle-\langle f^{7},y_{x}\rangle+\langle f^{7},z\rangle.\] Taking imaginary part and applying Cauchy-Schwarz and Young inequalities, we have \[|\lambda|\|y_{x}-z\|^{2}\leq C\{|\lambda|\|s\|^{2}+\|F\|_{\mathbb{H}}\|U\|_{ \mathbb{H}}\}\quad\mbox{for}\quad 0\leq\beta_{1},\beta_{2}\beta_{3}\leq 1. \tag{133}\] Using (127) (item \((iii)\) this lemma) we finish proof of this lemma. \(\Box\) **Lemma 21**: _Let \(\delta>0\). There exists \(C_{\delta}>0\) such that the solutions of the system (80)-(84) for \(|\lambda|>\delta\), satisfy_ \[|\lambda|\|v\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}} \quad\mbox{for}\quad 0\leq\beta_{1},\beta_{2},\beta_{3}\leq 1. 
\tag{134}\] **Proof**: Performing the duality product between equation (113) and \(\lambda A^{-1}v\), and using (107) and (108), we obtain \[\frac{\delta}{\rho_{5}}\lambda\|v\|^{2} = \langle\lambda A^{-1}\theta,i\lambda v\rangle-\frac{K}{\rho_{5}} \langle\theta,\lambda v\rangle+\langle f^{9},\lambda A^{-1}v\rangle\] \[= -i\frac{b_{1}}{\rho_{2}}\langle\theta,v\rangle-i\frac{b_{1}}{\rho_ {2}}\langle\theta,f^{3}\rangle+\frac{i\kappa_{1}K}{\rho_{2}\rho_{5}}\langle \theta,\varphi_{x}-\psi\rangle+\frac{i\kappa_{1}\delta}{\rho_{2}\rho_{5}} \langle v,\varphi_{x}-\psi\rangle\] \[-\frac{i\kappa_{1}}{\rho_{2}}\langle A^{-1}f^{9},\varphi_{x}-\psi \rangle+\frac{\delta}{\rho_{2}}\lambda\|\theta\|^{2}+\frac{iK}{\rho_{5}}\langle \theta,f^{4}\rangle+\frac{i\delta}{\rho_{5}}\langle v,f^{4}\rangle\] \[-i\langle A^{-1}f^{9},f^{4}\rangle-\frac{K}{\rho_{5}}\langle\sqrt{| \lambda|}\theta,\frac{\lambda}{\sqrt{|\lambda|}}v\rangle-\frac{ib_{1}}{\rho_ {2}}\langle f^{9},\psi\rangle\] \[+\frac{i\kappa_{1}}{\rho_{2}}\langle A^{-1}f^{9},\varphi_{x}- \psi\rangle+\frac{i\delta}{\rho_{2}}\langle f^{9},\theta\rangle+i\langle A^{-1 }f^{9},f^{4}\rangle.\] Applying Cauchy-Schwarz and Young inequalities, for \(\varepsilon>0\), exists \(C_{\varepsilon}>0\), such that \[|\lambda|\|v\|^{2} \leq C\{\|\theta\|\|v\|+\|\theta\|f^{3}\|+\|\theta\|\|\varphi_{x}-\psi \|+\|v\|\|\varphi_{x}-\psi\|+|\lambda|\|\theta\|^{2}\] \[+\|\theta\|\|f^{4}\|+\|v\|\|f^{4}\|+\|f^{9}\|\|\|\psi\|+\|f^{9}\| \|\theta\|\}+C_{\varepsilon}|\lambda|\|\theta\|^{2}+\varepsilon|\lambda|\|v \|^{2}.\] Finally, from estimates (104), (125), finish proof this lemma. \(\Box\) **Lemma 22**: _Let \(\delta>0\). There exists \(C_{\delta}>0\) such that the solutions of the system (80)-(84) for \(|\lambda|>\delta\), satisfy_ \[(i)\;|\lambda|||A^{\frac{1}{2}}\psi\|^{2} \leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad\mathrm{ for}\quad 0\leq\beta_{1},\beta_{2},\beta_{3}\leq 1, \tag{135}\] \[(ii)\;|\lambda|||w\|^{2} \leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad \mathrm{for}\quad\frac{1}{2}\leq\beta_{2},\beta_{3}\leq 1,\] (136) \[(iii)\quad|\lambda|||A^{\frac{1}{2}}z\|^{2} \leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad \mathrm{for}\quad\frac{1}{2}\leq\beta_{2},\beta_{3}\leq 1. 
\tag{137}\] **Proof**: \((i)\) Performing the duality product between equation (108) and \(\frac{ip_{2}}{\kappa_{1}}\lambda\psi\), we have \[\frac{ib_{1}}{\kappa_{1}}\lambda\|A^{\frac{1}{2}}\psi\|^{2}=i \langle\sqrt{|\lambda|}(\varphi_{x}-\psi),\frac{\lambda}{\sqrt{|\lambda|}} \psi\rangle+\frac{i\rho_{2}}{\kappa_{1}}\lambda\|v\|^{2}+\frac{\rho_{2}}{ \kappa_{1}}\langle i\lambda v,f^{3}\rangle-\frac{\delta}{\kappa_{1}}\langle A^ {\frac{1}{2}}\theta,A^{\frac{1}{2}}v\rangle\] \[-\frac{\delta}{\kappa_{1}}\langle A^{\frac{1}{2}}\theta,A^{\frac{ 1}{2}}f^{3}\rangle-\frac{\rho_{2}}{\kappa_{1}}\langle f^{4},v\rangle-\frac{ \rho_{2}}{\kappa_{1}}\langle f^{4},f^{3}\rangle \tag{138}\] as, of (108), we have \[\frac{\rho_{2}}{\kappa_{1}}\langle i\lambda v,f^{3}\rangle=-\frac{b_{1}}{ \kappa_{1}}\langle A^{\frac{1}{2}}\psi,A^{\frac{1}{2}}f^{3}\rangle+\langle \varphi_{x}-\psi,f^{3}\rangle+\frac{\delta}{\kappa_{1}}\langle A^{\frac{1}{2} }\theta,A^{\frac{1}{2}}f^{3}\rangle+\frac{\rho_{2}}{\kappa_{1}}\langle f^{4},f^{3}\rangle, \tag{139}\] using (139) in (138), we have \[\frac{ib_{1}}{\kappa_{1}}\lambda\|A^{\frac{1}{2}}\psi\|^{2}= \frac{i\rho_{2}}{\kappa_{1}}\lambda\|v\|^{2}+i\langle\sqrt{|\lambda|}(\varphi _{x}-\psi),\frac{\lambda}{\sqrt{|\lambda|}}\psi\rangle-\frac{\delta}{\kappa_{ 1}}\langle A^{\frac{1}{2}}\theta,A^{\frac{1}{2}}v\rangle-\frac{\rho_{2}}{\kappa _{1}}\langle f^{4},v\rangle\] \[-\frac{b_{1}}{\kappa_{1}}\langle A^{\frac{1}{2}}\psi,A^{\frac{1}{ 2}}f^{3}\rangle+\langle\varphi_{x}-\psi,f^{3}\rangle. \tag{140}\] Applying Cauchy-Schwarz and Young inequalities, for \(\varepsilon>0\), exists \(C_{\varepsilon}>0\) independent of \(\lambda\), such that \[|\lambda|||A^{\frac{1}{2}}\psi\|^{2}\leq C_{\varepsilon}|\lambda||| \varphi_{x}-\psi\|^{2}+\varepsilon|\lambda|||\psi\|^{2}+C\{\|A^{\frac{1}{2}} \theta\|^{2}+\|A^{\frac{1}{2}}v\|+|\lambda|||v\|^{2}+\|f^{4}|||v\|\] \[+\|A^{\frac{1}{2}}\psi\|\|A^{\frac{1}{2}}f^{3}\|+\|\varphi_{x}\| \|f^{3}\|+\|\psi\|\|f^{3}\|\},\] finally, from \(\mathfrak{D}(A^{0})\hookrightarrow\mathfrak{D}(A^{\frac{1}{2}})\), estimates (104), (114), (124) Lemma 19 and (134) Lemma 21, we finish to proof this item. **Proof**: \((ii)\) Performing the duality product between equation (112) and \(\lambda A^{-\beta_{3}}w\), and using (111), we obtain \[\frac{\gamma_{3}}{\rho_{4}}\lambda\|w\|^{2} = -i|\lambda|^{2}\|A^{-\frac{\beta_{3}}{2}}w\|^{2}-\frac{b_{2}}{ \rho_{4}}\langle\lambda z,A^{1-\beta_{3}}w\rangle+\frac{\kappa_{2}}{\rho_{4}} \langle\frac{\lambda}{\sqrt{|\lambda|}}(y_{x}-z),\sqrt{|\lambda|}A^{-\beta_{3} }w\rangle \tag{141}\] \[+\langle f^{8},\lambda A^{-\beta_{3}}w\rangle\] \[= -i|\lambda|^{2}\|A^{-\frac{\beta_{3}}{2}}w\|^{2}+\frac{ib_{2}}{ \rho_{4}}\|A^{\frac{1-\beta_{3}}{2}}w\|^{2}+\frac{ib_{2}}{\rho_{4}}\langle A^ {\frac{1}{2}}f^{7},A^{\frac{1}{2}-\beta_{3}}w\rangle\] \[+\frac{\kappa_{2}}{\rho_{4}}\langle\frac{\lambda}{\sqrt{|\lambda |}}(y_{x}-z),\sqrt{|\lambda|}A^{-\beta_{3}}w\rangle-\frac{ib_{2}}{\rho_{4}} \langle f^{8},A^{1-\beta_{3}}z\rangle\] \[+\frac{i\kappa_{2}}{\rho_{4}}\langle f^{8},A^{-\beta_{3}}(y_{x} -z)\rangle-\frac{i\dot{\gamma}_{3}}{\rho_{4}}\langle f^{8},w\rangle+i\|f^{8} \|^{2}. 
\tag{142}\] Taking, real part, and applying Cauchy-Schwarz and Young inequalities, for \(\varepsilon>0\), exists \(C_{\varepsilon}>0\), such that \[|\lambda|||w\|^{2} \leq C_{\varepsilon}|\lambda|||y_{x}-z\|^{2}+\varepsilon|\lambda|||A^{- \beta_{3}}w\|^{2}+C\{\|A^{\frac{1}{2}}f^{7}\|\|A^{\frac{1}{2}-\beta_{3}}w\| \tag{143}\] \[+\|f^{8}\|\|A^{1-\beta_{3}}z\|+\|f^{8}\|\|A^{-\beta_{3}}(y_{x}-z )\|+\|f^{8}\|\|w\|.\] as form \(\frac{1}{2}\leq\beta_{3}\leq 1\), we have: \(\frac{1}{2}-\beta_{3}\leq 0\) and \(1-\beta_{3}\leq\frac{1}{2}\), then \(\mathfrak{D}(A^{0})\hookrightarrow\mathfrak{D}(A^{\frac{1}{2}-\beta_{3}})\) and \(\mathfrak{D}(A^{\frac{1}{2}})\hookrightarrow\mathfrak{D}(A^{1-\beta_{3}})\). Finally applying estimative (129) of Lemma 20, finish proof this item. **Proof:**\((iii)\) Performing the duality product between equation (112) and \(w\) and using (111), we have \[i\lambda\|w\|^{2}-i\frac{b_{2}}{\rho_{4}}\lambda\|A^{\frac{1}{2}}z\|^{2}-\frac{b_ {2}}{\rho_{4}}\langle A^{\frac{1}{2}}z,A^{\frac{1}{2}}f^{7}\rangle-\frac{\kappa _{2}}{\rho_{4}}\langle y_{x}-z,w\rangle+\frac{\gamma_{3}}{\rho_{4}}\|A^{\frac{ \beta_{3}}{2}}w\|^{2}=\langle f^{8},w\rangle,\] Taking imaginary part, and applying Cauchy-Schwarz and Young inequalities, we obtain \[|\lambda|\|A^{\frac{1}{2}}z\|^{2}\leq C\{|\lambda|\|w\|^{2}+\|A^{ \frac{1}{2}}z|\|A^{\frac{1}{2}}f^{7}\|+\|y_{x}-z\|^{2}+\|w\|^{2}+\|f^{8}\|\|w\|\} \\ \leq C_{\delta}\{\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}+| \lambda|\|w\|^{2}\}\quad\mbox{for}\quad 0\leq\beta_{1},\beta_{2},\beta_{3}\leq 1. \tag{143}\] Finally, applying of item (ii) this Lemma and (104), finish to proof this item. \(\Box\) **Theorem 23**: _The semigroup \(S_{2}(t)=e^{t\mathbb{B}_{2}}\) is analytic for \((\beta_{1},\beta_{2},\beta_{3})\in[\frac{1}{2},1]^{3}.\)_ **Proof**: From Lemma 15, (15) is verified. Let \(\delta>0\), there exists a constant \(C_{\delta}>0\) such that the solutions of the system (29)-(84) for \(|\lambda|>\delta\), satisfy the inequality \[|\lambda|\|U\|_{\mathbb{H}_{2}}^{2}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_ {\mathbb{H}_{2}}. \tag{144}\] Finally, considering \((\beta_{1},\beta_{2},\beta_{3})\in[\frac{1}{2},1]^{3}\) and using (116) (item \((i)\) the Lemmas 13), and Lemmas: 20, 21 and 22, we finish the proof of this theorem. \(\Box\) #### 3.3.2 Determination of Gevrey Classes: System 02 Before exposing our results, it is useful to recall the next definition and result presented in [2] (adapted from [28], Theorem 4, p. 153]). **Definition 24**: _Let \(t_{0}\geq 0\) be a real number. A strongly continuous semigroup \(S(t)\), defined on a Banach space \(\mathbb{H}\), is of Gevrey class \(s>1\) for \(t>t_{0}\), if \(S(t)\) is infinitely differentiable for \(t>t_{0}\), and for every compact set \(K\subset(t_{0},\infty)\) and each \(\mu>0\), there exists a constant \(C=C(\mu,K)>0\) such that_ \[||S^{(n)}(t)||_{\mathcal{L}(\mathbb{H})}\leq C\mu^{n}(n!)^{s},\mbox{ for all }\quad t\in K,n=0,1,2... \tag{145}\] **Theorem 25** ([28]): _Let \(S(t)\) be a strongly continuous and bounded semigroup on a Hilbert space \(\mathbb{H}\). Suppose that the infinitesimal generator \(\mathbb{B}\) of the semigroup \(S(t)\) satisfies the following estimate, for some \(0<\Psi<1\):_ \[\lim_{|\lambda|\to\infty}\sup|\lambda|^{\Psi}||(i\lambda I-\mathbb{B})^{-1}||_{ \mathcal{L}(\mathbb{H})}<\infty. 
\tag{146}\] _Then \(S(t)\) is of Gevrey class \(s\) for \(t>0\), for every \(s>\dfrac{1}{\Psi}\)._ Our main result in this subsection is as follows: **Theorem 26**: _Let \(S_{2}(t)=e^{t\mathbb{B}_{2}}\) strongly continuos-semigroups of contractions on the Hilbert space \(\mathbb{H}_{2}\), the semigroups \(S_{2}(t)\) is of Gevrey class \(s\), for every \(s>\frac{1+\phi}{2\phi}\), such that, we have the resolvent estimative:_ \[\limsup_{|\lambda|\to\infty}|\lambda|^{\frac{2\phi}{1+\phi}}||(i\lambda I- \mathbb{B}_{2})^{-1}||_{\mathcal{L}(\mathbb{H}_{2})}<\infty, \tag{147}\] _where,_ \[\phi:=\min_{(\beta_{1},\beta_{2},\beta_{3})\in(0,1)^{3}}\{\beta_{1},\beta_{2 },\beta_{3}\}. \tag{148}\] **Proof**: Notice that, for \(\phi\) defined in (148), we have \(0<(2\phi)/(\phi+1)<1\). Next we will estimate: \(|\lambda|^{\frac{2\beta_{1}}{1+\beta_{1}}}\|u\|^{2},\quad|\lambda|^{\frac{2 \beta_{2}}{1+\beta_{2}}}\|s\|^{2}\) and \(|\lambda|^{\frac{2\beta_{3}}{1+\beta_{3}}}\|w\|^{2}\). **Let's start by estimating the term \(|\lambda|^{\frac{2\beta_{1}}{1+\beta_{1}}}\|u\|\):** It is assume that \(|\lambda|>1\), some ideas could be borrowed from [12]. Set \(u=u_{1}+u_{2}\), where \(u_{1}\in\mathfrak{D}(A)\) and \(u_{2}\in\mathfrak{D}(A^{0})\), with \[i\lambda u_{1}+Au_{1}=f^{2}, i\lambda u_{2}=-\dfrac{\kappa_{1}}{\rho_{1}}A\varphi-\dfrac{ \kappa_{1}}{\rho_{1}}\psi_{x}+\frac{\jmath}{\rho_{1}}(y-\varphi)-\dfrac{\gamma _{1}}{\rho_{1}}A^{\beta_{1}}u+Au_{1}. \tag{149}\] Firstly, applying in the product duality the first equation in (149) by \(u_{1}\), then by \(Au_{1}\) and recalling that the operator \(A\) is self-adjoint, resulting in \[|\lambda|\|u_{1}\|+|\lambda|^{\frac{1}{2}}\|A^{\frac{1}{2}}u_{1}\|+\|Au_{1}\| \leq C\|F\|_{\mathbb{H}_{2}}. \tag{150}\] Applying the \(A^{-\frac{1}{2}}\) operator on the second equation of (149), result in \[i\lambda A^{-\frac{1}{2}}u_{2}=-\dfrac{\kappa_{1}}{\rho_{1}}A^{\frac{1}{2}} \varphi-\dfrac{\kappa_{1}}{\rho_{1}}A^{-\frac{1}{2}}\psi_{x}+\frac{\jmath}{ \rho_{1}}A^{-\frac{1}{2}}(y-\varphi)-A^{\beta_{1}-\frac{1}{2}}u+A^{\frac{1}{2 }}u_{1},\] then, as \(\|A^{-\frac{1}{2}}\psi_{x}\|^{2}=\langle-A^{-\frac{1}{2}}\psi_{xx},A^{-\frac{ 1}{2}}\psi\rangle=\langle A^{\frac{1}{2}}\psi,A^{-\frac{1}{2}}\psi\rangle=\| \psi\|^{2}\leq C\|A^{\frac{1}{2}}\psi\|^{2}\), \(-\frac{1}{2}<0\) and \(\beta_{1}-\frac{1}{2}\leq\frac{\beta_{1}}{2}\), taking into account the continuous embedding \(\mathfrak{D}(A^{\theta_{2}})\hookrightarrow\mathfrak{D}(A^{\theta_{1}}),\ \theta_{2}>\theta_{1}\) and using (150) and as \(-1\leq-\frac{2\beta_{1}}{\beta_{1}+1}\), result in \[|\lambda|^{2}\|A^{-\frac{1}{2}}u_{2}\|^{2} \leq C\{\|A^{\frac{1}{2}}\varphi\|^{2}+\|A^{\frac{1}{2}}\psi\|^{2}+\| y-\varphi\|^{2}+\|A^{\frac{\beta_{1}}{2}}u\|^{2}\}+\|A^{\frac{1}{2}}u_{1}\|^{2}\] \[\leq C\{\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}+|\lambda|^{-1}\|F \|_{\mathbb{H}_{2}}^{2}\}\leq C|\lambda|^{-\frac{2\beta_{1}}{\beta_{1}+1}}\{| \lambda|^{\frac{2\beta_{1}}{\beta_{1}+1}}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H }_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}.\] Then \[\|A^{-\frac{1}{2}}u_{2}\|^{2}\leq C|\lambda|^{-\frac{2\beta_{1}+1}{\beta_{1}+1}} \{|\lambda|^{\frac{2\beta_{1}}{\beta_{1}+1}}\|F\|_{\mathbb{H}_{2}}\|U\|_{ \mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}. 
\tag{151}\] On the other hand, from \(A^{\frac{\beta_{1}}{2}}u_{2}=A^{\frac{\beta_{1}}{2}}u-A^{\frac{\beta_{1}}{2}}u_ {1}\), (114) and as \(\mathfrak{D}(A^{\frac{1}{2}})\hookrightarrow\mathfrak{D}(A^{\frac{\beta_{1}}{2}})\), the inequality of (150), result in \[\|A^{\frac{\beta_{1}}{2}}u_{2}\|^{2}\leq C\{\|A^{\frac{\beta_{1}}{2}}u\|^{2}+\| A^{\frac{\beta_{1}}{2}}u_{1}\|^{2}\}\leq C|\lambda|^{-\frac{2\beta_{1}}{\beta_{1}+1}} \{|\lambda|^{\frac{2\beta_{1}}{\beta_{1}+1}}\|F\|_{\mathbb{H}_{2}}\|U\|_{ \mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}. \tag{152}\] By Lions' interpolations inequality (Theorem 16), \(0\in\big{[}-\frac{1}{2},\frac{\beta_{1}}{2}\big{]}\), result in \[\|u_{2}\|^{2}\leq C(\|A^{-\frac{1}{2}}u_{2}\|^{2})^{\frac{\beta_{1}}{\beta+\beta_{1 }}}(\|A^{\frac{\beta_{1}}{2}}u_{2}\|^{2})^{\frac{1}{1+\beta_{1}}}. \tag{153}\] Then, using (151) and (152) in (153), for \(|\lambda|>1\), result in \[\|u_{2}\|^{2}\leq C|\lambda|^{-\frac{4\beta_{1}}{1+\beta_{1}}}\{|\lambda|^{\frac{ 2\beta_{1}}{1+\beta_{1}}}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}+\|F\|_{ \mathbb{H}_{2}}^{2}\}. \tag{154}\] Therefore, as \(\|u\|^{2}\leq\|u_{1}\|^{2}+\|u_{2}\|^{2}\), from (150), (154) and as for \(0\leq\beta_{1}\leq 1\) we have \(|\lambda|^{-2}\leq|\lambda|^{-\frac{4\beta_{1}}{1+\beta_{1}}}\), result in \[|\lambda|\|u\|^{2}\leq C_{\delta}|\lambda|^{\frac{1-\beta\beta_{1}}{1+\beta_{1 }}}\{|\lambda|^{\frac{2\beta_{1}}{1+\beta_{1}}}\|F\|_{\mathbb{H}_{2}}\|U\|_{ \mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}\qquad\text{for}\qquad 0\leq\beta_{1}\leq 1. \tag{155}\] On the other hand, let's now estimate the missing term \(|\lambda|^{\frac{2\beta_{2}}{1+\beta_{2}}}\|s\|^{2}\):It is assumed that \(|\lambda|>1\). Set \(s=s_{1}+s_{2}\), where \(s_{1}\in\mathfrak{D}(A)\) and \(s_{2}\in\mathfrak{D}(A^{0})\), with \[i\lambda s_{1}\,+\,As_{1}\,=\,f^{6}\qquad\text{and}\qquad i\lambda s_{2}\,=\,- \frac{\kappa_{2}}{\rho_{3}}Ay\,-\,\frac{\kappa_{2}}{\rho_{3}}z_{x}\,-\,\frac{ \jmath}{\rho_{3}}(y\,-\,\varphi)\,-\,\frac{\gamma_{2}}{\rho_{3}}A^{\beta_{2} }s\,+\,As_{1}. \tag{156}\] Firstly, applying in the product duality the first equation in (156) by \(s_{1}\), then by \(As_{1}\) and recalling that the operator \(A\) is self-adjoint, resulting in \[|\lambda|\|s_{1}\|+|\lambda|^{\frac{1}{2}}\|A^{\frac{1}{2}}s_{1}\|+\|As_{1}\| \leq C\|F\|_{\mathbb{H}_{2}}. 
\tag{157}\] Applying the operator \(A^{-\frac{1}{2}}\) in second equation of (156), we have \[i\lambda A^{-\frac{1}{2}}s_{2}=-\frac{\kappa_{2}}{\rho_{3}}A^{\frac{1}{2}}y- \frac{\kappa_{2}}{\rho_{3}}A^{-\frac{1}{2}}z_{x}-\frac{\jmath}{\rho_{3}}A^{- \frac{1}{2}}(y-\varphi)-\frac{\gamma_{2}}{\rho_{3}}A^{\beta_{2}-\frac{1}{2}}s+ A^{\frac{1}{2}}s_{1},\] then, as \(\|A^{-\frac{1}{2}}z_{x}\|^{2}=\|z\|^{2}\leq C\|A^{\frac{1}{2}}z\|^{2}\), \(0<\frac{1}{2}\) and \(\beta_{2}-\frac{1}{2}\leq\frac{\beta_{2}}{2}\), taking into account the continuous embedding \(\mathfrak{D}(A^{\theta_{2}})\hookrightarrow\mathfrak{D}(A^{\theta_{1}}),\, \theta_{2}>\theta_{1}\), lead to \[|\lambda|^{2}\|A^{-\frac{1}{2}}s_{2}\|^{2} \leq C\{\|A^{\frac{1}{2}}y\|^{2}+\|A^{\frac{1}{2}}z\|^{2}+\|y-\varphi \|^{2}+\|A^{\frac{\beta_{2}}{2}}s\|^{2}\}+\|A^{\frac{1}{2}}s_{1}\|^{2}\] \[\leq C\{\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}+|\lambda|^{-1}\|F \|_{\mathbb{H}_{2}}^{2}\}\leq C|\lambda|^{-\frac{2\beta_{2}}{\rho_{2}+1}}\{| \lambda|^{\frac{2\beta_{2}}{\rho_{2}+1}}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{ H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}.\] Then \[\|A^{-\frac{1}{2}}s_{2}\|^{2}\leq C|\lambda|^{-\frac{2(2\beta_{2}+1)}{\rho_{2 }+1}}\{|\lambda|^{\frac{2\beta_{2}}{\rho_{2}+1}}\|F\|_{\mathbb{H}_{2}}\|U\|_{ \mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}. \tag{158}\] On the other hand, from \(A^{\frac{\beta_{2}}{2}}s_{2}=A^{\frac{\beta_{2}}{2}}s-A^{\frac{\beta_{2}}{2}}s_ {1}\), (114) and as \(\mathfrak{D}(A^{\frac{1}{2}})\hookrightarrow\mathfrak{D}(A^{\frac{\beta_{1}}{2}})\), the inequality of (157), result in \[\|A^{\frac{\beta_{1}}{2}}s_{2}\|^{2}\leq C\{\|A^{\frac{\beta_{2}}{2}}s\|^{2}+ \|A^{\frac{\beta_{2}}{2}}s_{1}\|^{2}\}\leq C|\lambda|^{-\frac{2\beta_{2}}{\rho_ {2}+1}}\{|\lambda|^{\frac{2\beta_{2}}{\rho_{2}+1}}\|F\|_{\mathbb{H}_{2}}\|U\|_{ \mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}. \tag{159}\] By Lions' interpolations inequality \(0\in\big{[}-\frac{1}{2},\frac{\beta_{2}}{2}\big{]}\), result in \[\|s_{2}\|^{2}\leq C(\|A^{-\frac{1}{2}}s_{2}\|^{2})^{\frac{\beta_{2}}{1+\beta_{2} }}(\|A^{\frac{\beta_{2}}{2}}s_{2}\|^{2})^{\frac{1}{1+\beta_{2}}}. \tag{160}\] Then, using (158) and (159) in (160), for \(|\lambda|>1\), result in \[\|s_{2}\|^{2}\leq C|\lambda|^{-\frac{4\beta_{2}}{1+\beta_{2}}}\{|\lambda|^{\frac {2\beta_{2}}{1+\beta_{2}}}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}+\|F\|_{ \mathbb{H}_{2}}^{2}\}. \tag{161}\] Therefore, as \(\|s\|^{2}\leq\|s_{1}\|^{2}+\|s_{2}\|^{2}\), from (157), (161) and as for \(0\leq\beta_{2}\leq 1\) we have \(|\lambda|^{-2}\leq|\lambda|^{-\frac{4\beta_{2}}{1+\beta_{2}}}\), result in \[|\lambda|\|s\|^{2}\leq C_{\delta}|\lambda|^{\frac{1-3\beta_{2}}{1+\beta_{2}}}\{| \lambda|^{\frac{2\beta_{2}}{1+\beta_{2}}}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H} _{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}\qquad\text{for}\qquad 0\leq\beta_{2}\leq 1. \tag{162}\] Finally, let's now estimate the missing term \(|\lambda|^{\frac{2\beta_{2}}{1+\beta_{2}}}\|w\|^{2}\): It is assumed that \(|\lambda|>1\). Set \(w=w_{1}+w_{2}\), where \(w_{1}\in\mathfrak{D}(A)\) and \(w_{2}\in\mathfrak{D}(A^{0})\), with \[i\lambda w_{1}\,+\,Aw_{1} = f^{8}\qquad\text{and}\qquad i\lambda w_{2} = -\frac{b_{2}}{\rho_{4}}Az\,+\,\frac{\kappa_{2}}{\rho_{4}}(y_{x}\,-\,z )\,-\,\frac{\gamma_{3}}{\rho_{4}}A^{\beta_{3}}w\,+\,Aw_{1}. 
\tag{163}\] Firstly, applying in the product duality the first equation in (163) by \(w_{1}\), then by \(Aw_{1}\) and recalling that the operator \(A\) is self-adjoint, resulting in \[|\lambda|\|w_{1}\|+|\lambda|^{\frac{1}{2}}\|A^{\frac{1}{2}}w_{1}\|+\|Aw_{1}\| \leq C\|F\|_{\mathbb{H}_{2}}. \tag{164}\] Applying the operator \(A^{-\frac{1}{2}}\) in second equation of (163), we get \[i\lambda A^{-\frac{1}{2}}w_{2}=-\frac{b_{2}}{\rho_{4}}A^{\frac{1}{2}}z+\frac{ \kappa_{2}}{\rho_{4}}A^{-\frac{1}{2}}(y_{x}-z)-\frac{\gamma_{3}}{\rho_{4}}A^{ \beta_{3}-\frac{1}{2}}w+A^{\frac{1}{2}}w_{1},\] then, from \(0\leq\beta_{3}\leq 1\), we have: \(\mathfrak{D}(A^{\frac{\beta_{3}}{2}})\hookrightarrow\mathfrak{D}(A^{\beta_{3 }-\frac{1}{2}})\) and \(\mathfrak{D}(A^{0})\hookrightarrow\mathfrak{D}(A^{-\frac{1}{2}})\), from estimates (104) and (164), lead to \[\|A^{-\frac{1}{2}}w_{2}\|^{2}\leq C|\lambda|^{-\frac{2(2\beta_{3}+1)}{\beta_{ 3}+1}}\big{\{}|\lambda|^{\frac{2\beta_{3}}{2\beta_{3}+1}}\|F\|_{\mathbb{H}_{2 }}\|U\|_{\mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\big{\}}. \tag{165}\] On the other hand, from \(A^{\frac{\beta_{3}}{2}}w_{2}=A^{\frac{\beta_{3}}{2}}w-A^{\frac{\beta_{3}}{2}}w _{1}\), (114) and as \(\mathfrak{D}(A^{\frac{1}{2}})\hookrightarrow\mathfrak{D}(A^{\frac{\beta_{1}}{ 2}})\), the inequality of (164), result in \[\|A^{\frac{\beta_{3}}{2}}w_{2}\|^{2}\leq C\{\|A^{\frac{\beta_{3}}{2}}w\|^{2}+ \|A^{\frac{\beta_{3}}{2}}w_{1}\|^{2}\}\leq C|\lambda|^{-\frac{2\beta_{3}}{\beta _{3}+1}}\big{\{}|\lambda|^{\frac{2\beta_{3}}{\beta_{3}+1}}\|F\|_{\mathbb{H}_{2 }}\|U\|_{\mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\big{\}}. \tag{166}\] By Lions' interpolations inequality \(0\in\big{[}-\frac{1}{2},\frac{\beta_{3}}{2}\big{]}\), result in \[\|w_{2}\|^{2}\leq C(\|A^{-\frac{1}{2}}w_{2}\|^{2})^{\frac{\beta_{3}}{1+\beta_ {3}}}(\|A^{\frac{\beta_{3}}{2}}w_{2}\|^{2})^{\frac{1}{1+\beta_{3}}}. \tag{167}\] Then, using (165) and (159) in (167), for \(|\lambda|>1\), result in \[\|w_{2}\|^{2}\leq C|\lambda|^{-\frac{4\beta_{3}}{1+\beta_{3}}}\{|\lambda|^{ \frac{2\beta_{3}}{1+\beta_{3}}}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}+ \|F\|_{\mathbb{H}_{2}}^{2}\}. \tag{168}\] Therefore, as \(\|w\|^{2}\leq C\{\|w_{1}\|^{2}+\|w_{2}\|^{2}\}\), from (164), (168) and as for \(0\leq\beta_{3}\leq 1\) we have \(|\lambda|^{-2}\leq|\lambda|^{-\frac{4\beta_{3}}{1+\beta_{3}}}\), result in \[|\lambda|\|w\|^{2}\leq C_{S}|\lambda|^{\frac{1-3\beta_{3}}{1+\beta_{3}}}\{| \lambda|^{\frac{2\beta_{3}}{1+\beta_{3}}}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{ H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}\qquad\text{for}\qquad 0\leq\beta_{3}\leq 1. \tag{169}\] On other hand, using (155) in inequality (131), we have \[|\lambda|\|\varphi_{x}-\psi\|^{2}\leq C_{S}|\lambda|^{\frac{1-3\beta_{1}}{1+ \beta_{1}}}\{|\lambda|^{\frac{2\beta_{3}}{1+\beta_{1}}}\|F\|_{\mathbb{H}_{2}} \|U\|_{\mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}\quad\text{for}\quad 0\leq\beta_{1}\leq 1. \tag{170}\] Using (162) in inequality (133), we have \[|\lambda|\|y_{x}-z\|^{2}\leq C_{\delta}|\lambda|^{\frac{1-3\beta_{2}}{1+\beta_ {2}}}\{|\lambda|^{\frac{2\beta_{2}}{1+\beta_{2}}}\|F\|_{\mathbb{H}_{2}}\|U\|_{ \mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}\quad\text{for}\quad 0\leq\beta_{2}\leq 1. \tag{171}\] Now, using (169) in inequality (143), we have \[|\lambda|\|A^{\frac{1}{2}}z\|^{2}\leq C_{\delta}|\lambda|^{\frac{1-3\beta_{2}}{1 +\beta_{2}}}\{|\lambda|^{\frac{2\beta_{2}}{1+\beta_{2}}}\|F\|_{\mathbb{H}_{2}} \|U\|_{\mathbb{H}_{2}}+\|F\|_{\mathbb{H}_{2}}^{2}\}\quad\text{for}\quad 0\leq\beta_{2}\leq 1. 
\tag{172}\] Furthermore, taking \(\phi:=\min\limits_{(\beta_{1},\beta_{2},\beta_{3})\in(0,1)^{3}}\{\beta_{1},\beta_{2},\beta_{3}\}\) as defined in (148), we have \(0<\phi<1\), and from the estimates (155), (162) and (169) we obtain \[|\lambda|^{\frac{2\phi}{1+\phi}}\{\rho_{1}\|u\|^{2}+\rho_{3}\|s\|^{2}+\rho_{4}\|w\|^{2}\}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad\text{for}\quad 0<(2\phi)/(\phi+1)<1. \tag{173}\] Since \(0<\frac{2\phi}{1+\phi}<1\), from the estimates (116), (125), (134) and (135) we get \[|\lambda|^{\frac{2\phi}{1+\phi}}\{\jmath\|y-\varphi\|^{2}+\rho_{5}\|\theta\|^{2}+\rho_{2}\|v\|^{2}+b_{1}\|A^{\frac{1}{2}}\psi\|^{2}\}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad\text{for}\quad 0<(2\phi)/(\phi+1)<1. \tag{174}\] Now, as \(0<\frac{2\phi}{\phi+1}<1\), from the estimates (170), (171) and (172) we obtain \[|\lambda|^{\frac{2\phi}{1+\phi}}\{\kappa_{1}\|\varphi_{x}-\psi\|^{2}+\kappa_{2}\|y_{x}-z\|^{2}+b_{2}\|A^{\frac{1}{2}}z\|^{2}\}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\|U\|_{\mathbb{H}_{2}}\quad\text{for}\quad 0<(2\phi)/(\phi+1)<1. \tag{175}\] Finally, summing the estimates (173), (174) and (175), we have \[|\lambda|^{\frac{2\phi}{\phi+1}}\|U\|_{\mathbb{H}_{2}}\leq C_{\delta}\|F\|_{\mathbb{H}_{2}}\qquad\text{for}\qquad 0<(2\phi)/(\phi+1)<1.\] Therefore, the proof of this theorem is finished. \(\Box\)
2306.17641
Molecular Dynamics Study of the Sonic Horizon of Microscopic Laval Nozzles
A Laval nozzle can accelerate expanding gas to supersonic velocities, while cooling the gas in the process. This work investigates this process for microscopic Laval nozzles by means of non-equilibrium molecular dynamics simulations of stationary flow, using grand canonical Monte-Carlo particle reservoirs. We study the expansion of a simple fluid, a mono-atomic gas interacting via a Lennard-Jones potential, through an idealized nozzle with atomically smooth walls. We obtain the thermodynamic state variables pressure, density, and temperature, but also the Knudsen number, speed of sound, velocity, and the corresponding Mach number of the expanding gas for nozzles of different sizes. We find that the temperature is well-defined in the sense that each velocity component of the particles obeys the Maxwell-Boltzmann distribution, but it is anisotropic, especially for small nozzles. The velocity auto-correlation function reveals a tendency towards condensation of the cooled supersonic gas, although the nozzles are too small for the formation of clusters. Overall we find that microscopic nozzles act qualitatively like macroscopic nozzles in that the particles are accelerated to supersonic speeds while their thermal motion relative to the stationary flow is cooled. We find that, like macroscopic Laval nozzles, microscopic nozzles also exhibit a sonic horizon, which is well-defined on a microscopic scale. The sonic horizon is positioned only slightly further downstream compared to isentropic expansion through macroscopic nozzles, where it is situated in the most narrow part. We analyze the sonic horizon by studying spacetime density correlations, i.e. how thermal fluctuations of the gas density at two positions are correlated in time, and find that after the sonic horizon there are indeed no upstream correlations on a microscopic scale.
Helmut Ortmayer, Robert E. Zillich
2023-06-30T13:30:29Z
http://arxiv.org/abs/2306.17641v1
# Molecular Dynamics Study of the Sonic Horizon of Microscopic Laval Nozzles ###### Abstract A Laval nozzle can accelerate expanding gas above supersonic velocities, while cooling the gas in the process. This work investigates this process for microscopic Laval nozzles by means of non-equilibrium molecular dynamics simulations of stationary flow, using grand canonical Monte-Carlo particle reservoirs. We study the expansion of a simple fluid, a mono-atomic gas interacting via a Lennard-Jones potential, through an idealized nozzle with atomically smooth walls. We obtain the thermodynamic state variables pressure, density, and temperature, but also the Knudsen number, speed of sound, velocity, and the corresponing Mach number of the expanding gas for nozzles of different sizes. We find that the temperature is well-defined in the sense that the each velocity components of the particles obey the Maxwell-Boltzmann distribution, but it is anisotropic, especially for small nozzles. The velocity auto-correlation function reveals a tendency towards condensation of the cooled supersonic gas, although the nozzles are too small for the formation of clusters. Overall we find that microscopic nozzles act qualitatively like macroscopic nozzles in that the particles are accelerated to supersonic speeds while their thermal motion relative to the stationary flow is cooled. We find that, like macroscopic Laval nozzles, microscopic nozzles also exhibit a sonic horizon, which is well-defined on a microscopic scale. The sonic horizon is positioned only slightly further downstream compared to isentropic expansion through macroscopic nozzles, where the sonic horizon is situated in the most narrow part. We analyze the sonic horizon by studying spacetime density correlations, i.e. how thermal fluctuations at two positions of the gas density are correlated in time and find that after the sonic horizon there are indeed no upstream correlations on a microscopic scale. ## I Introduction The Laval nozzle converts thermal kinetic energy into translational kinetic energy and was invented by Gustaf de Laval in 1888 for actuating steam turbines with steam accelerated by expansion. The goal was to achieve the highest possible velocity of an expanding gas, made possible with the convergent-divergent nozzle shape. The left panel of Fig. 1 schematically shows the cross section of such a nozzle. When the gas reaches the most narrow part, the nozzle throat, the flow can become supersonic. The surface where this happens is called sonic horizon (or acoustic horizon) [1; 2] because no information carried by sound waves can travel upstream through the sonic horizon. The expansion of gas in a Laval nozzle has interesting thermodynamic properties. While the gas acceleration of macroscopic Laval nozzles is exploited for propulsion purposes in rocket engines, the temperature drop during expansion through a nozzle with a diameter in the tenth of \(\mu\)m range is exploited in supersonic jet spectroscopy to freeze out translational, rotational and vibrational degrees of freedom of molecules, leading to spectra that are not complicated by too many thermally populated excited states [3; 4; 5; 6]. The studied molecules can be kept in a supercooled gas phase, far below the condensation temperature, with a high density compared to a conventionally cooled equilibrium vapor. Under appropriate conditions, weakly bound van der Waals cluster can be formed [7; 8]. The molecules of interest are typically co-expanded with a noble gas. 
In the case of \({}^{4}\)He as carrier gas, the cooling effect is also greatly enhanced by the unique quantum effects of \({}^{4}\)He at low temperatures. Especially the helium-droplet beam technique takes additional advantage of the superfluidity of \({}^{4}\)He [7; 8; 9]. The typical orifice used for molecular beams has only a convergent part, and the divergent nozzle part is realized by the ambient pressure in the expansion chamber. During expansion the surrounding gas in the chamber provides a pressure boundary to the jet, and the jet temperature itself keeps decreasing after exiting the orifice [10]. Macroscopic Laval nozzles are well understood and can be approximately described by simple thermodynamic considerations, under assumptions that are reasonable for macroscopic nozzles: isentropic flow without dissipation (inviscid gas and smooth slip boundaries); the flow velocity \(v\) depends only on the position \(x\) along the axis of the nozzle; the nozzle cross section varies only gradually with \(x\); the flow is stationary; and continuum fluid dynamics is valid, i.e. each fluid element is in local thermodynamic equilibrium. Then the relative velocity change with \(x\) and the relative change of the cross section area \(A\) follow the simple relation [10] \[\frac{\mathrm{d}v}{v}=-\,\frac{1}{1-\left(\frac{v}{c}\right)^{2}}\,\frac{\mathrm{d}A}{A} \tag{1}\] where \(c\) is the speed of sound, which can be expressed in terms of the isentropic or isothermal derivative of the pressure with respect to the density, \[c=\sqrt{\left(\frac{\partial p}{\partial\rho}\right)_{S}}=\sqrt{\frac{c_{\mathrm{p}}}{c_{\mathrm{v}}}\left(\frac{\partial p}{\partial\rho}\right)_{T}} \tag{2}\] where \(c_{\rm p}\) and \(c_{\rm v}\) are the heat capacities at constant pressure and volume, respectively. The ratio \(M=v/c\) is called the Mach number, and \(M=1\) defines the sonic horizon. The usual situation is a reservoir or a combustion chamber producing gas to the left of the nozzle in our figures. Hence the flow velocity is small when the gas enters the nozzle, in particular it is subsonic, \(M<1\). Eq. (1) tells us that, with decreasing cross section \(A\) (e.g. moving downstream in the convergent part), the flow velocity \(v\) must increase. In the nozzle throat, i.e. where \(A\) has a minimum and \(dA=0\), the flow either stays subsonic, \(M<1\), in which case \(v\) must decrease again in the divergent part; or the gas flow attains \(M=1\) in the nozzle throat and then accelerates further in the divergent part (if the pressure difference between inlet and outlet is large enough). Hence for supersonic flow, \(v\) increases with _increasing_ \(A\). Note that Eq. (1) implies that the transition to supersonic flow can happen only where the cross section area has a minimum. The goal of this work is to understand the physics of microscopic Laval nozzles on the nanoscale, where the atoms of the gas flow through a constriction which is only nanometers wide. We want to answer the following questions: How do the transport properties of a Laval nozzle depend on its size, and does it even have the typical characteristic of a convergent-divergent nozzle, i.e. converting thermal energy into translational energy? If yes, how efficiently does a nanoscale Laval nozzle cool the expanding gas? Do we obtain supersonic flow? Is there a well-defined sonic horizon, and if yes, where in the nozzle is it located?
Is there even local thermodynamic equilibrium such that we can define a local speed of sound and thus can speak of a sonic horizon and supersonic flow? Since we are interested in the fundamental mechanism of a microscopic Laval nozzle we study a rather idealized nozzle with atomically flat surfaces corresponding to slip boundaries. This simplifies the problem since it eliminates the boundary layer close to the nozzle walls. Boundary effects are of course essential in a real microscopic nozzle, and they would be easy to model with rough walls, but they would complicate the analysis and interpretation of our results. A common method to study microscopic nozzles is the direct simulation Monte Carlo (DSMC) method [11; 12; 13; 14], which solves the Boltzmann equation. However, we want to make as few approximations as possible, apart from the idealization of atomically smooth nozzle walls. Therefore we use molecular dynamics (MD) simulations, which account for each atom or molecule of the gas, with collisions described by realistic intermolecular interactions. Atomistic MD simulations have been shown to be useful for the understanding of fluid dynamic phenomena [15; 16; 17; 18; 19; 20; 21; 22]. The only underlying assumption of the MD method is that quantum physics plays no role and classical mechanics is sufficient. This is usually a valid assumption, with the exception of the expansion of \({}^{4}\)He under conditions where the \({}^{4}\)He gas cools to superfluid nanodroplets [23]. Because of the non-equilibrium nature of this expansion process through a Laval nozzle we perform non-equilibrium MD (NEMD) simulations [24]. The right panel of Fig. 1 shows the trajectories of 30 randomly chosen particles of a simulation in a convergent-divergent nozzle that contained on average about 790000 particles. The speed of the particles is color-coded. Fig. 1 gives an impression of how a Laval nozzle converts thermal energy (temperature) to ordered translational energy: close to the inlet the motion is predominantly thermal; close to the outlet the velocities are higher and tend to point in \(x\)-direction, but the temperature, i.e. the kinetic energy after subtracting the flow velocity, is in fact much lower, as our results will show. Averaging over all particles and over time leads to the thermodynamic notion of a gas that accelerates and cools as it expands through the nozzle. With MD we can obtain, with microscopic resolution, both thermodynamic quantities like temperature, pressure, or density, and microscopic quantities like the velocity autocorrelation function (VACF), velocity distribution, or density fluctuation correlations: we will investigate whether the expanding gas has a well-defined temperature, characterized by an isotropic Maxwell-Boltzmann distribution of the thermal particle velocities. The VACF exhibits features related to the metastability of the accelerated gas cooled below the condensation temperature. We calculate spatio-temporal density auto-correlations, i.e. correlations between fluctuations of the density at different times and different locations, to study the propagation of information upstream and downstream and pinpoint the location of the sonic horizon (if it exists). Figure 1: Left: Cross section of a Laval nozzle with a convergent and divergent nozzle part. Indicated by the arrow and color is the flow direction and temperature decrease of the expanding gas.
In a macroscopic nozzle, upstream propagation of information carried by density fluctuations is not possible in the supersonic region. On the microscopic scale, e.g. on the scale of the mean free path of the atoms, a unidirectional information flow is not so obvious. For instance, if we assume a Maxwell-Boltzmann distribution of random particle velocities, fast particles from the tail of the distribution could carry information upstream. We remark that, in a seminal paper by W. G. Unruh et al. [1], a mathematical analogue between the black hole evaporation by Hawking radiation and the fluid mechanical description of a sonic horizon is found. This analogue has brought significant attention to sonic horizons [2; 25; 26; 27; 28], but in this work we will not study analog Hawking radiation. ## II Molecular dynamics simulation of expansion in Laval nozzle The gas flow through the microscopic Laval nozzle is simulated with the molecular dynamics (MD) method which solves Newton's equation of motion for all particles of the gas. Unlike in continuum fluid dynamics, which solves the Navier-Stokes equation, MD contains thermal fluctuations of the pressure and density, also in equilibrium. Furthermore, unlike the continuum description, MD does not assume local thermodynamic equilibrium, which may not be fulfilled in a microscopic nozzle. The price for an accurate atomistic description afforded by MD simulations is a high computational cost compared to Navier-Stokes calculations or DSMC simulations. In the present case, we simulate up to several hundred thousand particles. Larger MD simulations are possible, but our focus is the microscopic limit of a Laval nozzles on the nanometer scale. A challenge for MD is to implement effective reservoirs to maintain a pressure differential for a steady flow between inlet and outlet of the nozzle. An actual reservoir large enough to maintain its thermodynamic state during the MD simulation would be prohibitively computationally expensive. We approximate these reservoirs by defining small inlet and outlet regions where we perform a hybrid MD and MC Monte-Carlo simulation (GCMC) [29], with grand canonical Monte-Carlo exchange of particles [30]. As the name implies, this method simulates a grand canonical ensemble for a given chemical potential \(\mu\), volume \(V\) and temperature \(T\) by inserting and removing particles. The nozzle itself is simulated in the microcanonical ensemble, i.e. energy is conserved. This ensemble represents a nozzle with perfect thermally insulating walls. Fig. 2 shows the geometry of the nozzle simulated with the inlet and outlet colored in blue and yellow, respectively, with the convergent-divergent nozzle in between. To keep the simulation simple and the computational effort in check we simulate a slit Laval nozzle, translationally invariant in \(z\)-direction (perpendicular to the plane of the figure) and realized with periodic boundaries in this direction. Since our focus is a microscopic understanding of supersonic flow and the sonic horizon, we simulate a nozzle with atomically smooth walls. Simulating rough walls would have significantly complicated the analysis of the flow, because of the nontrivial spatial dependence of the flow field in the direction perpendicular to the general flow direction, requiring significantly longer simulations to resolve all measured quantities in both \(x\) and \(y\) direction. In a smooth-walled nozzle, we can restrict ourselves to studying only the \(x\)-dependence of the quantities of interest. 
The gas particles are atoms interacting via a pair-wise Lennard-Jones (LJ) potential. Thus we simulate the expansion of a noble gas through the nozzle. Molecules with vibrational and rotational degrees of freedom seeded into the noble gas would be an interesting subject for further investigation, but this exceeds the scope of this work. The LJ potential between a pair of particles with distance \(r\) is given by \[V_{LJ}(r)=4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right] \tag{3}\] The smooth walls are also modelled via an LJ potential, \[V_{LJ}(s)=4\epsilon\left[\left(\frac{\sigma}{s}\right)^{12}-\left(\frac{\sigma}{s}\right)^{6}\right] \tag{4}\] where \(s\) is the normal distance between atom and wall. We use the common reduced units for simulations of LJ particles if not otherwise stated, see table 1. Thus, with the atom mass \(m\) and the LJ parameters \(\sigma\) and \(\epsilon\) for a specific noble gas, the results can be converted from reduced units to physical units. Atoms are inserted and deleted in the inlet (blue) and outlet (yellow) by running the MD simulation in these regions as a hybrid (GCMC) simulation [29]. Figure 2: Geometry of a slit Laval nozzle with the convergent and divergent part in the \(xy\)-plane. The nozzle walls are two cylinders. In the \(z\)-direction out of the plane, the nozzle is translationally invariant, realized with periodic boundary conditions. Particle insertion is done by grand canonical Monte Carlo insertion and deletion [29; 30] on the left side (blue) in the \(\mu_{1}V_{1}T_{1}\) ensemble. The nozzle region shown in white with the convergent and divergent boundaries is simulated in the microcanonical ensemble. Particle deletion is done on the right side (yellow) in a \(\mu_{2}V_{2}T_{2}\) ensemble. The two grand canonical ensembles are characterized by their chemical potential, the volume, and the temperature, \((\mu_{1},V_{1},T_{1})\) and \((\mu_{2},V_{2},T_{2})\), respectively. A proper choice of these thermodynamic variables ensures that, on average, an excess of particles is inserted in the inlet and particles are eliminated in the outlet, such that a stationary gas flow is established after equilibration. There are alternative insertion methods, such as the insertion-deletion method, where the mass flow is specified [31]. The temperature and chemical potential of the inlet reservoir are set to \(T_{1}=2.0\) and \(\mu_{1}=-32\), which would correspond to a density \(\rho_{1}=0.86\); this ensures that the pressure is not too high and the LJ particles remain in the gas phase. The particle insertion region in the nozzle is not in equilibrium with the grand canonical reservoir defining the \((\mu_{1},V_{1},T_{1})\) ensemble, because the inlet volume is not closed on the side facing the nozzle. The outflow must be compensated by additional insertions, which makes the insertion rate higher than the elimination rate. Indeed we observed that the average density in the insertion region is approximately half the density \(\rho_{1}\). Also the temperature in the inlet region is lower than the set value \(T_{1}=2.0\). The resulting pressure in the insertion region is \(p\approx 0.06\) in our reduced units. For Argon with \(\epsilon=1.65\cdot 10^{-21}\,\mathrm{J}\) and \(\sigma=3.4\,\mathrm{\AA}\) [32] this translates to a temperature of \(T=179\,\mathrm{K}\) and a pressure \(p\approx 2.5\cdot 10^{6}\,\mathrm{Pa}\) in SI units. This is in the pressure range for molecular beam spectroscopy experiments [4].
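To make this unit conversion concrete, a minimal Python sketch is given below; it evaluates the LJ pair potential of eq. (3) and converts a reduced state point to SI units using the argon parameters quoted above. The Boltzmann constant, the argon atomic mass, and the reduced temperature \(T^{*}\approx 1.5\) (inferred from the quoted 179 K, not stated explicitly in the text) are our own additions.

```python
import numpy as np

# Lennard-Jones parameters for argon quoted in the text [32]
epsilon = 1.65e-21       # J
sigma = 3.4e-10          # m
k_B = 1.380649e-23       # J/K   (standard value, not from the paper)
m_Ar = 6.63e-26          # kg    (argon atomic mass, not from the paper)

def v_lj(r):
    """Lennard-Jones pair potential of eq. (3); r in meters."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# reduced state point of the insertion region
T_star = 1.5    # assumed reduced temperature, consistent with the quoted 179 K
p_star = 0.06   # reduced pressure quoted in the text

T_SI = T_star * epsilon / k_B          # -> about 179 K
p_SI = p_star * epsilon / sigma ** 3   # -> about 2.5e6 Pa
tau = sigma * np.sqrt(m_Ar / epsilon)  # LJ time unit, about 2.2 ps for argon

print(f"T = {T_SI:.0f} K, p = {p_SI:.2e} Pa, LJ time unit = {tau * 1e12:.2f} ps")
print(f"V_LJ at r = 2^(1/6) sigma: {v_lj(2 ** (1 / 6) * sigma) / epsilon:.3f} epsilon")
```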
The inlet conditions will converge to the specified reservoir variables if the number of GCMC moves is significantly larger than the number of MD moves, or if the size of the inlet region is increased; both increases computational cost. Alternatively, the inlet conditions may be matched to the desired pressure and temperature by fine-tuning the reservoir variables and running many equilibration simulations, which again requires a high computational effort. In this work we refrain from perfectly controlling the thermodynamic state of the inlet although it leads to effectively different inlet conditions in differently sized nozzles. In the convergent-divergent part of the nozzle, between the two grand canonical ensembles, the atoms are propagated in the microcanonical ensemble (i.e. energy and particle number are conserved), which is the most suitable ensemble for dynamic studies since the dynamics is not biased by a thermostat. Since we want to simulate expansion into vacuum, instead of choosing a very negative chemical potential, we simply set the pressure in the outlet to zero, such that particles entering the outlet region are deleted immediately. For comparisons of different nozzle sizes, we scaled the slit nozzle in both \(x\) and \(y\) directions, while keeping the simulation box length \(z_{max}\) in the translationally invariant \(z\)-direction, perpendicular to the figure plane in Fig. 2, fixed. In the \(z\)-direction, we apply periodic boundary conditions. We compared different simulation box lengths \(z_{max}\) in \(z\)-direction to quantify unwanted finite size effects in \(z\)-direction. Ideally, we want to keep \(z_{max}\) larger than the mean free path. Especially for the dilute gas at the end of the divergent part, a sufficiently large \(z_{max}\) is required to avoid such effects. For most simulations, we found \(z_{max}=86.18\,\sigma\) or \(z_{max}=43.09\,\sigma\) to be adequate, as shown below. We initialize the NEMD simulations with particles only in the inlet region. Equilibration is achieved when the total number of particle in the simulation does not increase anymore but just fluctuates about an average value. When this steady state is reached, we start measurements by averaging velocities, pressure, density etc. The equilibrium equation of state for LJ particles is well known [33; 34]. The equation of state is not needed for the MD simulations, but it is helpful for the analysis of the results, particularly for the calculation of the speed of sound and the Mach number. Specifying the Mach number, temperature, or pressure rests on the assumption of local thermodynamic equilibrium, and thus on the validity of a local equation of state. In a microscopic nozzles where the state variables of the LJ gas changes on a very small temporal and spatial scale local thermodynamic equilibrium may be violated. All simulation were done with the open source MD software LAMMPS [35; 36]. ## III Thermodynamic properties In this section we present thermodynamic results of our molecular dynamic simulations of the expansion through slit Laval nozzles: density, pressure, temperature, and Mach number. We check whether a microscopic nozzle exhibits the transition to supersonic flow and where the sonic horizon is located in nozzles of various sizes, and we compare to ideal gas continuum dynamics. The atomistic NEMD simulation also allows us to investigate if the gas attains a local equilibrium everywhere in the nozzle, with a well-defined temperature. 
\begin{table} \begin{tabular}{|l l|} \hline Quantity & reduced units \\ \hline Distance & \(x^{*}=x/\sigma\) \\ Time & \(t^{*}=t\sqrt{\epsilon/(m\sigma^{2})}\) \\ Energy & \(E^{*}=E/\epsilon\) \\ Velocity & \(v^{*}=v\sqrt{m/\epsilon}\) \\ Temperature & \(T^{*}=T\,k_{B}/\epsilon\) \\ Pressure & \(P^{*}=P\sigma^{3}/\epsilon\) \\ Density & \(\rho^{*}=\rho\sigma^{3}\) \\ \hline \end{tabular} \end{table} Table 1: Conversion to dimensionless reduced units (\({}^{*}\)) used in this work. ### Very small nozzle Fig. 3 shows results for a very small Laval nozzle, with a throat width of only \(3.9\,\sigma\), i.e. only a few atoms wide. Panel a) shows the nozzle geometry. The temperature is shown in panel c). The kinetic temperature measures the thermal motion of the atoms after the flow velocity \(\mathbf{v}(\mathbf{r})\) at position \(\mathbf{r}\) has been subtracted, \[\frac{3}{2}Nk_{\mathrm{B}}T=\sum_{i}\frac{m}{2}\left(\mathbf{v}_{i}-\mathbf{v}(\mathbf{r}_{i})\right)^{2} \tag{5}\] where the sum runs over the \(N\) particles in the sampling volume. Unlike in equilibrium, the temperature in a non-equilibrium situation such as stationary flow varies spatially, \(T=T(\mathbf{r})\), provided that local equilibrium is fulfilled. If there is no local equilibrium, there is no well-defined temperature. Although the right hand side of eq. (5) can still be evaluated, the notion of a "temperature" is meaningless if the thermal parts of the atom velocities do not follow a Maxwell-Boltzmann distribution. Here we assume that eq. (5) provides a well-defined local temperature \(T(x)\) at position \(x\) along the flow direction in our Laval nozzles. Further below we investigate whether this assumption is justified. The subtleties of the calculation of \(\mathbf{v}(\mathbf{r})\) and \(T(x)\), and how to subtract the flow velocity from the particle velocities, can be found in appendices C and D, respectively. Fig. 3 shows that \(T(x)\) indeed drops after the gas passes the nozzle throat, but there is a small increase before it reaches the throat. We attribute this to the wall potential: the constriction is dominated by the attractive well of the LJ potential (4). The associated drop in potential energy is accompanied by an increase of the temperature, i.e. kinetic energy. Panel g) shows the flow speed \(v(x)=|\mathbf{v}(x)|\). \(v(x)\) increases monotonously over the whole length of the nozzle. For comparison, we also show the speed of sound of the LJ gas \(c(x)\) and of the ideal gas \(c_{\mathrm{id}}(x)\), which are very similar, even in the convergent part where the density is higher. For a monatomic ideal gas, the speed of sound (2) becomes \[c_{\mathrm{id}}(x)=\sqrt{\frac{5}{3}k_{\mathrm{B}}T(x)/m}. \tag{6}\] The speed of sound \(c(x)\) of the LJ fluid is calculated from its equation of state given in Ref. [33] and the specific residual heat capacities [34], using the expression with the isothermal derivative in Eq. (2) and the values of \(\rho(x)\) and \(T(x)\) measured in the MD nozzle simulations. \(\rho(x)\) is shown in panel i), together with the pressure. The heat capacities \(c_{\mathrm{p}}\) and \(c_{\mathrm{v}}\) appearing in eq. (2) are also obtained from the equation of state of the LJ fluid. Note that applying the equation of state at position \(x\) in the nozzle again assumes local equilibrium, which is not necessarily true. Panel e) shows the Mach number \(M(x)\) obtained from the simulation and the Mach number \(M_{\mathrm{id}}(x)\) for an ideal gas continuum. Figure 3: Thermodynamic quantities for a nozzle with a throat width of only \(3.9\,\sigma\). The figure shows in panel a) an overview of the nozzle in the x-y-plane, in c) the temperature, in e) the Mach number \(M(x)\) and the ideal gas approximation for the Mach number \(M_{\mathrm{id}}\), in g) the ideal gas approximation of the speed of sound \(c_{\mathrm{id}}\), the speed of sound \(c\) obtained from the simulation and the averaged flow speed \(v\), in i) the density \(\rho\) and pressure \(p\), and in k) the Knudsen number. All quantities are shown as a function of the \(x\) position in the nozzle.
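As an illustration of how a local, direction-resolved kinetic temperature in the spirit of eq. (5) can be extracted from particle data, a minimal Python sketch is given below. The bin-wise subtraction of the flow velocity mimics the procedure described above; the array layout, bin choice, and normalization are our own illustrative assumptions (the paper's actual implementation is detailed in its appendices C and D).

```python
import numpy as np

def local_temperature(x, v, x_edges):
    """Direction-resolved kinetic temperature per x-bin, cf. eq. (5).

    x       : (N,) particle x-positions
    v       : (N, 3) particle velocities
    x_edges : (M+1,) bin edges along the nozzle axis
    Returns an (M, 3) array with the temperatures obtained from the
    vx, vy, vz components in each bin, after subtracting the bin-wise
    mean (flow) velocity; reduced LJ units with m = k_B = 1.
    """
    bins = np.digitize(x, x_edges) - 1
    n_bins = len(x_edges) - 1
    T = np.full((n_bins, 3), np.nan)
    for b in range(n_bins):
        sel = v[bins == b]
        if len(sel) < 2:
            continue
        dv = sel - sel.mean(axis=0)     # thermal part of the velocities
        T[b] = (dv ** 2).mean(axis=0)   # m <dv_i^2> = k_B T_i per direction
    return T

# toy check: Maxwell-Boltzmann gas at T = 0.8 drifting with flow velocity 2 in x
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 100.0, 50_000)
v = rng.normal(0.0, np.sqrt(0.8), (50_000, 3))
v[:, 0] += 2.0
print(local_temperature(x, v, np.linspace(0.0, 100.0, 11)).mean(axis=0))  # ~[0.8 0.8 0.8]
```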
For the ideal gas, we can derive from eq. (1) a relation between the cross section areas \(A(x)\) and Mach numbers \(M_{\mathrm{id}}(x)\) at two different positions \(x_{1}\) and \(x_{2}\) in the nozzle [10] \[\frac{A(x_{1})}{A(x_{2})}=\frac{M_{\mathrm{id}}(x_{2})}{M_{\mathrm{id}}(x_{1})}\left(\frac{1+\frac{\gamma-1}{2}M_{\mathrm{id}}^{2}(x_{1})}{1+\frac{\gamma-1}{2}M_{\mathrm{id}}^{2}(x_{2})}\right)^{\frac{\gamma+1}{2(\gamma-1)}} \tag{7}\] \(M_{\rm id}(x)\) can now be obtained by setting \(x_{1}=x\) and \(x_{2}=x_{c}\), the position of the sonic horizon, where \(M_{\rm id}(x_{c})=1\) by definition. Panel e) shows that the Mach number \(M(x)\) obtained from the simulation stays below the ideal gas approximation \(M_{\rm id}(x)\), with the difference growing in the divergent part of the nozzle. At the end of the nozzle \(M\) is approximately half the value of the ideal gas continuum approximation \(M_{\rm id}\). In particular, the sonic horizon predicted by the MD simulation is located _after_ the throat of the nozzle, not at the point of smallest cross section predicted by the continuum description of isentropic flow, see eq. (1). The Knudsen number is a characteristic quantity for flow in confined geometries. It is the mean free path length \(\lambda\) divided by a characteristic length \(d\) of the confinement, \[{\rm Kn}(x)=\frac{\lambda(x)}{d(x)} \tag{8}\] In our slit Laval nozzle \(d(x)\) is the width at position \(x\). We estimate the mean free path \(\lambda(x)\) using a hard sphere approximation [37] (with the hard-sphere diameter set to \(\sigma\), i.e. unity in reduced units), \[\lambda(x)=\left(\sqrt{2}\rho(x)\pi\right)^{-1} \tag{9}\] under the assumption of a Maxwell-Boltzmann distribution of the velocities, which we check to be fulfilled in the nozzle, see section IV.1 and Fig. 7. For \({\rm Kn}\ll 1\) the mean free path is much smaller than the nozzle width and a continuum description of the flow is appropriate. For \({\rm Kn}\approx 1\) or \({\rm Kn}\gg 1\) a continuum description is not possible and the transport becomes partly ballistic. For the smallest nozzle, the Knudsen number \(\text{Kn}(x)\), shown in panel k) of Fig. 3, is significantly larger than unity in the supersonic regime. Figure 4: Same as Fig. 3 for a throat width of \(7.8\,\sigma\) (left column) and \(15.6\,\sigma\) (right column). ### Small nozzles Fig. 4 shows results for two nozzles twice and four times as large as the smallest nozzle presented in Fig. 3, with throat widths \(7.8\,\sigma\) and \(15.6\,\sigma\), respectively. The small temperature increase seen for the smallest nozzle is not present anymore. \(T\) is almost constant in the convergent part and then decreases monotonously. Note that for each nozzle, the flow starts from slightly different thermodynamic conditions in the inlet region, for reasons explained above.
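Referring back to eqs. (7)-(9), the following Python sketch inverts the ideal-gas area-Mach relation by root finding, once on the subsonic and once on the supersonic branch (with \(\gamma=5/3\) for a monatomic gas), and evaluates the hard-sphere Knudsen number in reduced units. The function names, bracketing intervals, and the numbers in the demo calls are our own choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

gamma = 5.0 / 3.0  # monatomic ideal gas

def area_ratio(M):
    """A(x)/A(x_c) as a function of the Mach number, eq. (7) with M_id(x_c) = 1."""
    return (1.0 / M) * ((1.0 + 0.5 * (gamma - 1.0) * M ** 2)
                        / (0.5 * (gamma + 1.0))) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def mach_from_area(ratio, supersonic):
    """Invert eq. (7): ideal-gas Mach number for a given area ratio A/A_c >= 1."""
    if supersonic:
        return brentq(lambda M: area_ratio(M) - ratio, 1.0, 50.0)
    return brentq(lambda M: area_ratio(M) - ratio, 1e-6, 1.0)

def knudsen(rho, width):
    """Kn = lambda/d with the hard-sphere mean free path of eq. (9), reduced units."""
    return 1.0 / (np.sqrt(2.0) * np.pi * rho * width)

# example: a cross section twice as wide as the throat
print(mach_from_area(2.0, supersonic=True))   # ~2.40 for gamma = 5/3
print(mach_from_area(2.0, supersonic=False))  # ~0.30
print(knudsen(rho=0.01, width=31.25))         # Kn for a dilute gas at an assumed density
```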
As the nozzle size increases, the Mach number \(M\) reaches a higher value for the larger nozzle despite the slightly lower \(T\) in the inlet, and it follows the ideal gas approximation \(M_{\text{id}}\) more closely. The sonic horizon moves closer to the minimum of the cross section. Of course the Knudsen number \(\text{Kn}(x)\) is smaller for larger nozzles. Due to the wider nozzle throat, the pressure is significantly lower in the convergent part. For Fig. 5, we increase the nozzle size again twofold and fourfold. We find the same trends as in Fig. 4. For the nozzle with throat width \(62.5\,\sigma\), the Mach number \(M\) is close to the ideal gas approximation \(M_{\text{id}}\). \(M\) falls below \(M_{\text{id}}\) only towards the end of the nozzle, where the collision rate presumably becomes too low for efficient cooling. Figure 5: Same as Fig. 3 for a throat width of \(31.25\,\sigma\) (left column) and \(62.5\,\sigma\) (right column). The temperature is split into its contributions from motion in \(x\), \(y\) and \(z\) direction. The sonic horizon is essentially in the center, indicated by the vertical dashed line. For these two largest nozzles, we examined whether local equilibrium is fulfilled. The direction-dependent temperature, see appendix D, is shown in panels c) and d) of Fig. 5. The temperature is not quite isotropic, i.e. there is insufficient local equilibration between the motion in \(x\)-, \(y\)-, and \(z\)-direction. The three respective temperatures differ. In the convergent part the temperature in the \(y\)-direction, \(T_{y}\), is highest, while in the divergent part \(T_{y}\) is lower than \(T_{x}\) and \(T_{z}\). \(T_{z}\) is only influenced by collisions between particles because there is no wall in \(z\)-direction. Comparing the two nozzles presented in Fig. 5, we observe the expected trend that the temperature anisotropy decreases with increasing nozzle size. At the end of the nozzles in Fig. 5 the temperature anisotropy grows because the collision rate between particles drops as the density drops. Whether the random particle velocities are Maxwell-Boltzmann distributed will be studied in section IV about microscopic properties. In table 2 we compare the difference \(\Delta x_{c}=x_{c}-x_{c}^{0}\) between the calculated position \(x_{c}\) of the sonic horizon and the position \(x_{c}^{0}\) of minimal cross section area predicted by isentropic flow in the continuum description. In all cases the sonic horizon is "delayed" and shifted downstream, \(\Delta x_{c}>0\). With growing nozzle size, characterized by the throat width \(d_{m}\), the downstream shift decreases relative to the nozzle size, as quantified by the ratio \(\frac{\Delta x_{c}}{d_{m}}\) shown in the right column. In absolute numbers, \(\Delta x_{c}\) grows with size (middle column), until it actually drops for the largest nozzle. Surprisingly, our atomistic simulations indicate that for a sufficiently large nozzle the sonic horizon is situated right in the middle, with atomistic precision. ### Phase diagram Does the gas undergo a phase transition and condense into droplets at the end of the nozzle as it cools upon expansion? Fig. 6 shows the phase diagram of the LJ equation of state in the \((T,\rho)\) plane as determined from Ref. [33]. The saturation density curve shown in yellow is associated with the phase transition, but up to the critical density, shown as the blue curve, a supersaturated vapor phase or a superheated liquid phase is possible.
These supersaturated and superheated phases are metastable. The green curve in Fig. 6 shows the path of density and temperature values, shown in panels c) and i) of Fig. 5, of the gas expansion in the nozzle with throat width \(d_{m}=31.25\). Strictly speaking, only an adiabatically slow evolution of a LJ fluid has a well-defined path in the diagram of Fig. 6, which shows _equilibrium_ phases. But plotting the state during the expansion through the microscopic nozzle in Fig. 6 at least provides a qualitative description of the fluid at a particular position in the nozzle. The path would extend to about \(T=0.4\), but the equation of state from ref. [33] does not reach below \(T=0.7\). We note that the triple point, obtained from molecular simulation studies in Ref. [38], lies at \(T_{tr}=0.661\), below which the gas-liquid coexistence region becomes a gas-solid coexistence region. From the path traced by the expanding gas we see that the LJ fluid starts in the gas phase in the inlet. As temperature and density fall upon expansion, the fluid enters the gas-liquid coexistence region. In this region the fluid can remain in a metastable supersaturated gas phase. Below the triple point, even the gas-solid coexistence region is reached at the end of the nozzle. Our simulations show no evidence of a liquid or even a solid phase, which would appear as small liquid or solid clusters; the LJ particles remain unbound until reaching the outlet region of the nozzle. Either the gas remains metastable or it is so far out of local thermal equilibrium that the discussion in terms of the phase diagram is meaningless. The anisotropy of the temperature discussed in the previous section indicates that thermal equilibrium is not completely fulfilled. The absence of nucleation of clusters is not a surprise, because there is simply not enough time in a microscopic nozzle for nucleation under such dilute conditions before the gas reaches the outlet. \begin{table} \begin{tabular}{|r|c|c|} \hline \(d_{m}\) & \(\Delta x_{c}\) & \(\frac{\Delta x_{c}}{d_{m}}\) \\ \hline 3.90 & 3.78 & 0.97 \\ 7.80 & 4.97 & 0.64 \\ 15.60 & 5.96 & 0.38 \\ 31.25 & 6.09 & 0.19 \\ 62.50 & 2.74 & 0.044 \\ \hline \end{tabular} \end{table} Table 2: Downstream shift \(\Delta x_{c}\) of the sonic horizon with respect to the center position predicted by continuum fluid dynamics. Nozzles are characterized by the minimal width \(d_{m}\). The right column shows the dimensionless difference in relation to nozzle size, \(\frac{\Delta x_{c}}{d_{m}}\). Figure 6: Density-temperature phase diagram. Shown are the saturation density (yellow), the critical density (blue) and the critical point (purple) from the Lennard-Jones equation of state [33]. For the nozzle with a throat width \(d_{m}=31.25\) the path of temperature and density values is shown as a green curve. ## IV Microscopic properties Molecular dynamics simulations allow us to measure properties which are inaccessible in a macroscopic continuum mechanical description. We have already seen in the previous section that the temperature is slightly anisotropic, which is inconsistent with local equilibrium. In this section we take a closer look at quantities defined on an atomistic level: the velocity probability distribution (in equilibrium the Maxwell-Boltzmann distribution) and the velocity autocorrelation function.
Furthermore we study the propagation of density waves by calculating the upstream and downstream time-correlations of thermal density fluctuations of the stationary flow before, at, and after the sonic horizon. The goal is to check if the sonic horizon, found in the previous section by thermodynamic considerations, is also a well-defined boundary for upstream information propagation on the microscopic level. ### Velocity Distribution We have observed a temperature anisotropy, see panels c) and d) in Fig. 5. This raises the question whether the particle velocities even follow a Maxwell-Boltzmann distribution. If the velocities are not Maxwell-Boltzmann distributed, we do not have a well-defined kinetic temperature. This question is important for the interpretation of the results, for example when we discussed the temperature drop during expansion in the previous section. We now clarify whether it is meaningful to talk about temperature in microscopic nozzles. We calculate the velocity distribution for the two largest nozzles (see Fig. 5), shown in Fig. 7, by separately sampling the histograms for the \(x\), \(y\), and \(z\)-components of the velocity, where we subtract the steady flow velocity from the particle velocities, see appendix D. Since the velocity distribution depends on the location \(x\) in the nozzle, the histograms are two-dimensional, which requires a lot of data to sample from. Therefore we split \(x\) into only three regions \(x_{1}\), \(x_{2}\) and \(x_{3}\), depicted in the nozzle illustrations at the top of Fig. 7. The velocity distributions \(f(v_{x},x_{j})\) for the \(x\)-component of the velocity are shown in panels c) and d) for the two respective nozzles, each panel showing \(f(v_{x},x_{j})\) for all three regions \(x_{j}=x_{1},x_{2},x_{3}\) in blue, yellow, and green. Of course, the distributions become narrower for larger \(x_{j}\), consistent with a downstream drop of temperature in a Laval nozzle. We fit the histograms with Gaussian functions, i.e. the Maxwell-Boltzmann distribution, also shown in the panels. The corresponding results \(f(v_{y},x_{j})\) and \(f(v_{z},x_{j})\) for the other two velocity directions are shown in panels e)-h). It is evident that, apart from small statistical fluctuations, the Maxwell-Boltzmann distribution is a good fit in all cases. Thus the notion of temperature in these microscopic non-equilibrium systems makes sense. The width of the velocity distributions (i.e. the temperature) is, however, not quite the same in the three directions, in particular in region \(x_{3}\), the diverging part of the nozzle. In order to see this better, we compare the fits to \(f(v_{i},x_{3})\) for \(i=x,y,z\) in panels i) and j). The distribution of the \(y\)-component of the velocity is narrower than that of the other two directions. In other words, the temperature according to \(v_{y}\) is lower, thus the temperature is not isotropic. This means there is insufficient equilibration between the three translational degrees of freedom. The effect is more pronounced for the smaller nozzle because particles undergo fewer collisions before they exit the nozzle, as quantified by the larger Knudsen number, see Fig. 5. The spatial binning into just three regions \(x_{j}\) is rather coarse-grained as it neglects the temperature variation within a region.
With more simulation data a finer spatial resolution would be possible; however, we feel that the presented results are convincing enough that the thermal kinetic energy can be well-characterized by a temperature, albeit slightly different in each direction. ### Velocity Autocorrelation Function The velocity auto-correlation function, VACF, quantifies the "memory" of particles about their velocity. The VACF is defined as \[\text{VACF}(\tau)=\left\langle\mathbf{v}_{p}(t)\cdot\mathbf{v}_{p}(t+\tau) \right\rangle_{t,p} \tag{10}\] with \(\mathbf{v}_{p}(t)\) the velocity of particle \(p\) at time \(t\). \(\left\langle\dots\right\rangle_{t,p}\) denotes an average over time and over all particles. An ideal, i.e. non-interacting, particle has eternal memory, \(\text{VACF}(\tau)=\text{const}\). But due to interactions with the other particles, \(\text{VACF}(\tau)\to 0\) within microscopically short times. In the case of stationary flow, we need to subtract the flow velocity from the particle velocities in eq. (10). Furthermore, the VACF will depend on the \(x\)-coordinate in the nozzle. Therefore we generalize eq. (10) to a form which is suitable for stationary flow in a nozzle, depends on \(x\), and is not biased by the flow velocity. We also normalize the VACF such that it is unity at \(\tau=0\): \[\text{VACF}(x,\tau)=\frac{\left\langle\Delta\mathbf{v}_{p}(t)\cdot\Delta \mathbf{v}_{p}(t+\tau)\,\delta(x-x_{p}(t))\right\rangle_{t,p}}{\left\langle \Delta\mathbf{v}_{p}(t)^{2}\,\delta(x-x_{p}(t))\right\rangle} \tag{11}\] where \(\Delta\mathbf{v}_{p}(t)\equiv\mathbf{v}_{p}(t)-\mathbf{v}(x_{p}(t))\) is the thermal part of the velocity, after subtraction of the flow velocity \(\mathbf{v}\) at the particle coordinate \(x_{p}(t)\). Note that we define \(\text{VACF}(x,\tau)\) such that the spatial coordinate \(x\) coincides with the starting point \(x_{p}(t)\) at time \(t\) of the time correlation; at the final time \(t+\tau\), the particle has moved downstream to \(x_{p}(t+\tau)\). When we sample (11) with a MD simulation, the coordinate \(x\) and the correlation time \(\tau\) are discretized, and \(\delta(x-x_{p}(t))\) is replaced by binning a histogram in the usual fashion, see the appendix. Fig. 8 shows the VACF for various positions \(x\) in the nozzle. The calculations were done for two different nozzle sizes (left and right panels). The VACFs cannot be shown for \(x\) all the way to the end of the nozzles because particles leave the simulation before the velocity correlation can be evaluated. For example, if a particle in the smaller of the two nozzles in Fig. 8 is located at \(x=437\) at \(\tau=0\), it will have moved with the flow on average to \(x=537\) at \(\tau=50\), where the outlet region starts and particles are removed from the simulation. For \(x\) close to the outlet, the VACF would be biased because the average in eq. (11) would contain only particles which happen to travel slowly, e.g. slower than the flow average. The VACF decays monotonically for all \(x\) (in fact, Figure 7: Thermal part of the particle velocity distribution \(f(v_{i},x_{j})\) for two nozzle sizes with a throat width of \(31.25\,\sigma\) and \(62.5\,\sigma\) in the left and right column, respectively. Panels a) and b) show a schematic representation of those nozzles with the three regions \(x_{1}\), \(x_{2}\), and \(x_{3}\) for which the velocity distributions are obtained from the MD simulations. 
Panels c) to h) show the velocity distributions of the components \(v_{x}\), \(v_{y}\) and \(v_{z}\) for the different regions in the nozzle. Also shown are Gaussian fits (dashed lines). Panels i) and j) compare the fits to the three velocity components in the \(x_{3}\) region, in the diverging part of the nozzle. the VACF for only the \(y\)-component of the velocity (not shown) slightly overshoots to negative correlations in the divergent part of the nozzle, which is a trivial effect of wall collisions). The decay is slower further downstream because the density drops. Towards the ends of the nozzles, the mean free path becomes large, see Fig. 5, reaching the length \(z_{max}\) of the simulation box in \(z\)-direction, where periodic boundary conditions are applied. We demonstrate that the finite size bias in \(z\)-direction is negligible by comparing the VACFs for different choices of \(z_{max}\). If \(z_{max}\) were too small, two particles might scatter off each other more than once due to the periodic boundaries, which would lead to a spurious oscillation in the VACF. Panels e) and f) in Fig. 8 show VACF\((x,\tau)\) for \(z_{max}=86.2\sigma\), twice as large as in panels c) and d), corresponding to twice as many particles. Apart from the smaller statistical noise for larger \(z_{max}\), the VACFs for \(z_{max}=43.1\sigma\) and \(z_{max}=86.2\sigma\) are identical. This confirms that \(z_{max}=43.1\sigma\) is large enough to obtain reliable results. An interesting feature in the VACF for both nozzle sizes shown in Fig. 8 is a small shoulder around \(\tau\approx 4\) in the divergent part, i.e. a small additional velocity correlation. The inset in panel f) of Fig. 8 shows a close-up of the shoulder. Since this happens only at the low density in the divergent part of the nozzle, where the three-body collision rate is low, the shoulder can be expected to be a two-body effect. It is consistent with pairs of particles orbiting around each other a few times. We test this conjecture by estimating the orbit period of two bound atoms in thermal equilibrium. The orbit speed \(v\) shall be determined by the temperature \(T\). We further assume a circular stable orbit with diameter \(d\). The orbiting particles have two rotational degrees of freedom but also two times the mass of a single particle: \[\frac{1}{2}k_{B}T=\frac{1}{2}mv^{2}. \tag{12}\] The centrifugal force \(F_{c}\) and the attractive LJ force \(F_{LJ}\) must be balanced, \[F_{c}+F_{\rm LJ}=m\frac{2v^{2}}{d}-4\epsilon m\left(-\frac{12}{d^{13}}+\frac{6 }{d^{7}}\right)=0. \tag{13}\] The orbit period \(t_{rot}\) can now be calculated from eq. (12) and eq. (13), \[t_{\rm rot}=\pi\frac{d}{v}=\pi\left(\frac{m^{4}\left(6\epsilon\pm\sqrt{36\epsilon^{2}-24\epsilon k_{B}T/m}\right)}{k_{B}^{4}T^{4}}\right)^{1/6}, \tag{14}\] which expresses \(t_{\rm rot}\) as a function of the temperature. Figure 8: Normalized velocity auto-correlation function VACF\((x,\tau)\), eq. (11), along the nozzle with color coded \(x\)-position. Panels a) and b) show the shape and size of two nozzles, indicating the color scale for \(x\) in the panels below. Panels c) and d) show the VACF for nozzles with a distance \(z_{max}=43.1\,\sigma\) between the periodic boundaries in \(z\)-direction. Panels e) and f) show the same for \(z_{max}=86.2\,\sigma\). The inset of panel f) shows a close-up of the shoulder around \(\tau=4\), discussed in the text. When we plug in a typical temperature towards the end 
of the nozzles of \(T\approx 0.5\), we obtain an orbit time \(t_{\rm rot}\approx 5\), which is similar to the time at which the shoulder in the VACF appears, see Fig. 8. This does not mean that bound dimers form in the supercooled flow near the exit of the nozzle, which requires three-body collisions. But the estimate based on bound states is applicable also to spiral-shaped scattering processes where two particles orbit each other. The good agreement between \(t_{\rm rot}\) and the shoulder indicates that such scattering processes occur, and may be a seeding event for the nucleation of van der Waals clusters and condensation in larger nozzles. ### Density fluctuation correlations and the sonic horizon The calculation of the speed of sound \(c\) according to eq. (2), using the equation of state from Ref. [33], assumes local thermal equilibrium. However, the anisotropy of the temperature, see Fig. 5, shows that not all degrees of freedom are in local equilibrium during the fast expansion through a microscopic nozzle. Therefore, locating the sonic horizon may be biased by non-equilibrium effects. It is not even clear if a sonic horizon, the definition of which is based on macroscopic fluid dynamics, is microscopically well-defined. While the thermal velocities of the atoms follow Maxwell-Boltzmann distributions, there are always particles in the tails of the distribution that travel upstream even after the sonic horizon. So maybe information can travel upstream on the microscopic scale of our nozzles, negating the existence of a sonic horizon. The MD method provides the microscopic tools to answer this question by calculating spacetime correlations of density fluctuations: if density fluctuations propagate upstream even in the divergent part of the nozzle, there is no sonic horizon. We quantify the density fluctuation correlations before, at, and after the sonic horizon predicted from the calculation of the speed of sound. The instantaneous density \(\rho(x,t)\) at position \(x\) and time \(t\) is evaluated as described in appendix A. The density fluctuation, i.e. the random deviation at time \(t\) from the average density at position \(x\), is obtained by subtracting the time-averaged density (shown in Figs. 3, 4, and 5) from \(\rho(x,t)\), \(\Delta\rho(x,t)=\rho(x,t)-\left<\rho(x,t)\right>_{t}\). Note that fluctuations of the density depend also on \(y\) and \(z\), but we are interested in the fluctuations relative to the sonic horizon, and thus in fluctuations between different positions \(x\) in the nozzle. The correlation between a density fluctuation at \(x\) and \(t\) and a density fluctuation at \(x+\delta x\) and \(t+\tau\) is given by the time average \[S(\tau,x,\delta x)=\frac{\left<\Delta\rho(x,t)\,\Delta\rho(x+\delta x,t+\tau) \right>_{t}}{\left<\Delta\rho(x,t)\,\Delta\rho(x,t)\right>_{t}} \tag{15}\] \(S\) is normalized such that it is unity for zero spatial and temporal shifts, \(S(0,x,0)=1\). In Fig. 9 we show the density fluctuation correlations \(S(\tau,x,\delta x)\) in a nozzle with throat width \(31.25\sigma\), evaluated at \(6\) different positions \(x\) in the nozzle and for three relative position offsets \(\delta x=p\,\sigma\) with \(p\in\{-1,0,1\}\). The position \(x\) in the nozzle is indicated in an inset in each panel. The density binning, with bin size \(\sigma\), is illustrated at the top of Fig. 9, which shows three adjacent bins at \(x\), \(x+\sigma\) and \(x-\sigma\), corresponding to \(p=-1,0,1\) in the figure labels. 
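The correlation function of eq. (15) can be estimated from a time series of binned instantaneous densities with a few lines of code. The following sketch assumes the densities have already been histogrammed into slices of width \(\sigma\) and stored as a two-dimensional array; the names and shapes are illustrative assumptions, not the actual analysis scripts used here.

```python
import numpy as np

def density_fluctuation_correlation(rho_xt, x_idx, offset, max_lag):
    """Estimate S(tau, x, dx) of eq. (15) from binned densities.

    rho_xt  : (n_bins, n_steps) instantaneous density rho(x, t)
    x_idx   : index of the reference bin at position x
    offset  : bin offset dx in units of the bin width (e.g. -1, 0, +1 or -2, 0, +2)
    max_lag : largest time lag (in stored steps) to evaluate
    """
    d_rho = rho_xt - rho_xt.mean(axis=1, keepdims=True)    # density fluctuations
    ref, tgt = d_rho[x_idx], d_rho[x_idx + offset]
    n = ref.size
    norm = np.mean(ref * ref)                              # enforces S(0, x, 0) = 1
    return np.array([np.mean(ref[:n - lag] * tgt[lag:]) / norm
                     for lag in range(max_lag + 1)])
```

Evaluating the upstream offsets (negative values) for bins before and after the throat should reproduce the qualitative behaviour of Figs. 9-11: the upstream correlation gradually collapses as the local flow velocity approaches and exceeds the local speed of sound.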
The self correlation \(S(\tau,x,0)\) (yellow curves), describing only the temporal decay of the density correlations at \(x\), is mainly influenced by the flow velocity and decays faster for higher flow velocities because density fluctuations are transported away more quickly. The upstream correlations \(S(\tau,x,-\sigma)\) (blue curves) and the downstream correlations \(S(\tau,x,\sigma)\) (green curves) are more interesting. Both correlations are small at zero delay time \(\tau=0\), because a density fluctuation at \(x\) needs some time to disperse to neighboring density bins. At position \(x=10\), where the flow speed is still small, there is no noticeable difference between upstream and downstream correlation. For larger \(x\), hence for larger flow speed, the forward correlation increases and the backward correlation decreases, because the density fluctuation disperses with the flow or against the flow, respectively. According to the local speed of sound calculated in the previous section, see table 2, there is a sonic horizon at \(x=306\) for the nozzle size in Fig. 9. Indeed, for \(x=300\), the backward correlation has no peak anymore, but decreases monotonically from a small non-zero value at \(\tau=0\). For even larger \(x\), the upstream correlation decays more rapidly, yet it never completely vanishes at \(\tau=0\). The reason for this apparent contradiction to the existence of a sonic horizon is that the distance between bins and the width of the bins are both \(\sigma\). The finite value at \(\tau=0\) is an artifact caused by the density bins being directly adjacent to each other, see the illustration in Fig. 9: a density fluctuation at \(x\) will immediately have an effect on the adjacent bins at \(x+\sigma\) and \(x-\sigma\) since they share a boundary. In order to remove this bias, we also calculated the correlations with offsets \(\delta x=\pm 2\sigma\), \(S(\tau,x,2\sigma)\) and \(S(\tau,x,-2\sigma)\), such that the upstream and downstream bins do not share a boundary with the bin at \(x\). In Fig. 10 we compare the two choices of offsets. The left panels are taken from Fig. 9, where \(\delta x\in\{-\sigma,0,\sigma\}\); the right panels show \(S(\tau,x,\delta x)\) with \(\delta x\in\{-2\sigma,0,2\sigma\}\), with a twice as large \(\tau\) range, because density fluctuations have to travel twice as far. The upstream and downstream correlations now vanish for zero time delay \(\tau=0\). The upstream correlation \(S(\tau,x,-2\sigma)\) right at the throat at \(x=300\sigma\) is very small but does not quite vanish, which is consistent with a location of the sonic horizon predicted at \(x=306\sigma\) according to the speed of sound. Further downstream at \(x=350\sigma\), however, \(S(\tau,x,-2\sigma)\) indeed vanishes within the error bars. This means that information about density fluctuations cannot travel backwards beyond the sonic horizon, even on the microscopic scale of a distance of just \(2\sigma\). A microscopic Laval nozzle does have a sonic horizon. We also calculated the density fluctuation correlations for a nozzle twice as large (length \(L=1250\,\sigma\) and throat width \(d=62.5\sigma\)). Fig. 11 compares the corresponding results with those shown in Fig. 10. For the comparison, we scaled all lengths by two: the bins are \(2\,\sigma\) wide, separated by \(4\,\sigma\), see illustration at the top of Fig. 11. We compare \(S(\tau,x,\delta x)\) of the smaller nozzle with \(S(2\tau,2x,2\delta x)\) of the larger one, i.e. 
at the same relative positions with the same relative upstream and downstream offset, and showing twice the time window for the larger nozzle. According to the speed of sound, the sonic horizon for the larger nozzle is located at \(x=603\,\sigma\) (see table 2), very close to the throat at \(x=600\,\sigma\). The comparison in Fig. 11 shows that the density fluctuation correlations are very similar for equal relative positions for both nozzles. Also for the larger nozzle, the correlations are very small at the throat. Further downstream at \(x=350\sigma\) and \(x=700\sigma\), respectively, both nozzles exhibit no upstream correlations. Our calculations confirm that the thermodynamic determination of a sonic horizon, based on the equation of Figure 9: Density fluctuation correlations \(S(\tau,x,\delta x)\), eq. (15). Panels a) to f) show the self correlation \(S(\tau,x,0)\) in yellow, a backward correlation \(S(\tau,x,-\sigma)\) in blue and a forward correlation \(S(\tau,x,\sigma)\) in green for different positions \(x\) in the nozzle as given in the insets. The illustration at the top shows the density bins used for calculating \(S(\tau,x,\delta x)\): \(S(\tau,x,0)\) is obtained by correlating the yellow bin with itself, \(S(\tau,x,\sigma)\) or \(S(\tau,x,-\sigma)\) are obtained by correlating the yellow bin with the green or blue bin, respectively. state, is valid, although the anisotropy of the temperature indicates that the rapid expansion through the nozzles hinder complete local thermal equilibrium. The location of the sonic horizon is consistent with the vanishing of upstream time correlations of density fluctuations. The existence of a microscopically narrow sonic horizon is a non-trivial result, considering the large estimated Knudsen numbers. ## V Conclusion We studied the expansion of a gas of Lennard-Jones particles and its transition from subsonic to supersonic flow through microscopic Laval slit nozzles into vacuum. Our goal was to assess to what extent Laval nozzles with throat widths down to the scale of a few atom diameters still follow the same mechanisms as macroscopic nozzles where, given a sufficiently low outlet pressure, the gas flow becomes supersonic in the nozzle throat. For our study we used non-equilibrium molecular dynamics Figure 10: Comparison of density fluctuation correlation \(S(\tau,x,p\sigma)\) for different offsets, \(p\in\{-1,0,1\}\) (left panels) and \(p\in\{-2,0,2\}\) (right panels). At the top the respective binning is illustrated. In the insets the reference position \(x\) is indicated. The sonic horizon is situated slightly downstream of the nozzle throat (\(x=300\)) at \(x=306\), according to the thermodynamic calculation of the local speed of sound. (MD) simulations. MD is computationally demanding but makes the fewest approximations. We considered idealized nozzles with atomically flat surfaces with perfect slip to avoid boundary layer effects. We introduced three thermodynamic regions for the non-equilibrium molecular dynamic simulation: an inlet region, the nozzle region and the outlet region. In the inlet and outlet region, particle insertions and deletions are realized by grand canonical Monte Carlo sampling [29]. After equilibration this allows to study stationary flows. We obtained the thermodynamic state variables temperature, density, flow velocity, and pressure and their spatial dependence, as well as the Knudsen number, Mach number, velocity auto-correlation, and velocity distribution of the gas for nozzles of different sizes. 
We found a well-defined sonic horizon, i.e. the surface where the flow becomes supersonic, and analyzed it via space-time correlations of density fluctuations. We studied how the expansion dynamics depend on the nozzle size. Lower temperatures and correspondingly higher velocities and Mach numbers of the expanding gas are reached for larger nozzles, converging to predictions for isentropic expansion of an ideal gas continuum. Figure 11: Comparison of the density fluctuation correlations \(S(\tau,x,\delta x)\) for two nozzles with throat widths \(d=31.25\,\sigma\) (left panels) and \(d=62.5\,\sigma\) (right panels), respectively. All lengths are scaled by two for the larger nozzle, such that we compare the correlations for equal relative positions. At the top the density bin spacing is illustrated and the insets show the positions \(x\). With non-equilibrium molecular dynamics we can observe phenomena which cannot be studied in continuum fluid dynamics, which assumes local thermodynamic equilibrium. We found that this assumption is violated for microscopic nozzles. The kinetic energy in the three translational degrees of freedom cannot equilibrate completely and is slightly different for each individual translational degree of freedom. The velocity components are still Maxwell-Boltzmann distributed, with a different width for each direction, which corresponds to an anisotropic temperature. The LJ fluid in the inlet is in a vapor phase, but upon expansion through the nozzle it becomes supersaturated. At the end of the nozzle it is in the vapor-solid coexistence region. Indeed, in the velocity auto-correlation function, VACF, we see indications of metastable pairs of particles. Since the expanding gas does not reach equilibrium in our microscopic nozzles, no clusters are formed. Cluster formation could be studied by enlarging the simulation and including the low density region after the nozzle, giving the fluid enough time to equilibrate. The investigation of the sonic horizon with the help of spacetime-dependent correlations of density fluctuations showed that the position of the sonic horizon obtained from calculating the local speed of sound matches the position where density correlations practically cannot propagate against the flow. A microscopic distance on the order of the LJ particle size \(\sigma\) is already enough to completely suppress the backward correlations. The vanishing of backward time correlations does of course not happen abruptly at the sonic horizon; instead the backward correlations decrease gradually with the increasing flow velocity toward the sonic horizon. At the same time the forward correlations increase with the flow velocity. For larger microscopic nozzles, the simple macroscopic description relating the cross section to the Mach number is quite accurate. For smaller nozzles the position of the sonic horizon is shifted downstream. In future work, it will be interesting to study nozzles with rough walls. The gas expansion through microscopic nozzles will be strongly affected by the boundary layer near the walls. Another topic of practical interest is the co-expansion of a carrier noble gas seeded with molecules to investigate the cooling efficiency of rotational and vibrational degrees of freedom of the molecules. This models the cooling of molecules for molecular beam spectroscopy. 
We note that nozzles for molecular beam spectroscopy are significantly larger than those studied here, with nozzle diameters of the order of tens of \(\mu m\), instead of tenths of \(nm\). Increasing the outlet region will allow us to study not only the condensation of the gas into clusters, but also the effect of a finite exit pressure on the position of the sonic horizon [13]. We acknowledge inspiring discussions with Stefan Pirker. ## Appendix A Density calculation The density \(\rho(x)\) as a function of position \(x\) in the nozzle is calculated by binning the \(x\)-coordinate of all particles. Since we are interested in stationary flow situations, we can take time averages of the number of particles in the bin of volume \(V_{\rm bin}(x)\). The binning volumes are slices, usually of thickness \(\sigma\), which are centered at \(x\), as illustrated in Fig. 12. This average can be written as \[\rho(x)=\left\langle\frac{1}{V_{\rm bin}(x)}\sum_{i:p_{i}\in V_{\rm bin}(x)}1 \right\rangle_{t}\equiv\left\langle 1\right\rangle_{t,V_{\rm bin}(x)} \tag{10}\] with the sum counting all particles \(p_{i}\) in the volume of bin \(V_{\rm bin}(x)\), and the bracket denoting the time average. For calculations of spacetime density correlations we need the instantaneous density at \(x\) at time \(t\), which we obtain by omitting the time average in eq. (10) \[\rho(x,t)=\frac{1}{V_{\rm bin}(x)}\sum_{i:p_{i}\in V_{\rm bin}(x)}1 \tag{11}\] The determination of \(V_{\rm bin}(x)\) is not trivial, since the wall is not a well-defined hard boundary, but realized by the LJ potential (4). Choosing \(z=0\) in eq. (4) for the volume calculation would overestimate the real volume effectively available for the particles, because it neglects the thickness of the "skin" due to the finite value of \(\sigma\). We determined that \(z=0.8\,\sigma\) is the most suitable choice in the following way: we simulated a small nozzle (the size depicted in Fig. 12) with a constriction so narrow that almost no particles pass through in the course of a simulation. The wall position \(z\), and hence the effective volume \(V_{\rm bin}(x)\), is determined such that the density \(\rho(x)\) in the left half of the nozzle, obtained from (10), is constant as expected for an equilibrium simulation in a closed geometry. If the skin thickness were over- or underestimated, we would obtain a density increase or decrease towards the constriction, respectively. ## Appendix B Pressure calculation Figure 12: Bin volumes of width \(\sigma\) used for calculating the density \(\rho(x)\). The pressure is calculated from the diagonal elements of the stress tensor which is calculated for each individual particle \(i\) as [30, 36] \[S_{iab}=-m_{i}v_{ia}v_{ib}-\frac{1}{2}\sum_{\begin{subarray}{c}j:p_{j}\in V_{i} \\ j\neq i\end{subarray}}(r_{ia}F_{ijb}-r_{ja}F_{ijb}) \tag{10}\] with \(a,b\in\{x,y,z\}\) the Cartesian components. The first term is the ideal gas contribution and is biased by the collective flow speed. Since only the thermal motion should contribute to \(S_{iab}\), the flow velocity must be subtracted from \(\vec{v}_{i}\), see appendix C below for the calculation of the flow velocity. The second term is the virial contribution from the LJ-interaction. The summation is over all particles \(j\) within \(r_{c}\) from particle \(i\), where \(r_{c}\) is the cut-off radius of the LJ potential. This defines the cut-off volume \(V_{i}\) of particle \(i\). 
\(r_{ia}\) is component \(a\in\{x,y,z\}\) of the coordinate of particle \(i\) and \(F_{ijb}\) is component \(b\) of the force of the pairwise interaction between particles \(i\) and \(j\). We calculate the pressure \(p(x)\) at position \(x\) in the nozzle by averaging the diagonal elements of the stress tensor \(S_{iab}\) over all particles \(i\) within the bin volume \(V_{\text{bin}}(x)\), \[p(x)=-\left\langle\frac{\rho(x)}{3}\left(S_{ixx}+S_{iyy}+S_{izz}\right)\right\rangle _{t,V_{\text{bin}}(x)} \tag{11}\] with \(\left\langle\right\rangle_{V_{\text{bin}}(x)}\) denoting the average over \(V_{\text{bin}}(x)\). We also average over the three diagonal elements because we assume an isotropic stress tensor. Remembering that the temperature is not isotropic in the nozzle, the assumption of an isotropic stress tensor may not be valid. Inserting the stress tensor (10) into the expression (11) for the local pressure, we obtain \[p(x) =\rho(x)k_{\text{B}}T(x)+\frac{1}{3}\Bigg\langle\sum_{ \begin{subarray}{c}j:p_{j}\in(V_{i}\cap V_{\text{bin}}(x))\\ j\neq i\end{subarray}}\mathbf{r}_{i}\mathbf{F}_{ij}\Bigg\rangle_{t,V_{\text{bin}}(x)}\] \[+\frac{1}{6}\Bigg\langle\sum_{\begin{subarray}{c}j:p_{j}\in(V_{i}\setminus V_{\text{bin}}(x))\\ j\neq i\end{subarray}}\mathbf{r}_{j}\mathbf{F}_{ji}\Bigg\rangle_{t,V_{\text{bin}}(x)} \tag{12}\] where in the calculation of the local virial we have to distinguish between neighbor particles \(p_{j}\) which are also in the same binning volume \(V_{\text{bin}}(x)\) as particle \(p_{i}\) (giving rise to the first virial expression with the common prefactor \(\frac{1}{3}\)) and those which are not (the second virial expression with the prefactor \(\frac{1}{6}\)). For the first virial expression we could use \(\mathbf{F}_{ij}=-\mathbf{F}_{ji}\) and swap the summation indices \(i\) and \(j\), leading to a factor 2. For the particles \(p_{j}\) which are not in volume \(V_{\text{bin}}(x)\) this cannot be done, and each force \(\mathbf{F}_{ij}\) contributes just once. ## Appendix C Calculation of Velocity The velocity field \(\mathbf{v}(x,y)\) in the nozzle depends on both the \(x\)- and the \(y\)-coordinate. The velocity is not only a key quantity for Laval nozzles, but also required for obtaining the temperature \(T\), because \(\mathbf{v}(x,y)\) needs to be subtracted from the particle velocities for the calculation of \(T\), see appendix D below. Fig. 13 illustrates the bin volumes \(V_{\text{bin}}(x,y)\) for the calculation of \(\mathbf{v}(x,y)\), as opposed to the bin slices in Fig. 12. The time averaged flow velocity \(\mathbf{v}\) in a bin volume \(V_{\text{bin}}(x,y)\) can be calculated as \[v_{a}(x,y)=\left\langle\frac{1}{N(x,y)}\sum_{i:p_{i}\in V_{\text{bin}}(x,y)}v_ {ai}\right\rangle_{t} \tag{13}\] with \(a\in\{x,y,z\}\), \(v_{ai}\) the velocity component \(a\) of particle \(p_{i}\), and \(N(x,y)\) the number of particles in \(V_{\text{bin}}(x,y)\) at a given time. The magnitude of the flow velocity is \[v(x)=\sqrt{\left\langle v_{x}(x,y)\right\rangle_{y}^{2}+\left\langle v_{y}(x, y)\right\rangle_{y}^{2}} \tag{14}\] On average there is no flow in \(z\)-direction, \(v_{z}(x,y)=0\). ## Appendix D Temperature calculation In order to investigate how the gas cools upon expanding supersonically through the nozzle, we need to calculate the position-dependent temperature \(T(x)\). 
The microscopic definition of the temperature is the kinetic energy of the _random_ part of the particle velocity, hence we need to subtract the flow velocity \(\mathbf{v}(x,y)\) discussed in the previous section: \[k_{\text{B}}T(x,y)=m\left\langle\frac{1}{3N(x,y)-3}\sum_{i:p_{i}\in V_{\text{bin}}(x,y)}(\mathbf{v}_{i}-\mathbf{v}(x,y))^{2}\right\rangle_{t} \tag{15}\] We are interested only in the \(x\)-dependence of the temperature and therefore we average over \(y\) \[T(x)=\left\langle T(x,y)\right\rangle_{y} \tag{16}\] Note that subtracting the flow velocity removes three translational degrees of freedom, which we account for by subtracting \(3\) from the number of degrees of freedom of the \(N(x,y)\) particles in binning volume \(V_{\text{bin}}(x,y)\). In Eq. (15) we average over the contribution of the three velocity components, which is fine in an isotropic system. In order to test whether the temperature is isotropic or not (and indeed we find it is not), we calculate the direction-dependent kinetic temperature \[k_{\text{B}}T_{a}(x,y)=m\left\langle\frac{1}{N(x,y)-1}\sum_{i:p_{i}\in V_{\text{bin}}(x,y)}(v_{ia}-v_{a}(x,y))^{2}\right\rangle_{t} \tag{17}\] with \(a\in\{x,y,z\}\). Again, we are interested only in how \(T_{a}\) varies with position \(x\) along the nozzle, hence we average over \(y\), \(T_{a}(x)=\left<T_{a}(x,y)\right>_{y}\). Figure 13: Bin volumes \(V_{\text{bin}}(x,y)\) with side length \(\sigma\) in \(x\)- and \(y\)-direction.
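As an illustration of the binning procedures of appendices C and D, the sketch below computes directional kinetic temperatures from one snapshot of particle positions and velocities, subtracting the per-bin mean velocity as a stand-in for the flow field; in the actual analysis the time-averaged flow velocity \(\mathbf{v}(x,y)\) and a time average over many configurations would be used. Bin edges, array names, and the single-snapshot simplification are assumptions for illustration only.

```python
import numpy as np

def directional_temperature(pos, vel, x_edges, y_edges, m=1.0):
    """Directional kinetic temperatures T_a(x), a in {x, y, z} (cf. eq. (17))."""
    nx, ny = len(x_edges) - 1, len(y_edges) - 1
    ix = np.clip(np.digitize(pos[:, 0], x_edges) - 1, 0, nx - 1)
    iy = np.clip(np.digitize(pos[:, 1], y_edges) - 1, 0, ny - 1)
    T = np.zeros((3, nx))
    bins_used = np.zeros(nx)
    for bx in range(nx):
        for by in range(ny):
            sel = (ix == bx) & (iy == by)
            n = int(sel.sum())
            if n < 2:
                continue
            dv = vel[sel] - vel[sel].mean(axis=0)             # remove flow velocity v(x, y)
            T[:, bx] += m * (dv ** 2).sum(axis=0) / (n - 1)   # one DOF removed per direction
            bins_used[bx] += 1
    return T / np.maximum(bins_used, 1)                       # average over the y-bins
```

Averaging the three returned rows gives the isotropic estimate of eqs. (15)-(16), while comparing them exposes the temperature anisotropy.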
2309.10973
Reachability Analysis for Lexicase Selection via Community Assembly Graphs
Fitness landscapes have historically been a powerful tool for analyzing the search space explored by evolutionary algorithms. In particular, they facilitate understanding how easily reachable an optimal solution is from a given starting point. However, simple fitness landscapes are inappropriate for analyzing the search space seen by selection schemes like lexicase selection in which the outcome of selection depends heavily on the current contents of the population (i.e. selection schemes with complex ecological dynamics). Here, we propose borrowing a tool from ecology to solve this problem: community assembly graphs. We demonstrate a simple proof-of-concept for this approach on an NK Landscape where we have perfect information. We then demonstrate that this approach can be successfully applied to a complex genetic programming problem. While further research is necessary to understand how to best use this tool, we believe it will be a valuable addition to our toolkit and facilitate analyses that were previously impossible.
Emily Dolson, Alexander Lalejini
2023-09-20T00:16:56Z
http://arxiv.org/abs/2309.10973v1
# Reachability Analysis for Lexicase Selection via Community Assembly Graphs ###### Abstract Fitness landscapes have historically been a powerful tool for analyzing the search space explored by evolutionary algorithms. In particular, they facilitate understanding how easily reachable an optimal solution is from a given starting point. However, simple fitness landscapes are inappropriate for analyzing the search space seen by selection schemes like lexicase selection in which the outcome of selection depends heavily on the current contents of the population (i.e. selection schemes with complex ecological dynamics). Here, we propose borrowing a tool from ecology to solve this problem: community assembly graphs. We demonstrate a simple proof-of-concept for this approach on an NK Landscape where we have perfect information. We then demonstrate that this approach can be successfully applied to a complex genetic programming problem. While further research is necessary to understand how to best use this tool, we believe it will be a valuable addition to our toolkit and facilitate analyses that were previously impossible. ## 1 Introduction Lexicase selection is a state-of-the art parent-selection algorithm for genetic programming [27]. It has proven highly effective across a wide variety of problems [4; 17; 19; 22; 23; 24], and has spawned many variants [2; 13; 18; 28]. One challenge of working with lexicase selection, however, is that most fitness-landscape-based analytical techniques do not directly apply to it. Fitness landscapes represent the mapping of genotypes to fitness and the adjacency of genotypes to each other, providing intuition for which genotypes are (easily)
2307.16676
End-to-End Reinforcement Learning for Torque Based Variable Height Hopping
Legged locomotion is arguably the most suited and versatile mode to deal with natural or unstructured terrains. Intensive research into dynamic walking and running controllers has recently yielded great advances, both in the optimal control and reinforcement learning (RL) literature. Hopping is a challenging dynamic task involving a flight phase and has the potential to increase the traversability of legged robots. Model based control for hopping typically relies on accurate detection of different jump phases, such as lift-off or touch down, and using different controllers for each phase. In this paper, we present a end-to-end RL based torque controller that learns to implicitly detect the relevant jump phases, removing the need to provide manual heuristics for state detection. We also extend a method for simulation to reality transfer of the learned controller to contact rich dynamic tasks, resulting in successful deployment on the robot after training without parameter tuning.
Raghav Soni, Daniel Harnack, Hannah Isermann, Sotaro Fushimi, Shivesh Kumar, Frank Kirchner
2023-07-31T13:51:29Z
http://arxiv.org/abs/2307.16676v2
# End-to-End Reinforcement Learning for Torque Based Variable Height Hopping ###### Abstract Legged locomotion is arguably the most suited and versatile mode to deal with natural or unstructured terrains. Intensive research into dynamic walking and running controllers has recently yielded great advances, both in the optimal control and reinforcement learning (RL) literature. Hopping is a challenging dynamic task involving a flight phase and has the potential to increase the traversability of legged robots. Model based control for hopping typically relies on accurate detection of different jump phases, such as lift-off or touch down, and using different controllers for each phase. In this paper, we present an end-to-end RL based torque controller that learns to implicitly detect the relevant jump phases, removing the need to provide manual heuristics for state detection. We also extend a method for simulation to reality transfer of the learned controller to contact rich dynamic tasks, resulting in successful deployment on the robot after training without parameter tuning. ## I Introduction Dynamic legged locomotion evolved as a versatile strategy to traverse natural or unstructured terrains. Thus, legged robots such as quadrupeds and humanoids are popular for applications performed in these environments, either autonomously or alongside a human. Quasi-instantaneously making and breaking contacts with the environment is an integral part of legged locomotion, which leads to highly nonlinear, non-smooth dynamics. Thus, from a control perspective, dynamic legged locomotion requires significantly more complex algorithms than e.g. wheeled locomotion. Whereas the problem of dynamic walking on real robots has been solved by various techniques from optimal control (OC) [1, 2, 3] or reinforcement learning (RL) [4, 5, 6, 7, 8, 9], there is considerably less research for the even more dynamic locomotion type of hopping. Hopping can increase a system's mobility, since it allows for leaping over obstacles that cannot be surpassed otherwise [10, 11]. However, hopping incurs even more control complexity since there is a considerable flight phase during which the system has limited possibilities to adjust for the impact, and the center of mass trajectory is largely determined when the feet leave the ground. The canonical system to study hopping, which is also used in this paper, is a single hopping leg or monoped. Indeed, one of the earliest robotic systems showing dynamic legged locomotion was a single leg that could navigate a flat surface by jumping [12]. To control the hopping height, a heuristically tuned force controller was used, motivated by an energy shaping algorithm [13]. These seminal studies sparked a wealth of research into the control of single legged hopping machines. A common theme of such controllers is the reliance on detection of various states, e.g. lift-off, peak attitude, touchdown, and minimum attitude [13, 14, 15, 16, 17, 18]. The full jumping controller is then realized as a state machine, where PD controllers are typically used during flight phases, while stance phase states are directly controlled by torque or force. Deploying these controllers on hardware requires hand-tuning parameters and system specific adaptations to account for model inaccuracies or unmodelled dynamics. Also, the detection of different jump states and appropriate control output during the lift-off phase relies on accurate height estimation and contact detection, for which further heuristics are typically employed. 
RL offers the promise to alleviate these issues. We hypothesize that, since neural networks are universal function approximators [19, 20], learning based controllers with neural network function approximators are able to implicitly detect relevant jump phases and thus realize a unified hopping controller without the need for explicit state transition heuristics. Fig. 1: RL based torque controlled jumping snapshots in simulation and on the real robot. Fig. 2: Comparison of control concepts. Even more, a fully integrated end-to-end solution for hopping should be possible via RL, mapping only proprioceptive feedback, i.e. actuator positions and velocities, to direct torque control, since this proprioceptive data is theoretically sufficient to implement a variable jumping height controller. The comparison of our method to a classical approach is visualized in Fig. 2. Whereas several previous studies utilized learning based controllers for jumping [11, 21], they focussed on single leaps, making implicit detection of jump phases less critical than for continuous jumping. From the data, it can also not be derived whether such a phase detection is actually realized. Recently, an RL based continuous hopping controller with adjustable jumping height for a small quadruped was developed in [22]. While this shows the feasibility of a unified controller that does not require a state machine, it still relied on height estimation and PD control, and thus cannot be considered a truly end-to-end learning approach. The widespread use of PD controllers in RL research is in part a consequence of the high sample complexity of many algorithms, which necessitates training in simulation. Using PD controllers reduces the requirements on the accuracy of the dynamics simulation and thus increases the chances of successful simulation to reality transfer. However, this requires tuning of PD gains for a successful sim2real transfer and may additionally hinder performance in highly dynamic tasks such as jumping, where direct torque control can unlock the full dynamical capabilities of a system [4, 23]. In summary, all previous approaches from classical control and RL require a subset of height estimation, contact detection, hyperparameter tuning, PD control, or a behavior state machine. In this paper, we present an RL based method that requires none of the above. We show successful training and simulation to reality transfer of a torque controller with implicit jumping phase detection and controllable jumping height, while only relying on proprioceptive feedback 1. To achieve this, we draw inspiration from energy shaping for the design of the reward function, and extend a previous technique for accurate simulation to reality transfer [24] to higher dimensional parameter spaces and dynamic, contact rich tasks. To the best knowledge of the authors, such a controller is described for the first time for a monoped. Footnote 1: [https://github.com/dfki-ric-underactuated-lab/hopping_leg](https://github.com/dfki-ric-underactuated-lab/hopping_leg) ## II Materials and Methods ### _Robotic System_ The robot used for the experiment is a custom-made 3 degrees of freedom (DOF) hopping leg system, mounted on a vertical rail with 1 passive DOF and 2 active DOFs. Fig. 3 shows a photo of the system, along with a 3D design model. The 2 active DOFs in the leg are actuated via quasi-direct drive motors qdd100 from mjbots [25] operating at a frequency of \(200Hz\). 
While the shoulder joint shares the joint axis with the motor axis, the elbow joint is driven by a motor placed at the shoulder via a belt drive with transmission ratio = 1:2. The housing is a lightweight carbon fiber construction. ### _Energy Shaping_ As a baseline comparison, we implemented a classical energy shaping (ES) controller in simulation [26]. This ES controller is part of a finite state machine [27]. As shown in Fig. 4, the state machine consists of three states: * _Lift-off_: This phase is used to apply the desired energy with the ES controller for the next jump. It ends when the leg loses its ground contact. * _Flight_: In the flight phase, the leg prepares for the touchdown by moving into a predefined pose. The phase ends with the first contact of the leg with the ground. * _Touchdown_: During touchdown, the leg damps its movement using high damping and low positional gains. It ends when the base velocity \(\dot{x}\geq 0\). #### II-B1 Controller design As mentioned above, the desired energy \(E_{d}\) which is required to jump to a desired base height \(x_{d}\) has to be applied during the lift-off phase. For simplicity, we assume the robot to be a point mass \(m\) at its base. In this case, the required energy can be calculated with: \[E_{d}=mgx_{d} \tag{1}\] Here, \(g\) is the gravitational acceleration. Accordingly, we can estimate the reached energy \(E_{j-1}\) from the last jump using the estimated jumping height \(x_{j-1}\): \[E_{j-1}=mgx_{j-1} \tag{2}\] For reaching this energy, we needed to apply a feed-forward force \(F_{f,j-1}\) to the ground while the leg had ground contact. For the next jump, we estimate the new feed-forward force with: \[F_{f,j}=\frac{mg\left(x_{d}-x_{0,j}\right)}{\Delta x_{l,j}} \tag{3}\] Fig. 4: State machine used for the energy shaping controller. Fig. 3: Hopping leg used in the experiments. Here, \(x_{0,j}\) is the current minimum base height after the touchdown, and \(\Delta x_{l,j}\) is the expected distance to be covered during the current lift-off, calculated from \(x_{0,j}\). Since the formerly applied force is proportional to the reached energy \(E_{j-1}\), we can write: \[E\propto kF_{f} \tag{4}\] and as the feed-forward force is almost constant: \[\dot{E}\propto\dot{k}F_{f} \tag{5}\] Thus, we can control the energy in the system by altering the gain \(k\). To control the gain \(k\), the following update rule has been used: \[k_{j}=k_{j-1}\left(\frac{E_{d}}{E_{j-1}}\right)^{2} \tag{6}\] As joint torque controller, Cartesian stiffness control with the following control law was used [28]: \[\tau=J^{T}(q)\left[k_{j}\begin{pmatrix}F_{f,j}\\ 0\end{pmatrix}+k_{p,y}\begin{pmatrix}0\\ -y\end{pmatrix}+k_{d,y}\begin{pmatrix}0\\ -\dot{y}\end{pmatrix}\right] \tag{7}\] Here, \(\tau\) is the vector of desired joint torques, \(J^{T}(q)\) is the transpose of the hybrid Jacobian at the end-effector, and \(k_{p,y}\) and \(k_{d,y}\) are the Cartesian PD gains in \(y\) direction (refer to Fig. 3). As shown in (7), the PD terms of the Cartesian stiffness controller are only responsible for maintaining the \(y\) position of the end-effector, while the energy shaping control is used to apply the forces in \(x\) direction. #### II-B2 Jumping height estimation For the ES controller, a height feedback is necessary. Therefore, a proprioceptive height estimation has been implemented. During flight phase, no additional forces can be applied to the system. 
Hence, we can expect the base acceleration \(\ddot{x}\) to be roughly equal to the gravitational acceleration \(g\). Due to the proprioceptive feedback, we know the lift-off position \(x_{l}\) and velocity \(\dot{x}_{l}\). Thus, the current base height \(x\) during flight phase can be calculated with: \[x=\frac{1}{2}g\left(t^{2}-t_{l}^{2}\right)-g\,t_{l}\left(t-t_{l}\right)+\dot{x }_{l}\left(t-t_{l}\right)+x_{l} \tag{8}\] Here, \(t\) is the current time and \(t_{l}\) is the time at lift-off. During stance phase, the current height is calculated from forward kinematics. #### II-B3 Simulation and real system parameters The simulations of the ES controller have been performed using the PyBullet physics simulation [29]. For the simulation, a control frequency of \(400Hz\) has been used to control the joint torques. The parameters \(k_{j=0}=1.0,k_{p,y}=10.0,k_{d,y}=3.0\) were optimized manually. For the real system, the control frequency has been reduced to \(200\) Hz. ### _Reinforcement Learning_ #### II-C1 Problem Formulation The hopping leg problem is formulated as a Markov Decision Process (MDP), where the agent, i.e. the controller in this case, interacts with the environment, i.e. the leg and its surroundings. An MDP is given by a tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R})\), where \(\mathcal{S}\) is the set of states called the state space, \(\mathcal{A}\) is the set of actions called the action space, \(\mathcal{P}(s_{t+1}\mid s_{t},a_{t})\) the probability that taking action \(a_{t}\) in state \(s_{t}\) will lead to state \(s_{t+1}\), and \(\mathcal{R}(s_{t},a_{t},s_{t+1})\) the expected immediate reward for transitioning from \(s_{t}\) to \(s_{t+1}\) by taking the action \(a_{t}\). At each time step \(t\), an action \(a_{t}\sim\pi(a_{t}\mid s_{t})\) is sampled from the policy given the current state \(s_{t}\). The objective of RL is to optimize the policy \(\pi\) such that the expected return is maximized. From the variety of RL algorithms, we choose Soft Actor-Critic (SAC) [30], a state-of-the-art off-policy algorithm, since it is relatively sample-efficient, stable, and requires little to no hyperparameter tuning. SAC aims to maximize the expected reward while also maximizing the policy entropy \(\mathrm{H}\). The objective is formulated as \[\pi^{*}=\arg\max_{\pi}\,\underset{a\sim\pi}{\mathrm{E}}\left[\sum_{t=0}^{ \infty}\gamma^{t}\Big(\mathrm{R}(s_{t},a_{t},s_{t+1})+\alpha\mathrm{H}(\pi( \cdot\mid s_{t}))\Big)\right].\] Maximizing the entropy as a secondary objective leads to policies that are maximally variable while performing the task, making them intrinsically robust. For continuous actions, exploration is commonly done in action space. At each time step, a noise vector \(\epsilon_{t}\) is sampled from a Gaussian distribution and added to the action output, such that \(\pi(a_{t}\mid s_{t})\sim\mu(s_{t},\theta_{\mu})+\mathcal{N}(0,\sigma^{2})\), where \(\mu\) is the deterministic policy and \(\theta_{\mu}\) its parameters. We use the modification of generalized state dependent exploration (gSDE) [31]. Here, the noise vector is a function of the state and the policy features \(z_{\mu}(s_{t},\theta_{z_{\mu}})\), which is the last layer before the deterministic output \(\mu(s_{t})=\theta_{\mu}z_{\mu}(s_{t},\theta_{z_{\mu}})\), i.e. \(\epsilon_{t}(s_{t},\theta_{\epsilon})=\theta_{\epsilon}z_{\mu}(s_{t})\). With gSDE, the action for a given state \(s_{t}\) remains the same until the noise parameters are sampled again. 
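For orientation, a minimal sketch of how a SAC agent with gSDE exploration could be instantiated using the Stable-Baselines3 implementation is given below. The environment id, exploration resampling frequency, learning rate, and training budget are illustrative assumptions; only the algorithm choice (SAC with gSDE) and the four-layer MLP architecture described in the following subsection are taken from the text.

```python
import gymnasium as gym
from stable_baselines3 import SAC

# "HoppingLeg-v0" is a hypothetical id standing in for the custom MuJoCo hopping-leg environment.
env = gym.make("HoppingLeg-v0")

model = SAC(
    "MlpPolicy",
    env,
    policy_kwargs=dict(net_arch=[256, 256, 128, 128]),  # four hidden layers; ReLU is the SB3 default for SAC
    use_sde=True,           # generalized state-dependent exploration (gSDE)
    sde_sample_freq=8,      # resample exploration noise every few steps (assumed value)
    learning_rate=3e-4,     # assumed; not specified in the text
    verbose=1,
)
model.learn(total_timesteps=2_000_000)  # training budget assumed for illustration
```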
This gSDE scheme promotes more consistent exploration and results in reduced shaky behavior on hardware [31]. #### II-C2 Network architecture The policy is modeled with a multilayer perceptron (MLP) with four hidden layers of 256, 256, 128, and 128 neurons. The activation function is ReLU. The critic network is modeled by a separate network with the same architecture. LSTM policy networks were also tried but offered no empirical advantage. The policy is inferred at the operating frequency of \(200Hz\). #### II-C3 Observation and action space The hopping leg system has no additional sensors apart from the joint encoders. Thus, only normalized joint positions and velocities, and the desired jumping height over the last three time-steps \(t\), \(t-1\), and \(t-2\) constitute the observation state. Joint data over multiple time-steps is empirically found to be essential to produce the desired behaviour with implicit contact detection. Hence, the observation space is \(s\in\mathbb{R}^{3\times 5=15}\). The action space consists of the normalized output motor torques, which are later scaled up before being sent as the torque commands. The action space is thus \(a\in\mathbb{R}^{2}\). #### II-C4 Reward The total reward at each time step is a weighted sum of positive gains and negative penalties, encoding behaviours to be encouraged or precluded. The reward comprises the following components: Energy Gain (\(G_{e}\)): The agent is incentivized to maximize the kinetic and elastic potential energy of the leg at any given time step. The reasoning behind this term is an approximation of the leg by a spring with mean length \(x_{o}\). This reward term promotes an oscillatory behaviour leading to high enough velocities for hopping. The corresponding term is calculated as: \[G_{e}=\dot{x}^{2}+(x-x_{o})^{2} \tag{9}\] where \(x\) and \(\dot{x}\) are the base height and velocity, respectively. \(x_{o}\) is the base height for the initial standing position of the leg. Height barrier penalty (\(P_{h}\)): The agent is penalized exponentially when the base height crosses the desired height command \(x^{d}\). \[P_{h}=\begin{cases}1-e^{x-x^{d}},&\text{if }x\geq x^{d}\\ 0,&\text{otherwise}\end{cases} \tag{10}\] Jerky Action Penalty (\(P_{j}\)): Sudden changes in the output torques can cause shakiness in the hardware, making the policy hard to transfer. Therefore, the agent is penalized for large differences in consecutive actions. \[P_{j}=\sum_{i=0}^{2}(a_{t}^{i}-a_{t-1}^{i})^{2} \tag{11}\] Joint constraints penalty (\(P_{jp},P_{jv}\)): It is desired to keep the joint positions within some pre-defined constraints to avoid self-collisions and prevent arbitrary configurations. The joint velocities should be reasonably bounded for successful sim-to-real transfer. These constraints are imposed with negative penalties. The penalty for the position limit is structured such that it becomes significant around the limits and beyond them but stays reasonably low elsewhere. It is calculated as: \[P_{jp}=\sum_{i=0}^{2}\begin{cases}e^{-10(q_{i}-q_{i}^{l})}+e^{10(q_{i}-q_{i}^ {h})},&\text{if }q_{i}^{l}\leq q_{i}\leq q_{i}^{h}\\ 1,&\text{otherwise}\end{cases} \tag{12}\] Here, \(q_{i}^{l}\) and \(q_{i}^{h}\) denote the lower and upper joint limits, respectively. To reasonably constrain the search space for the agent, we used a PD controller to bring the joints back within bounds if joint limits are passed during training. The joint velocities are penalized if they cross the saturation limits for the motors. 
\[P_{jv}=\sum_{i=0}^{2}\begin{cases}0,&\text{if }-\dot{q}_{i}^{h}\leq\dot{q}_{i} \leq\dot{q}_{i}^{h}\\ \dot{q}_{i}^{2}-(\dot{q}_{i}^{h})^{2},&\text{otherwise}\end{cases} \tag{13}\] Here, \(\dot{q}_{i}^{h}\) is the maximum desired joint velocity. The final expected reward is calculated as: \[R=w_{1}G_{e}-w_{2}P_{h}-w_{3}P_{j}-w_{4}P_{jp}-w_{5}P_{jv} \tag{14}\] The weights used during training are \(w_{1}=0.5\), \(w_{2}=2\), \(w_{3}=0.05\), \(w_{4}=0.02\), and \(w_{5}=0.005\). ### _Simulation to Reality Transfer_ We use a custom gym [32] environment with the MuJoCo physics engine [33] for training in simulation. As explored in [34], MuJoCo is well suited for robotics and reinforcement learning problems as it provides a wide range of solver parameters and settings, which can be adapted and optimized for many use cases. The policies trained in MuJoCo with default model and simulation parameters failed to transfer to the hardware. Therefore, we optimised the simulation parameters to narrow the sim-to-real gap. #### II-D1 Simulation Parameter Optimisation The goal of this step is to match the simulation dynamics to the real robot using simple training trajectories. Trajectory Generation and Data Collection: A varied set of task-space, hand-tuned sinusoidal trajectories is generated for two different system configurations. These include a fixed-base configuration, where the leg is suspended in the air, and a moving-base configuration, where the leg comes in contact with the ground. For the fixed-base configuration, the template trajectories are: \[x =(A\cos\left(\frac{2\pi t}{T_{1}}\right)+\epsilon)\cos(\theta) \tag{15}\] \[y =(A\cos\left(\frac{2\pi t}{T_{1}}\right)+\epsilon)\sin(\theta)\] \[\theta =-\frac{\pi}{2}\cos\left(\frac{2\pi t}{T_{2}}\right)\] Here, \(\epsilon=L_{1}+L_{2}-A\) is the trajectory offset from the origin, given \(L_{1}\) and \(L_{2}\) are the shank and calf link lengths for the leg. The periods (\(T_{1},T_{2}\)) and the amplitude \(A\) are varied such that the maximum workspace of the leg is covered. The moving-base trajectory only consists of vertical trajectories with a few segments fast enough to break ground contact. These trajectories closely imitate the hopping configuration of the leg and are given as: \[x =A\cos\left(\frac{2\pi t}{T_{1}}\right)+\epsilon \tag{16}\] \[y =0\] The joint-level trajectories are obtained through inverse kinematics and tracked on the hardware with a PD controller running at a frequency of \(200Hz\). Fig. 5: Parameter optimization pipeline. The controller gains are fixed and the target velocity set to 0. For both the fixed-base and the floating-base configuration, 240 s of data was recorded where \(T_{1}\in\{0.75,0.5,0.25\}\), \(T_{2}\in\{10,20\}\), and \(A\in\{0.15,0.1,0.05\}\). Joint positions, velocities, and resulting motor torques are recorded on the actual hardware. Simulation Parameter Optimisation: Using the hardware trajectories, we optimize for the simulation's dynamics and solver parameters. We use the same PD controller running at the same frequency to track the generated trajectories in simulation, with gains adjusted for the motors' internal gear ratio. We optimize for the following set of simulation parameters: * _Dynamic Parameters:_ The simulation model's dynamic parameters to be optimized involve the friction loss, damping and armature (rotor inertia) values for the hip and knee motors, friction loss and damping for the rail, which is modeled by a passive prismatic joint, and the link inertias. 
* _Solver Parameters:_ Time constant and damping ratio are two of the solver parameters, characteristic of the mass-spring-damper constraint modeling of MuJoCo. These parameters are optimized to modulate the contact model between the leg and the plane. CMA-ES [35] is used to optimize these parameters with a cost on the cumulative joint position difference between simulation trajectories and recorded real hardware data at each time step. \[J(q)=\sum_{t=0}^{t_{f}}\sum_{i=0}^{1}(q_{i_{\text{sim}}}^{t}-q_{i_{\text{real} }}^{t})^{2} \tag{17}\] In high dimensional optimization problems, such as this, it can become hard for the solver to converge and find an optimal solution. Dynamic coupling also occurs between the parameters, potentially leading to low cost but poor transfer to the hardware. To prevent these issues, the parameters that can be roughly estimated during modeling, i.e. link inertia and solver parameters, are fixed in a first optimization pass. In a second pass, we optimize all dynamic and solver parameters while placing bounds on the friction, damping, and armature parameters derived from the first pass. This helps the optimization converge to an optimal and pragmatic solution. Fig. 5 visualizes the optimization procedure. #### II-D2 Policy Robustness Two methods are employed during training to make the policy robust to delays, noise, and other disturbances. Delays: As mentioned before, the observation space consists of sensor readings from the last three consecutive time steps. In addition, the observation data over the last ten time-steps is stored in a buffer. While training, with a probability of 0.5 at each time step, data of three time steps is randomly sampled from the buffer in correct temporal sequence and used as observation instead. This helps to simulate plausible delays on the real system, effectively making the policy more robust. Noise: Noise is added to the joint data and the torques given by the policy to simulate sensor noise and control inaccuracies. At each time step, the noise is sampled from a uniform distribution ranging from \(-\lambda u\) to \(\lambda u\), where \(\lambda\) is the error range and \(u\) is the observed value. We set \(\lambda=0.05\) for the joint positions and velocities, and \(\lambda=0.15\) for the output torques. ## III Results The optimization of simulation parameters described in Section II led to the parameters shown in Table I. Training for the jump heights \(0.25\) m, \(0.3\) m, and \(0.35\) m in simulation yielded a controller that is able to interpolate between these three desired jump heights, showing that the controller learned an approximation of the task space inverse dynamics for this problem. Figure 6 shows the jump height of a \(30\) s trial with an initial desired jump height of \(0.25\) m. Every \(5\) s the desired jump height is increased by \(0.02\) m. While there is a significant deviation of the actual average jump height especially for intermediate commands, the mapping from desired to actual jump heights is monotonic. To assess the implicit contact detection of the controller, we analyse the torque applied in simulation to the elbow joint for a \(10\) s trial with a commanded jump height of \(0.30\) m. Figure 7 shows the controller torque output for all encountered configurations in the phase space of the actuated elbow joint. It is evident that during the stance phase, the controller applies significantly higher torques than in the flight phase, to generate the lift-off. 
In addition, the control torque increases after the minimal altitude is reached. This strongly implies that the controller indeed detects ground contact, solely based on the proprioceptive observation of joint positions and velocities. The controller trained in simulation was tested on the real robot without further adjustment. Figure 8 shows the base height trajectories for the simulated and real robot for a trial with changing desired jump height. While the real jump height is lower than the commanded height, the ordering of jump heights is as intended, i.e., a higher desired height leads to a higher actual jump height. The offset between commanded and actual jump height lies between \(0.04\) m and \(0.06\) m. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Joint** & Friction loss & Damping & Armature \\ \hline Rail Prismatic Joint & \(0.7024\) & \(1.0724\) & - \\ \hline Hip Joint & \(0.4364\) & \(0.0005\) & \(0.00004\) \\ \hline Knee Joint & \(0.0015\) & \(0.1441\) & \(0.0001\) \\ \hline \end{tabular} \begin{tabular}{|c|c|} \hline **Parameter** & Value \\ \hline Hip Link Z Inertia & \(0.004061\) \\ \hline Knee Link Z Inertia & \(0.000845\) \\ \hline Time Constant & \(0.0911\) \\ \hline Damping Ratio & \(0.6678\) \\ \hline \end{tabular} \end{table} TABLE I: The simulation parameters obtained after CMA-ES optimization. We observe that the temporal structure of consecutive jumps differs between simulation and the real robot. For a desired jump height of \(0.25\) m, the real robot shows jump heights alternating between \(\approx 0.2\) and \(\approx 0.24\) m. For a commanded height of \(0.35\) m, the jump frequency on the real robot is slightly reduced, because two jumps, around 11 and 12.5 s, were 'skipped'. Note that the absolute values of the jump heights on the real system are not exact, since they are determined by tracking the center of the upper motor in video recordings. This induces some noise in the measurement. In addition, a small height-dependent parallax error can be expected. For a statistical analysis of the jump height distribution for varying height commands, the data of the 15 s trial shown in Figure 8 is merged with 15 s trials of fixed jump heights at \(0.25\) m, \(0.30\) m, and \(0.35\) m, both in simulation and on the real robot (also see the accompanying video available as the multimedia attachment). Figure 9 shows the resulting jump height distributions for the trained controller, along with simulation results for an energy shaping controller as reference. While the distributions of RL-based jumping are completely separated in simulation, the real system tests show overlaps between neighbouring jump height distributions. Fig. 8: Jumping heights of the simulated and real robot for 15 seconds. The commanded jump height is 0.25 m for the first 5 s, 0.30 m for the next 5 s, and 0.35 m for the last 5 s. Both in simulation and on the real robot, increasing the desired jump height leads to higher actual jumps. Fig. 6: Jumping heights of the simulated robot in a 30 second trial with monotonically increasing desired jump heights. The desired heights increase from \(0.25\) m to \(0.35\) m in increments of \(0.02\) m. The controller was trained with only three desired jump heights of \(0.25\) m, \(0.30\) m, and \(0.35\) m. Fig. 7: Controller torque output during \(10\) s of continuous jumping with a commanded height of \(0.30\) m in the phase space of the elbow joint. The black (grey) line is an example phase space trajectory for one flight (stance) phase. 
Circles and diamonds denote the starting and end points of the jump phase. Fig. 9: Jump height distributions for different commands, in simulation and on the real system. Colored lines show the medians of the distributions, gray lines the respective commanded height. In simulation, the distributions for the RL controller are completely non-overlapping, and thus clearly significantly different. On the real system, the distributions show some overlap. For comparison, the distributions of jump heights generated by an ES controller are also shown. This can at least partly be attributed to the noise induced by the pixel tracking used to estimate the heights in the real experiments. The bimodal nature of the distribution for a commanded height of \(0.25\) m is a consequence of the alternating jump heights, also seen in Figure 8. We use a Wilcoxon rank-sum test to evaluate the difference between neighbouring distributions. Both in simulation and on the real system, all neighbouring distributions are significantly different at \(p<0.001\). Our baseline energy shaping controller worked remarkably well both in simulation and on the real hardware (see Fig. 9). This is as expected, since it exploits the model knowledge and the physics that capture the essence of the jumping task. However, it requires expert knowledge to tune the contact detection threshold and other controller gains, which can be time-consuming. Our proposed end-to-end RL controller does not require such expert knowledge and demonstrates a similar trend for different jumping heights. The standard deviation in jumping height is even smaller in some cases, especially in simulation. However, there is substantial room for improvement in the performance of the RL controller on the real system in comparison to the baseline ES controller. ## IV Discussion The main objective was to find a jumping controller mapping proprioceptive feedback to torque control, while avoiding the height estimation and PD control strategies used by [22]. Thus it may seem counterintuitive that we impose soft joint limits with a PD controller and use the base height for reward calculation. However, since variable-height jumping cannot be defined without the notion of height, this information is strictly necessary for the agent such that the task space inverse dynamics can be approximated. We want to emphasize, though, that the height is only used in the reward during training and is not required as direct feedback to the controller. The soft joint limits serve as a gentle exploration-guiding strategy, similar to the initial example trajectories used by [22]. On the hardware, they are still in place for safety reasons, but are rarely crossed. Thus, our control approach can be considered truly end-to-end. In the following, further features and critical design decisions are discussed in more depth. A prerequisite for successful transfer to the real system is a small simulation-to-reality gap. Prior to developing the current approach, the more common technique of domain randomization [36] was also tested, which yielded unsatisfactory behavior transfer. Our approach is adapted from [24], who used simulation parameter optimization to make trajectories in simulation follow real recorded training data. We extend this approach by introducing a two-stage process for high-dimensional parameter spaces and showing its applicability to collision-rich and dynamic tasks. 
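To make the two-stage procedure more concrete, the sketch below shows how the cost of Eq. (17) could be minimised with CMA-ES. The `rollout_in_sim` helper, the recorded-data format, and the parameter vectors are illustrative placeholders (the ask/tell interface of the pycma package is assumed), not the actual implementation.

```python
import numpy as np
import cma  # pycma, assumed available via `pip install cma`

def make_cost(recorded_q, rollout_in_sim):
    """Eq. (17): cumulative squared joint-position error between the simulated
    and the recorded hardware trajectories, as a function of the parameters."""
    def cost(params):
        q_sim = rollout_in_sim(params)  # placeholder: MuJoCo rollout with the same PD tracking
        return float(np.sum((q_sim - recorded_q) ** 2))
    return cost

def cma_pass(x0, cost, bounds=None, sigma0=0.3):
    """One CMA-ES optimisation pass using the pycma ask/tell loop."""
    opts = {"bounds": bounds} if bounds is not None else {}
    es = cma.CMAEvolutionStrategy(list(x0), sigma0, opts)
    while not es.stop():
        solutions = es.ask()
        es.tell(solutions, [cost(np.asarray(x)) for x in solutions])
    return np.asarray(es.result.xbest)

# Pass 1: optimise friction loss, damping and armature while the link inertias
#         and solver parameters stay fixed at their modelled values.
# Pass 2: re-run cma_pass over all dynamic and solver parameters, with `bounds`
#         placed around the friction/damping/armature values found in pass 1.
```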
We chose to keep inertia parameters fixed in the first stage, since they can be reasonably well estimated from the structure, whereas other dynamical parameters are much harder to infer a priori. We also noted that identifying the rotor inertia was crucial. While not necessary for less dynamic behavior such as walking, rotor inertia becomes more influential for highly dynamic motions. The superior simulation to reality transfer can be explained by domain randomization leading to a trade off between generality over a range of parametrizations to optimality on the actual hardware, which has well defined parameters. In contrast, the method we propose is more akin to dynamic system identification. However, the target is not the true physical parameters, but the closest possible representation of the system dynamics within the simulation. To make the policy robust to expectable delays on the real system, we used random sampling from an observation buffer. This random sampling technique is easy to implement without having to know the exact delays and their distributions. An alternative approach would be to use an actuator model as suggested by [5] to learn quadruped walking. In their case, a good motor model was probably more relevant since the robot's legs use series elastic actuators, which are expected to have more complex delay dynamics. If this is not the case, we argue for our method as a simpler solution. The policy shows good interpolation performance for height values that were not explicitly included in the training. This suggests that the policy implicitly learned a task space inverse dynamics model of the system. This assumption is further supported by the implicit detection of different jump phases. However, height tracking shows relatively higher deviations at intermediate commands around 0.3 m. This could be an issue of the neural network not having enough capacity to represent the full dynamics. A thorough hyperparameter tuning of the network architecture could improve the results, but is out of scope for this paper. The remaining differences in the jump heights between simulation and reality can be a consequence of non-optimal dynamic parameters of the simulator. However, we noted that adding more data to the parameter optimization pipeline did not significantly change the optimization result. Another explanation could be additional, unmodelled non-linear dynamics such as motor backlash, motor torque saturation, or state dependent sensor noise. A strategy to improve performance without having to explicitly model these effects is to continue training the controller on the real system directly, using the current policy as a starting point. For this, the used SAC algorithm is particularly well suited [37]. ## V Conclusion In summary, we presented a method to train a unified torque controller for continuous hopping with a monoped robot. The controller is able to interpolate between jump heights and implicitly detect relevant jump phases and act accordingly. The simulation to reality mapping procedure eliminates the need of parameter tuning for behavior transfer. The trained policy realizes a direct mapping from proprioceptive feedback to torque control. To the authors' knowledge, this is the first reported end-to-end training procedure for a jump height adjustable monoped torque controller. However, much needs to be done to bring the height tracking accuracy of this approach closer to the model based energy shaping control. 
Future research directions include a thorough hyperparameter tuning of the neural network architecture to improve the jump height interpolation in simulation, as well as continued training on the real system to mitigate the effect of residual dynamics modeling inaccuracies of the simulator. We also plan to integrate this work into the RealAIGym ecosystem [38], similarly to other canonical underactuated systems such as the simple pendulum [39], the double pendulum [40], and AcroMonk [41].
2308.16795
Towards Multilingual Automatic Dialogue Evaluation
The main limiting factor in the development of robust multilingual dialogue evaluation metrics is the lack of multilingual data and the limited availability of open sourced multilingual dialogue systems. In this work, we propose a workaround for this lack of data by leveraging a strong multilingual pretrained LLM and augmenting existing English dialogue data using Machine Translation. We empirically show that the naive approach of finetuning a pretrained multilingual encoder model with translated data is insufficient to outperform the strong baseline of finetuning a multilingual model with only source data. Instead, the best approach consists in the careful curation of translated data using MT Quality Estimation metrics, excluding low quality translations that hinder its performance.
John Mendonça, Alon Lavie, Isabel Trancoso
2023-08-31T15:15:26Z
http://arxiv.org/abs/2308.16795v1
# Towards Multilingual Automatic Open-Domain Dialogue Evaluation ###### Abstract The main limiting factor in the development of robust multilingual open-domain dialogue evaluation metrics is the lack of multilingual data and the limited availability of open-sourced multilingual dialogue systems. In this work, we propose a workaround for this lack of data by leveraging a strong multilingual pretrained encoder-based Language Model and augmenting existing English dialogue data using Machine Translation. We empirically show that the naive approach of finetuning a pretrained multilingual encoder model with translated data is insufficient to outperform the strong baseline of finetuning a multilingual model with only source data. Instead, the best approach consists in the careful curation of translated data using MT Quality Estimation metrics, excluding low quality translations that hinder its performance. ## 1 Introduction Open-domain dialogue systems have gained substantial attention in the NLP (Natural Language Processing) and ML (Machine Learning) fields, thanks to their increasingly human-like behaviour [16, 20]. Their impressive generation capabilities can be attributed to new milestones in model development and scaling [1], and the amount of data used during training. Despite this research and development effort, advertised generation capabilities were only attainable in a select few languages (typically English or Chinese) due to low resources in dialogue for other languages [20]. More recently, however, the advent of LLMs (Large Language Models) finetuned with Reinforcement Learning from Human Feedback such as ChatGPT [23] has opened the path for high-quality and easily accessible multilingual dialogue generation. Similarly, automated open-domain dialogue evaluation has also been largely limited to evaluating a select few languages. Word-overlap based metrics from NLG (Natural Language Generation) such as BLEU [14] and METEOR [24] are agnostic to language, only requiring a reference response. However, these metrics are known to correlate poorly with human judgments due to the multifaceted nature of dialogue [15]. Reference-free metrics such as USR [13] and USL-H [22], however, require dialogue data for training. Considering most open-source dialogue data is in English, these models are expected to underperform significantly in other languages. Additionally, most open sourced dialogue systems are also limited to English, further disincentivising multilingual research. One solution to the issues previously mentioned is to leverage MT (Machine Translation). With MT services becoming more affordable and consistent, some authors resort to translation when developing their multilingual dialogue systems [27]. Figure 1: Proposed architecture. The original dialogue dataset is transformed into context-response pairs \((c_{n},r_{n})\) and translated using MT. The final dialogue submetric is trained using a combination of the original English data and the top \(k\) sentences or \((c_{n},r_{n})\) from each language, depending on the submetric. 2019; Anastasiou et al., 2022). This can either be included as a module in the system's pipeline - allowing the use of proven English generation models for other languages; or as a cross-lingual transfer method - by translating training data. In this paper, we extend the approach of training using data generated by MT for the development of multilingual models for evaluation of open-domain dialogue responses. 
We experiment with and evaluate several different possible workarounds for this problem. Namely, we leverage the availability of strong pretrained multilingual encoders as a foundation for training multilingual dialogue evaluation models. As a first step, we translate existing publicly-available English dialogue data into the target languages. We then explore multiple alternative ways to leverage this translated data in order to finetune and train monolingual and multilingual dialogue evaluation models for two specific dialogue submetrics. To address the impact of low quality translations, we propose using an MT Quality Estimation (QE) model to rank the translations and investigate the impact of finetuning models with varying amounts of quality-ranked data. Figure 1 illustrates the proposed approach. The performance of these alternative models is evaluated on a curated test set of dialogues which were human-annotated with dialogue quality scores for two subqualities. The original English test set was translated using MT and then post-edited by editors into six different target languages (PT-Portuguese, DE-German, FR-French, ZH-Chinese, ES-Spanish and JA-Japanese). The quality scores from the human annotations of the original English dialogues were then carried over to the target-language dialogues. Our finetuned multilingual dialogue evaluation models exhibit strong correlations with human judgements, comparable to LLMs, indicating it is possible to leverage multilingual dialogue evaluation metrics without the constraints LLMs currently possess (costs, latency, etc.). We hope this will encourage other researchers to update existing metrics using our proposed multilingual finetuning approach. In summary, the primary contributions of this work are as follow: * We evaluate cross-lingual transfer and translation augmented training approaches using MT for the task of training multilingual dialogue evaluation models, showing that, on average, the best performance is achieved by finetuning with subsets consisting of only the best translations. We found that, depending on the subquality and target language, the optimal amount of translated data can be as low as 5% and as high as 75%. * We translate and release DailyDialog and a corresponding test set of human quality annotations in 6 languages to facilitate future benchmarking of multilingual dialogue evaluation metrics1. Footnote 1: github.com/johndmendonca/DialEvalML ## 2 Background ### Open-Domain Dialogue Evaluation Metrics The recent trend in open-domain dialogue evaluation is to train dialogue submetrics using well-defined self-supervised tasks which correlate well with their corresponding subqualities. The most used self-supervised task is Next Sentence Prediction (NSP), as it is known to correlate well with subqualities that evaluate _"Context Awareness"_. Examples of this include: _Uses Context_Mehri and Eskenazi (2020), _Sensibleness_Phy et al. (2020); Mendonca et al. (2022) and _Relevance_Zhao et al. (2020); Zhang et al. (2022). Other subqualities include: _Fluency, Grammatically Correct_ or _Understanding_, which use word-level noising techniques to generate negative samples Phy et al. (2020); Mendonca et al. (2022); Zhang et al. (2022); and _Specificity_, which uses an MLM Masked Language Modelling) score Mehri and Eskenazi (2020); Phy et al. (2020); Zhang et al. (2022). For overall quality, these submetrics are typically combined using different methods (e.g. 
empirical observation, trained Linear Regression or multilayer perceptrons). To the best of our knowledge, there has not been any published research on cross-lingual transfer and/or development of trained multilingual metrics for open-domain dialogue evaluation. ### Multilingual Text Classification Despite the lack of research on multilingual dialogue evaluation, extending text classification to other languages is a well established subfield of research in NLP. The main constraint for multilingual performance parity is the lack of task-specific resources in the vast majority of written languages. Given the creation of these resources is both time consuming and expensive, most research effort has been geared towards general-purpose cross-lingual representations that are learned in an unsupervised way, therefore leveraging the unstructured data available in the wild. Large multilingual Transformer-based models (e.g mBERT, XLM-RoBERTa, and mT5) have been successfully used in a variety of classification tasks Conneau et al. (2020); Pires et al. (2019); Xue et al. (2021). The standard approach for cross-lingual transfer is to finetune on existing domain data in a source language and perform inference in a target language. However, this approach typically lags behind models specifically trained with in-domain (both task and language) data. As a solution to this problem, Pfeiffer et al. (2020) propose learning language-specific adapter modules via MLM on unlabelled target-language data followed by task-specific adapter modules by optimising a target task on labelled data in the source language. Task and language adapters are stacked, allowing cross-lingual transfer to the target language by substituting the target-language adapter at inference. Bornea et al. (2021) propose an augmentation strategy where a corpus of multilingual silver-labelled QA pairs is generated by combining the original English training data with MT-generated data. A language adversarial training and arbitration framework bring the embeddings closer to each other, making the model language invariant. To the best of our knowledge, there has not been any research on the utilization of MT Quality Estimation (QE) scoring as a means for identifying and demoting or excluding poorly translated data in such cross-language training scenarios. ## 3 Problem Formulation The goal of reference-free turn-level dialogue evaluation is, given a dialogue history (frequently denoted as context) \(c\) of varying amount of turns, and a response \(r\), to learn a scoring function that assigns a score \(f(c,r)\to s\). This scoring function is compared against human judgements, which annotate the same context-response pairs. These responses are evaluated using a scaling method, for instance, a binary \((0,1)\) judgement or a \([1,5]\) scale, where the lowest value means lowest quality and highest value maximum quality. The notion of quality varies wildly depending on the annotation. In this work, we evaluate dialogue in two dimensions: * **Understandability** An understandable response is one that can be understood without context. Such responses may contain minor typos that do not hinder the comprehension of the response. * **Sensibleness** A sensible response is one that takes into account its preceding context. Most automatic evaluation metrics reformulate the problem as regression. Performance is then evaluated using Pearson and Spearman correlations with human annotations. 
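As a small illustration of this evaluation protocol, the correlations can be computed directly with SciPy; the score lists below are dummy values, not results from the paper.

```python
from scipy.stats import pearsonr, spearmanr

def correlate(metric_scores, human_scores):
    """Pearson and Spearman correlation between metric outputs f(c, r) and
    human quality annotations for the same context-response pairs."""
    pearson, p_pr = pearsonr(metric_scores, human_scores)
    spearman, p_sp = spearmanr(metric_scores, human_scores)
    return {"pearson": pearson, "spearman": spearman, "p_values": (p_pr, p_sp)}

# Dummy example with five context-response pairs:
print(correlate([0.1, 0.8, 0.4, 0.9, 0.3], [1, 5, 2, 4, 2]))
```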
### Automatic Dialogue Evaluation Metrics The majority of competitive metrics for dialogue evaluation include models trained in a self-supervised way for Valid Sentence Prediction (VSP) and Next Sentence Prediction (NSP) (Yeh et al., 2021; Zhang et al., 2021). As such, the focus of this work was to evaluate multilingual dynamics for these models, which can then be employed in existing metrics. VSP: Valid Sentence Prediction. In this paper, we followed the approach used by Phy et al. (2020) and initially proposed by Sinha et al. (2020). A regression model was trained to differentiate between positive samples and synthetic negative samples. **Positive** samples are perturbed by randomly applying one of the following: (1) no perturbation, (2) punctuation removal, (3) stop-word removal. **Negative** samples are generated by randomly applying one of the following rules: (1) word reorder (shuffling the ordering of the words); (2) word-drop; and (3) word-repeat (randomly repeating words). NSP: Next Sentence Prediction. The task of predicting sensibleness can be considered a binary NSP task, distinguishing a positive example from a semantically negative one, given a context. A discriminative regression model was trained using the following sampling strategy: **positive** responses are drawn directly from the dialog; **negative** responses are randomly selected, and a token coverage test discards semantically similar sentences. All responses are processed using the positive-sample heuristic used by VSP. ## 4 Cross-lingual Transfer Learning The goal of the experiments described in this section was to evaluate different basic approaches to cross-lingual transfer for the task of automatic dialogue evaluation. For encoder model training, we leveraged Machine Translation (MT) by fully translating an English source dialogue dataset and then finetuning monolingual and multilingual models using these translations. ### Experimental Setup #### 4.1.1 Dataset All experiments in this paper were based on the **DailyDialog** (Li et al., 2017) dataset, a high-quality human-human open-domain dialogue dataset focused on day-to-day conversations. After processing, we obtained train/dev splits of 58,515/25,078 and 89,707/38,449 per language for the VSP and NSP models, respectively. For training and evaluation, the post-processed dataset was translated into the target languages using MBART50 (Liu et al., 2020). We opted for MBART50 as it is a relatively lightweight open-sourced model with large language coverage. For the test set, we leveraged the annotations from Phy et al. (2020). These human annotations cover five responses for each of 50 contexts: two from retrieval methods, two from generative methods, and one human-generated response. These responses were annotated in terms of _Understandability_ and _Sensibleness_2. We translated this set using Unbabel's3 translation service. A total of 300 sentences were translated, corresponding to the 50 shared contexts and 250 responses. The translations were then split into smaller tasks and were corrected by editors from a commercial provider. Editors were specifically asked to retain any source disfluencies or hallucinations stemming from low-quality response generation (e.g. _"I'm afraid you can't."_, _"Au contraire, you need to be a bahn."_). This ensured the original human quality annotations remained valid for the translation. A secondary senior editor reviewed the edited content as a whole. 
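Before turning to the models, note that the training pairs for both submetrics are built from these (translated) dialogues using the sampling heuristics of Section 3.1. A minimal sketch of the VSP perturbations is given below; the stop-word list and the random-number handling are simplified placeholders, not the exact implementation.

```python
import random
import string

STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and"}  # simplified placeholder list

def vsp_positive(text, rng=random):
    """Positive VSP sample: no perturbation, punctuation removal, or stop-word removal."""
    choice = rng.choice(["none", "punctuation", "stopwords"])
    if choice == "punctuation":
        return text.translate(str.maketrans("", "", string.punctuation))
    if choice == "stopwords":
        return " ".join(w for w in text.split() if w.lower() not in STOP_WORDS)
    return text

def vsp_negative(text, rng=random):
    """Negative VSP sample: word reorder, word drop, or word repeat."""
    words = text.split()
    if not words:
        return text
    choice = rng.choice(["reorder", "drop", "repeat"])
    if choice == "reorder":
        rng.shuffle(words)
    elif choice == "drop" and len(words) > 1:
        del words[rng.randrange(len(words))]
    else:
        i = rng.randrange(len(words))
        words.insert(i, words[i])  # repeat a randomly chosen word
    return " ".join(words)
```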
Footnote 2: Annotations for _Specificity_ and _Overall Quality_ were also conducted, but were excluded since they do not map to the learned metrics under study. Footnote 3: unbabel.com #### 4.1.2 Finetuned Encoders We used XLM-RoBERTa (Conneau et al., 2020) as the encoder model for the experiments. This model is the multilingual version of RoBERTa, pretrained on CommonCrawl data containing 100 languages. For both the VSP and NSP models, we added a regression head on top of the encoder model. EN - Zero-shot inference. As a baseline for our results, we conducted zero-shot inference on the target languages using a model finetuned only on the original English data. LANG - Target-Language Finetuning. We finetuned the encoder with target-language translated dialogue data only. The downside of this approach is that a unique model needs to be trained for each target language. However, this method can be scaled to every language, including new ones, and is optimised to perform best in that language. ML - Multilingual Finetuning. Instead of finetuning a new model for each target language, one can finetune a single multilingual model by combining all of the translated data. In this case, the resulting single trained model is then used to evaluate responses in all languages. However, its performance may suffer in languages it has not seen during finetuning, even if they are supported by the encoder model. Furthermore, unlike the target-language finetuned models, the multilingual model is optimised jointly for all included languages. MAD-X. In this approach, we trained a VSP and an NSP task adapter using the original English data by stacking the task adapter with a pretrained English language adapter (kept frozen during training). For zero-shot inference, the English language adapter was replaced by the target-language counterpart, while keeping the trained task adapter in place. #### 4.1.3 Large Language Model As an additional strong baseline, we leveraged gpt-3.5-turbo (colloquially known as ChatGPT) as an evaluator of Understandability and Sensibleness. The context (exclusively for Sensibleness) and the response were provided as input, together with the prompt _"[Given the context,] evaluate from 1-5 the response in terms of [dimension]. Provide the score and nothing else."_. This prompt, paired with a temperature setting of 0.0, attempts to minimise the variability of the output. Nevertheless, we report a standard deviation of (.003,.003) and (.001,.001) for the Understandability and Sensibleness correlations, respectively, across 3 runs. ### Results The correlation results for all subqualities and the overall quality are presented in Table 1. Understandability. The results show that, on average, the best-performing encoder approach is zero-shot inference using the English model (**EN**). Both the target-language finetuning (**LANG**) and multilingual finetuning (**ML**) approaches have much lower performance, indicating that translation augmentation is detrimental for this task. We also note that the **MAD-X** approach, although performing slightly better than ML and LANG, still lags behind EN considerably. In any case, ChatGPT largely outperforms the other models on both metrics. Sensibleness. The best-performing encoder approach for this subquality is LANG. Intuitively, this makes sense, given that during finetuning the model is exposed to target-language data for the language it is being evaluated on. 
Furthermore, the performance differences between the approaches are much smaller, which indicates that the Sensibleness subquality is less sensitive to MT quality. When comparing these results with ChatGPT, we observe a much smaller performance gap, with the best encoder models slightly outperforming it on Spearman correlation. ## 5 MT Quality-aware finetuning The effect of noise introduced into the training data is a subject of intense research in the literature (Zhang et al., 2017; Hu et al., 2020; Swayamdipta et al., 2020). It is expected that, for this task, noise is introduced by low-quality translations, reducing the performance of trained models. This issue was identified in Section 4, where, for the VSP model in particular, the models trained using translations performed much worse than the baseline approach. Our hypothesis is that some translations heavily disrupt morphosyntactic cues used to infer response fluency, as shown in Table 2. We acknowledge that these low-quality translations may also reduce the quality of the response by disrupting keywords that point to the context (which is important for Sensibleness), or even more subtle quality cues (e.g. loss of empathy, inconsistency with named entities). However, the NSP model is trained to discriminate between the original response and a randomly selected response from the corpus. As such, the model's prediction will remain invariant to most translation errors. These observations, paired with the fact that encoder models only slightly underperform ChatGPT (a much larger and more expensive model), motivate the work described in this section. We hypothesise that, by ameliorating the MT noise through identifying and filtering out low-quality translations, the encoder models can outperform LLMs such as ChatGPT at a fraction of the cost. Since no references are available, an automatic MT QE metric (Specia et al., 2018) is used for this purpose. Formally, an MT QE model is a scoring function that assigns a score given a source sentence and a hypothesis translation. The unbounded and uncalibrated nature of this score across languages would require a cumbersome per-language analysis to determine a filtering threshold. Instead, we propose to use QE scores to rank responses within each target language. This provides a standardised filtering method and improves the scalability of this method to new languages. ### Experimental setup In order to confirm our hypothesis, we retrained all models using different amounts of translated data (100, 75, 50, 20, 10 and 5%). The ranking of the translations was conducted by scoring them using the WMT20 COMET-QE-DA model (Rei et al., 2020). For the VSP model, we ranked the individual sentences, and then applied negative sampling. For the NSP model, we ranked the positive and negative samples separately and then merged them together. Figure 3 presents the unnormalised score boxplot per language for all sentences (contexts and responses) in DailyDialog. One of the things we noticed when finetuning the monolingual models was that the VSP models had large variations in performance. This can be attributed to (1) the low amount of training data, especially when using very few examples (5%, 10%), and (2) low-quality translations, which is the research question this experiment attempts to answer. 
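As an illustration of this ranking step, selecting the best-scored fraction of translations for one language could be sketched as follows; `qe_score` is a placeholder standing in for segment-level scoring with a reference-free QE model such as WMT20 COMET-QE-DA, not the model's actual API.

```python
def filter_by_qe(pairs, qe_score, keep_fraction=0.05):
    """Keep only the best-scored fraction of (source, translation) pairs for one language.

    `qe_score(src, hyp)` is assumed to wrap a reference-free QE model and to
    return higher scores for better translations."""
    ranked = sorted(pairs, key=lambda p: qe_score(p[0], p[1]), reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]

# keep_fraction=0.05 corresponds to the 5% setting, 0.75 to the 75% setting;
# ranking is done per target language, so scores need not be comparable across languages.
```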
Since the true impact of low-quality translations is obfuscated by other factors, we decided to finetune the LANG models starting from the EN checkpoint instead of the pretrained XLM-RoBERTa, and include the zero-shot results as 0%. ### Results LANG. For the monolingual models, we plot in Figure 2 the normalised correlation results against the amount of MT data used during finetuning. The _Understandability_ correlation results show that the optimal amount of translated data is language-dependent, but with a clear indication that the inclusion of more translations decreases performance significantly. Figure 3: MT QE unnormalised score boxplot per language. Figure 2: Normalised Pearson and Spearman correlation for the Understandability and Sensibleness submetrics with varying amounts of translated training data. Numeric results available in Appendix B. Instead, a lower amount of translations (5-10%) yields optimal performance. This shows that this small finetuning step is essentially adapting a model that was already finetuned for the downstream task to the target-language domain. For _Sensibleness_, we see that the inclusion of more translations yields the best results. As such, we can conclude that low-quality MT does not adversely affect performance. We hypothesise this is due to MT being able to correctly translate keywords that indicate context awareness. Since we are only concerned about relevance, the overall sentence may still contain MT errors and be scored highly. ML. The correlation results for the multilingual models are presented in Table 3. For _Understandability_, we note that, on average, and similarly to LANG, the best performance is attained with the minimum amount of translated data (ML-5), with the performance decreasing when more translations are added. Comparing these results with ChatGPT, we observe an improvement in performance, but our encoder models are still generally weaker when using Spearman as a metric. For _Sensibleness_, decreasing the amount of data reduces the performance of the model. However, we note a decrease in performance when including the full amount of translated data (ML-100). This may be due to the inclusion of the worst translations - typically hallucinations - which is compounded by training on all languages. Unlike for Understandability, here we see that ChatGPT still outperforms the best encoder model in terms of Pearson correlation. ### Effect of low-quality translation during prediction One might ask if a low-quality translation can induce the submetrics to output a different score. Intuitively, we hypothesise that each model will attribute different scores in the face of low-quality translations. More specifically, given the results presented in previous sections, we expect the test prediction error to be: * **Negatively correlated with the MT QE scores for VSP.** We know this model is highly sensitive to low-quality translations, since MT errors frequently affect the fluency of the response (as identified in previous sections); * **Weakly correlated for the NSP model.** The model showed robustness when including more translations during training, with performance decreasing only when all translations (ML-100) were included. In order to evaluate these assumptions, the correlation plots of the MT QE z-scores (obtained independently for each language) against the submetric absolute error using the best ML models (ML-5 for VSP and ML-75 for NSP) for the test set are presented in Figure 4. 
For the _Understandability_ subquality, we note that there is a slight negative correlation between the absolute error and the MT QE score. This is also confirmed by a calculated Pearson Correlation value of -0.245. For the _Sensibleness_ subquality, the relationship between these two measures is less obvious. For instance, we note that, unlike for Understandability, maximum deviations \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} & \multicolumn{2}{c|}{**EN**} & \multicolumn{2}{c|}{**PT**} & \multicolumn{2}{c|}{**DE**} & \multicolumn{2}{c|}{**FR**} & \multicolumn{2}{c|}{**ZH**} & \multicolumn{2}{c|}{**ES**} & \multicolumn{2}{c|}{**JA**} & \multicolumn{2}{c}{**AVG**} \\ & **Pr.** & **Sp.** & **Pr.** & **Sp.** & **Pr.** & **Sp.** & **Pr.** & **Sp.** & **Pr.** & **Sp.** & **Pr.** & **Sp.** & **Pr.** & **Sp.** & **Pr.** & **Sp.** \\ \hline \multicolumn{11}{c}{**Understandingability**} \\ \hline **0** (EN) &.376 &.187 &.366 &.167 &.328 &.172 &.351 &.120 &.318 &.202 &.342 &.204 &.204 &.176 &.327 &.194 \\ **5** &.403 &.182 &.490 &.219 &.344 &.172 & **.385** &.091 &.320 & **.235** & **.429** &.236 & **.230** & **.179** & **.372** & **.211** \\ **10** &.377 &.180 & **.514** &.227 & **.381** &.193 &.294 &.097 & **.338** &.214 &.385 &.212 &.216 &.175 &.358 &.206 \\ **20** & **.384** &.177 &.478 &.236 &.333 &.203 &.153 &.087 &.318 &.219 &.315 &.214 &.174 &.168 &.308 &.202 \\ **50** & **.413** &.201 &.481 & **.242** & **.381** &.213 &.103 &.053 &.310 &.200 &.315 &.221 &.219 &.149 &.317 &.200 \\ **75** &.311 &.145 &.247 &.211 &.320 &.195 &.047 &.048 &.163 &.149 &.111 &.198 &.108 &.127 &.187 &.158 \\ **100** &.336 &.117 &.176 &.167 &.262 &.150 &.012 &.015 &.225 &.138 &.117 &.158 &.091 &.092 &.174 &.126 \\ **ChatGPT** &.397 & **.334** &.365 &.230 &.332 &.363 &.367 &.273 &.276 &.187 &.394 & **.263** &.258 & **.223** & **.337** & **.364** \\ \hline \multicolumn{11}{c}{**Sensibleness**} \\ \hline **0** (EN)** &.683 &.676 &.636 &.651 &.657 &.655 &.646 &.656 &.640 &.656 & **.646** &.657 &.590 &.599 &.649 \\ **5** &.637 &.674 &.629 &.632 &.627 &.648 &.637 &.656 &.629 &.646 &.626 &.647 &.567 &.596 &.621 &.640 \\ **10** &.642 &.675 &.639 &.664 &.661 & _.669_ &.636 &.661 &.637 &.656 &.635 &.668 &.575 &.604 &.632 &.654 \\ **20** &.650 &.689 &.627 &.670 &.649 &.681 &.627 &.666 &.621 &.661 &.637 &.673 &.568 &.614 &.626 &.660 \\ **50** &.667 &.691 & **.642** &.687 &.650 &.672 &.621 &.662 &.662 &.662 &.663 &.673 &.600 & **.642** &.637 &.666 \\ **75** &.677 &.712 &.629 & **.694** &.679 & **.702** &.633 & **.679** & **.661** & **.673** & **.643** & **.695** &.593 &.635 &.645 & **.679** \\ **100** &.651 &.691 &.606 &.675 &.634 &.680 &.605 &.669 &.642 &.667 &.596 &.676 &.599 &.637 &.619 &.664 \\ **ChatGPT** & **.746** & **.724** &.636 &.262 & **.683** &.675 & **.695** &.666 &.655 &.645 &.680 &.577 & **.625** &.610 & **.74** &.662 \\ \hline \end{tabular} \end{table} Table 3: Average correlation results across 3 runs with different seeds for multilingual models when varying the amount of translated data. are spread evenly across the QE scale, which points to the model erroneously predicting Sensibleness irrespective of the translation quality. Conversely, we also note a higher density of accurate predictions with lower QE scores. These results, paired with the calculated Pearson Correlation value of -0.129, confirm our hypothesis that the NSP model is more agnostic of MT quality than VSP. ### Example test predictions We present representative examples of our best ML models' prediction (ML 5/75) in Table 4. 
In the first example, the baseline English model fails to appropriately identify the understandability of the response. In the second example, we see that the multilingual model is able to correctly identify that the response takes into account the job presented in the context (manager) by complimenting it ("fantastic job"), something the EN model failed to capture. ## 6 Conclusions This paper explored the use of cross-lingual knowledge transfer for the novel task of automatic multilingual dialogue evaluation. We evaluated different strategies for this task, including zero-shot inference, MAD-X, and Machine Translation augmentation. Empirically, we showed that the naive approach of leveraging MT for augmentation is insufficient to outperform the baseline of English finetuning with a multilingual encoder-based LM, let alone a strong LLM. Instead, by filtering out low-quality translations, we were able to reduce the performance gap with ChatGPT, outperforming it on select correlation metrics. Experimental results showed that we obtain the best performance when training encoder models with the following proportions of QE-ranked translated data: 5% for Understandability and 75% for Sensibleness. One could argue that the notion of quality is intrinsically related to cultural norms. For instance, Japanese speakers may prefer a polite conversation, whereas German speakers might prefer a more direct interaction. A future research direction is to evaluate generative model responses in different languages using annotators exposed to the culture associated with a given language. In addition to ensuring the evaluation of the response meets the criteria of "quality" in different cultures, it would also allow for a qualitative analysis of the differences in the notion of quality between languages. \begin{table} \begin{tabular}{l c} \multicolumn{3}{l}{**CTX:** Também me apercebi desta questão. E a automatização dos processos do escritório é essencial. \\ **RES:** Sim, fazer tudo manualmente demora demasiado. \\ **EN-VSP:** 394 & **EN-NSP:** 3824 \\ **ML-VSP:** 1.00 & **ML-NSP:** 1.00 \\ **Unders.:** 1.00 & **Sensibl:** 0.00 \\ \hline \hline **CTX:** Ja, ich leite die Jungs am Kai. \\ **RES:** Wow, das klingt nach einem fantastischen Job, den du da bekommen hast. \\ **EN-VSP:** 963 & **EN-NSP:** 315 \\ **ML-VSP:** 941 & **ML-NSP:** 981 \\ **Unders.:** 1.00 & **Sensibl:** 1.00 \\ \end{tabular} \end{table} Table 4: Examples of subquality predictions from the test set. Figure 4: Scatter plot comparing the test set MBART50 per-language QE z-scores (x-axis) versus the per-sample Absolute Prediction Error (y-axis in log scale) for the Understandability and Sensibleness subqualities. ### Limitations Perhaps the main limitation of this work is the restricted number of languages studied. Ideally, we would have used a more comprehensive set of languages, including low-resource ones, to evaluate the consistency of the conclusions drawn from the experiments. Another limitation is the focus on a single open-domain dialogue dataset. Dialogue evaluation metrics are known to correlate poorly when evaluated on unseen datasets (Yeh et al., 2021). As such, it is not certain that the observations presented in this work would hold for other datasets, or even different annotations (Mehri et al., 2022). Finally, the pretrained encoder, MT, and QE models used in this work are not fully representative of all available models. We acknowledge that the optimal amount of filtering is likely to be different, depending on the combination of models used. 
## Ethics Statement This work leverages dialogues and annotations developed exclusively by English-speakers. This introduces an English-centric bias with respect to the notion of quality (and subqualities) in dialogues. Although not evaluated in depth in this work, there could be a chance that the models erroneously yield lower scores to responses not conforming to English notions of quality responses. The original dialogue dataset and generated responses were checked for personally identifiable information or offensive content by the original authors. Although highly unlikely, we acknowledge the translations may contain offensive content resulting from decoding. The post-editing conducted in this work used a crowdsourcing platform that awarded users a fair wage according to their location. ## Acknowledgements This research was supported by the Portuguese Recovery and Resilience Plan through project C645008882-00000055 (Responsible.AI), and by national funds through _Fundacao para a Ciencia e a Tecnologia_ (FCT) with references PRT/BD/152198/2021 and UIDB/50021/2020, and by the P2020 program MAIA (LISBOA-01-0247-FEDER-045909).
2309.08682
A conformal Hopf-Rinow theorem for semi-Riemannian spacetimes
The famous Hopf-Rinow Theorem states, amongst others, that a Riemannian manifold is metrically complete if and only if it is geodesically complete. The Clifton-Pohl torus fails to be geodesically complete proving that this theorem cannot be generalized to compact Lorentzian manifolds. On the other hand, Hopf and Rinow characterized metric completeness also by properness. Garc\'ia-Heveling and the author recently obtained a Lorentzian completeness-compactness result for open manifolds with a similar flavor. In this manuscript, we extend the null distance used in this approach and our theorem to proper cone structures and to a new class of semi-Riemannian manifolds, dubbed $(n-\nu,\nu)$-spacetimes. Moreover, we demonstrate that our result implies, and hence generalizes, the metric part of the Hopf-Rinow Theorem.
Annegret Burtscher
2023-09-15T18:13:03Z
http://arxiv.org/abs/2309.08682v3
# A conformal Hopf-Rinow Theorem for semi-Riemannian spacetimes ###### Abstract. The famous Hopf-Rinow Theorem states, amongst others, that a Riemannian manifold is metrically complete if and only if it is geodesically complete. The compact Clifton-Pohl torus fails to be geodesically complete suggesting that this theorem cannot be generalized to Lorentzian manifolds. On the other hand, Hopf and Rinow characterized metric completeness also by properness. The author and Garcia-Heveling recently obtained a Lorentzian completeness result with a similar flavor. In this manuscript, we extend our theorem to cone structures and to a new class of semi-Riemannian manifolds, dubbed \((n-\nu,\nu)\)-spacetimes. Moreover, we demonstrate that our result implies, and hence generalizes, the metric part of the Hopf-Rinow Theorem. Key words and phrases:Hopf-Rinow theorem, semi-Riemannian manifolds, metric geometry, completeness, cone structure, conformal structure, causality theory, global hyperbolicity, spacetimes, time functions, null distance, vector fields, tangent frame, parallelizability 2020 Mathematics Subject Classification: 53C50 (primary), 53C23, 53C18, 57R25 _Acknowledgments:_ The author would like to thank Jim Isenberg for encouraging her to explore the semi-Riemannian setting and Steffen Sagave for discussing some references in algebraic topology. This project was supported by the Dutch Research Council (NWO), Project number VI.Veni.192.208 ## 1. Introduction In 1931 H. Hopf and his student W. Rinow published their famous completeness result for Riemannian manifolds. Throughout we consider a manifold to be smooth, paracompact, Hausdorff, connected, finite-dimensional, and without boundary. **Theorem 1.1** (Hopf-Rinow [27]).: _Let \((\Sigma,\sigma)\) be a Riemannian manifold. The following statements are equivalent:_ 1. \((\Sigma,\sigma)\) _is geodesically complete for some_ \(p\in\Sigma\)_, i.e., all geodesics through_ \(p\) _are defined for all times._ 2. \((\Sigma,\sigma)\) _is_ geodesically complete_, i.e., all geodesics are defined for all times._ 3. \((\Sigma,d_{\sigma})\) _is a_ complete metric _space._ 4. \((\Sigma,d_{\sigma})\) _satisfies the_ Heine-Borel property _(also called_ proper _or_ boundedly compact_), i.e., every closed and bounded subset of_ \(\Sigma\) _is compact._ Originally, Hopf and Rinow called (a\({}_{0}\)) the abatability postulate (_Abtragbarkeitspostulat_), (a) the infinity postulates (_Unendlichkeitspostulat_), (b) the completeness postulate (_Vollstandigkeitspostulat_), and (c) the compactness postulate (_Kompaktheitspostulat_). Most differential geometers focus on and apply the equivalence (a)\(\Longleftrightarrow\)(b). The metric aspects already became more central in a generalization of the Hopf-Rinow Theorem to locally compact length metric spaces by S. Cohn-Vossen. In this manuscript we argue that the equivalence (b)\(\Longleftrightarrow\)(c) is most important and far more robust when leaving the positive definite Riemannian world in negative directions. We refer to Theorem 1.1 (b)\(\Longleftrightarrow\)(c) also as the _metric Hopf-Rinow Theorem_ or the _Riemannian completeness-compactness Theorem_. In recent work the author and L. Garcia-Heveling proved the following Lorentzian completeness-compactness theorem hoping to carry over the equivalence (b)\(\Longleftrightarrow\)(c) to situations with _one_ negative direction (see Section 2 for all notions). 
**Theorem 1.2** (Burtscher-Garcia-Heveling [17]).: _Let \((M,g)\) be spacetime, i.e., a time oriented Lorentzian manifold. The following are equivalent:_ * _There exists a time function_ \(\tau\colon M\to\mathbb{R}\) _such that_ \(M\) _equipped with the corresponding null distance_ \(\hat{d}_{\tau}\) _is a complete metric space._ * \((M,g)\) _is globally hyperbolic._ Can we view Theorem 1.2 as the Lorentzian analogue or generalization of the metric Hopf-Rinow Theorem 1.1? In the present manuscript we show that the answer is yes. For a start, the metric Hopf-Rinow Theorem immediately appears to be more special and fragile than Theorem 1.2. This is because the notions (time functions, null distance, global hyperbolicity) relevant for (b') and (c') are _conformally_ invariant while the Riemannian distance function is only preserved by isometries. In Section 3 we demonstrate that Theorem 1.2 is indeed the correct extension of the metric Hopf-Rinow Theorem 1.1 (b)\(\Longleftrightarrow\)(c) by proving that both properties are (independently) equivalent to their Lorentzian counterparts when restricting to Lorentzian products. **Theorem 1.3**.: _Let \((\Sigma,\sigma)\) be a Riemannian manifold and \((\Sigma^{\prime},\sigma^{\prime})=(\mathbb{R}\times\Sigma,-dt^{2}\oplus\sigma)\) be the corresponding Lorentzian product. Then \((\Sigma^{\prime},\sigma^{\prime})\) is a stably causal spacetime and the following statements hold:_ * \((\Sigma,d_{\sigma})\) _satisfies the Heine-Borel property if and only if the causal diamonds of_ \((\Sigma^{\prime},\sigma^{\prime})\) _are compact, i.e.,_ \((\Sigma^{\prime},\sigma^{\prime})\) _is globally hyperbolic._ * \((\Sigma,d_{\sigma})\) _is a complete metric space if and only if_ \((\Sigma^{\prime},\hat{d}_{t})\) _is a complete metric space with respect to the canonical time function_ \(t(p_{0},p_{\Sigma})=p_{0}\)_._ In the product case it is furthermore already known that (a) geodesic completeness of \((\Sigma,\sigma)\) is equivalent to (a') geodesic completeness of \((\Sigma^{\prime},\sigma^{\prime})\), and equivalent to global hyperbolicity of \((\Sigma^{\prime},\sigma^{\prime})\) (see, for instance, [7, Theorems 3.66 and 3.67]). We thus know that in the Lorentzian product case (and only then!) all properties are equivalent independent of each other: While the equivalence of (a)\(\Longleftrightarrow\)(c') carries over to warped products \(M\times_{f}\Sigma\) for \((M,g)\) a spacetime, even for warped products with compact Riemannian slices (a)\(\not\Longrightarrow\)(a') (see [7, Theorem 3.68] and preceding discussion). More generally, the singularity theorems of Penrose and Hawking show that (c')\(\not\Longrightarrow\)(a') and due to Anti-de Sitter space (a')\(\not\Longrightarrow\)(c'). It was very recently shown by Sanchez [41, Section 6.4] that for general spacetimes also (c')\(\not\Longrightarrow\)(a) for its slices. Thus we are forced to let go of geodesic completeness (a) and (a') in semi-Riemannian geometry, have to work with other features instead and develop new and different tools than those that are used in Riemannian geometry. That lifting and projecting the properties relevant for the metric Hopf-Rinow Theorem 1.1 and the corresponding Lorentzian Theorem 1.2 is doable nonetheless gives hope that it is possible to iterate this procedure and add _arbitrarily many_ negative directions. 
The first obstacle we face is the lack of an analogue of causality theory for semi-Riemannian manifolds, meaning that none of the notions used in (b') and (c') yet exist. To this end we introduce and analyze in Section 5 a new subclass of semi-Riemannian manifolds by generalizing the concept of a Lorentzian \((n-1,1)\)-spacetime to semi-Riemannian manifolds of any index \(0\leq\nu\leq n\) as follows. **Definition 1.4**.: Let \((M,g)\) be a semi-Riemannian manifold with constant index \(0\leq\nu\leq n=\dim M\). We say that \(M\) is _time frame orientable_ if it admits \(\nu\) continuous vector fields \(X_{i}\in\mathfrak{X}(N)\) that satisfy \(g(X_{i},X_{i})<0\) and are linearly independent on each tangent space \(T_{p}M\), \(p\in M\). If \((M,g)\) is time frame orientable and equipped with a fixed set \(X=\{X_{i}\,;\,i=1,\dots,\nu\}\) of such vector fields, we say that it is _time frame oriented_ and call \((M,g,X)\) a _semi-Riemannian spacetime_ or, more specifically, a \((n-\nu,\nu)\)_-spacetime_. Whether a given manifold \(M\) admits such a structure is, as in the Lorentzian case, a purely topological question. But unlike in the Lorentzian case it is a question that does not yet have a definite answer for most (classes of) manifolds. That the existence problem for \(\nu\geq 2\) is more subtle is already evident from the fact that _not_ every manifold admitting a semi-Riemannian metric also admits a (possibly different) semi-Riemannian spacetime metric (\(\mathbb{S}^{2}\times\mathbb{S}^{2}\) is a counterexample, as explained in Example 5.8). On the other hand, we show in Theorem 5.3 that the existence of semi-Riemannian \((n-\nu,\nu)\)-spacetime structure is equivalent to the existence of a _tangent \(\nu\)-frame_ on a manifold \(M\), i.e., \(\nu\) linearly independent vector fields. In 1935 E. Stiefel asked in his seminal thesis [45], supervised by H. Hopf, precisely this question (translated from German): Is there a system of \(\nu\) continuous vector fields on \(M\) [of dimension \(n\) and closed], which are linearly independent in each point of \(M\)? For \(\nu=1\) this question was already answered by Hopf [28] in 1927 based on the Hairy Ball Theorem of Poincare [39] from 1885 (for \(\mathbb{S}^{2}\)) and Brouwer [14] from 1912 (for \(\mathbb{S}^{n}\)). For closed manifolds \(M\) (compact without boundary) existence holds if and only if \(\chi(M)=0\). In 1955 Markus [34] showed that no conditions are needed in the noncompact case and established links to hyperbolic partial differential equations, thereby paving the way for Lorentzian geometry as needed for mathematical general relativity (it can be shown that on closed manifolds time travel is always possible, which is physically highly undesired). For \(\nu>1\) Stiefel's question is in its generality still open, both in the closed and open manifold setting, but led to many exciting developments in algebraic topology starting with the discovery of the Stiefel-Whitney classes in that very paper of Stiefel [45], and independently by Whitney [48], and later to characteristic classes. Besides deriving some basic necessary conditions, Stiefel also showed that the answer is yes for any \(\nu\) if \(M\) is a closed orientable 3-manifold. Adams [1] settled the question of the largest possible \(\nu\) for \(\mathbb{S}^{n}\) in a breakthrough paper from 1962. 
By the end of the 1960s several special cases were known and the above question (in particular, the special cases \(\nu=2\) and \(\nu=n\)) investigated heavily by Atiyah, Bott, Frank, Hirzebruch, Hopf, Kervaire, Mayer, Milnor, Steenrod, Thomas, Whitehead, Wu and many others for tangent \(\nu\)-fields without and with finite singularities (see survey [46] for an overview and references). In the 1970s powerful \(K\)-theory obstructions especially for \(\nu=2,3\) were found by Atiyah and Dupont [4, 20] (based on the Atiyah-Singer Index Theorem, see also [5]) for closed oriented manifolds allowing finite singularities. See also Crabb and Steer [19] using the same technique and Koschorke [32] treating these cases for nonorientable closed manifolds using a different approach. The more recent paper by Bokstedt-Dupont-Svane [12] gives a nice overview of known results and for \(\nu=4,5,6,7\) (under additional assumptions on \(M\) and \(n\)) computes the index, whose vanishing characterizes the existence of a \(\nu\)-frame, and thereby showes that it is a global invariant. Note that the above results all concern the closed case and often allow finite singularities. To the best of our knowledge Stiefel's question has not been much investigated in the open case and we are not aware of any general results for \(\nu>1\). Nonetheless, it is this open case that we are interested in and even forced to consider in the present paper, simply because closed manifolds do not admit time functions. We hope that, besides our modest aim of extending the metric Hopf-Rinow Theorem to semi-Riemannian manifolds, the tools developed in this paper can also help to shed light on necessary conditions for the existence of a tangent \(\nu\)-frame in the noncompact case in the future. By establishing the following result we show, in particular, that \((n-\nu,\nu)\)-spacetimes can be studied with techniques from the theory of cone structures developed in recent years by Fathi and Siconolfi [21, 22], Bernard and Suhr [10, 11], and Minguzzi [37]. **Theorem 1.5**.: _A \((n-\nu,\nu)\)-spacetime with index \(0<\nu\leq n\) admits a conformally invariant continuous proper cone structure \((M,C)\)._ In the framework of cone structures the notions of time functions and global hyperbolicity appearing in Theorem 1.2 have already been introduced and are well studied. In Section 4 we show that also the null distance can be extended to proper cone structures. The main result of this manuscript is an extension of Theorem 1.2, and thus of the metric Hopf-Rinow Theorem, to cone structures. **Theorem 1.6**.: _Let \((M,C)\) be a proper cone structure. The following are equivalent:_ * _There exists a time function_ \(\tau\colon M\to\mathbb{R}\) _such that_ \((M,\hat{d}_{\tau})\) _is a complete metric space._ * \((M,C)\) _is globally hyperbolic._ Together with Theorem 1.5 this characterization yields a conformal completeness-compactness theorem in semi-Riemannian geometry. **Corollary 1.7**.: _For any \((n-\nu,\nu)\)-spacetime with \(0<\nu\leq n\) we have that (b") is equivalent to (c")._ Above we have already seen that the Riemannian equivalence (b)\(\Longleftrightarrow\)(c) is a special case of the Lorentzian equivalence (b')\(\Longleftrightarrow\)(c'). Can we in the same way show that Corollary 1.7 for \(\nu\) and \(\nu+1\) are also related? In Section 5.4 we show that the answer is no. 
For one, if we extend a \((n-\nu,\nu)\)-spacetime \((M,g)\) orthogonally to a \((n-\nu,\nu+1)\)-spacetime \((M^{\prime},g^{\prime})=(\mathbb{R}\times M,-dt^{2}\oplus g)\), then the corresponding null distances cannot be directly related because they are _both_ conformally invariant. In fact, even the existence of a null distance on \((M^{\prime},g^{\prime})\) requires _additional_ properties and is not a given as in the Lorentzian product case. For the same reason global hyperbolicity does not carry over. Nonetheless, we can still recover parts of Theorem 1.3 in semi-Riemannian geometry as summarized in the following result obtained in Section 5. **Theorem 1.8**.: _Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime, \(0<\nu\leq n\), with time frame orientation defining vector fields \(X_{1},\dots,X_{\nu}\). Then one can orthogonally extend the metric and time frame orientation (with \(X_{\nu+1}=\partial_{t}\)) on \(M^{\prime}=\mathbb{R}\times M\) such that the following implications hold:_ \[(M^{\prime},-dt^{2}\oplus g)\text{ is a globally hyperbolic }(n-\nu,\nu+1) \text{-spacetime}\] \[\Longrightarrow (M,g)\text{ is a globally hyperbolic }(n-\nu,\nu) \text{-spacetime}\] \[\Longleftrightarrow (M^{\prime},dt^{2}\oplus g)\text{ is a globally hyperbolic }(n-\nu+1,\nu) \text{-spacetime}.\] _If \(\nu=n\) then the first two statements are equivalent._ In Example 5.31 we demonstrate that the first implication in Theorem 1.8 _cannot_ be reversed if \(\nu<n\). Naively speaking, one could even say that this is already evident in the \(\nu=0\) case since _all_ manifolds admit a Riemannian metric and _all_ Riemannian manifolds are globally hyperbolic (strictly speaking, Riemannian manifolds have empty/degenerate cones and are usually not considered), but already in Theorem 1.3 (i) globally hyperbolic Lorentzian products _require_ completeness of the Riemannian slice. In spite of such problems we have shown through our approach that many tools from Lorentzian geometry (cone structures, conformal methods etc.) as well as Corollary 1.7 can successfully be introduced and utilized in the semi-Riemannian setting. Thus semi-Riemannian geometry is not only close to Lorentzian geometry in a differential geometric sense (collected in [38]) but also in a metric context. In fact, it prompts the question of what the true Lorentzian (vs. general semi-Riemannian) features of recent nonsmooth Lorentzian geometric theories and approaches to quantum gravity are. Answering this question will be crucial for ultimately linking these approaches to the smooth setting of general relativity. Along these lines it could also be insightful to investigate the links between semi-Riemannian and Lorentzian geometry in PDE theory. After all, the notion of global hyperbolicity was introduced in this context by J. Leray in 1952 and is crucial for proving global uniqueness of solutions to wave equations. **Outline.** In Section 2 we recall the basic notions of Lorentzian geometry and causality theory including global hyperbolicity, time functions, and the null distance. We also derive new results for Lorentzian products needed in subsequent sections. In Section 3 we show that the Lorentzian Theorem 1.2 implies the metric Hopf-Rinow Theorem 1.1 (b)\(\Longleftrightarrow\)(c) by establishing Theorem 1.3. In Section 4 we recall basic results of the theory of cone structures and show when the null distance is a well-defined concept in this framework. We also prove the cone completeness-compactness Theorem 1.6. 
In Section 5 we introduce the notion of \((n-\nu,\nu)\)-spacetime structures for semi-Riemannian manifolds, discuss their existence and the link to differential/algebraic topology. We then proceed to show that they are continuous proper cone structures (Theorem 1.5) from which Corollary 1.7 follows. Finally, we relate the notions of (stable) causality and of global hyperbolicity for different \(\nu\) and \(n-\nu\) for products appearing in Theorem 1.8. **Notation.** Throughout we denote Riemannian manifolds by \((\Sigma,\sigma)\), Lorentzian and semi-Riemannian manifolds by \((M,g)\). We denote semi-Riemannian products by \((M^{\prime},g^{\prime})=(\mathbb{R}\times M,-dt^{2}\oplus g)\) and \((M^{\prime\prime},g^{\prime\prime})=(\mathbb{R}\times M,dt^{2}\oplus g)\). If a background Riemannian metric is used it is denoted by \(h\). We use the signature convention \((+,\dots,+)\) for Riemannian manifolds and \((+,\dots,+,-)\) for Lorentzian manifolds etc.

## 2. Causal and metric properties of Lorentzian manifolds

In this section we recall basic notions and results of Lorentzian geometry (for more details see [7, 36, 38]) and the more recent null distance of Sormani and Vega [43]. Readers familiar with Lorentzian geometry or the null distance can skip Sections 2.1 or 2.2, respectively. In Section 2.3 we obtain new results about Lorentzian products that we apply in Section 3.

### Causality and time

A _semi-Riemannian manifold_ of signature \((n-\nu,\nu)\) is an \(n\)-dimensional manifold \(M\) equipped with a metric tensor \(g\), i.e., a covariant 2-tensor that is symmetric and nondegenerate, with constant index \[\nu=\max\{\dim S\,;\,S\text{ subspace of }T_{p}M\text{ for which }g_{p}|_{S}\text{ is negative definite}\}.\] A _Lorentzian manifold_ has signature \((n-1,1)\). In this section we assume that \((M,g)\) is a Lorentzian manifold; in Section 5 we consider the general semi-Riemannian setting. A tangent vector \(v\in T_{p}M\backslash\{0\}\) is _timelike_ if \(g(v,v)<0\), _null_ if \(g(v,v)=0\), and _spacelike_ if \(g(v,v)>0\). A tangent vector is _causal_ if it is timelike or null. By convention we consider \(v=0\) to be spacelike. A Lorentzian manifold is said to be time orientable if a continuous choice of future light cones is possible by means of a nowhere vanishing continuous timelike vector field \(X\in\mathfrak{X}(M)\). A time oriented Lorentzian manifold is a triple \((M,g,X)\) and is called a _spacetime_ (we usually omit the \(X\) in this notation). It is well known that a manifold admits a spacetime structure if and only if it is either noncompact or compact with vanishing Euler characteristic (see, for instance, [38, p. 49] and Section 5.1). A causal vector \(v\in T_{p}M\setminus\{0\}\) in a spacetime \((M,g)\) is said to be _future directed_ if \(g_{p}(v,X(p))<0\). The class of locally Lipschitz curves (with respect to any background Riemannian metric) induces two relations on a spacetime, the _chronological relation_ \[p\ll q\Longleftrightarrow\exists\text{ future directed timelike curve from $p$ to $q$},\] and the _causal relation_ \[p\leq q\Longleftrightarrow\exists\text{ future directed causal curve from $p$ to $q$},\text{ or $p=q$},\] with the notation that \(p<q\) if \(p\leq q\) and \(p\neq q\). We write \[I^{+}(p) =\{q\in M\,;\,p\ll q\},\] \[J^{+}(p) =\{q\in M\,;\,p\leq q\}\] for the chronological future and causal future of \(p\), respectively. Analogous definitions apply for the past (with \(-\)). 
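For instance, in two-dimensional Minkowski space \((\mathbb{R}^{2},-dt^{2}+dx^{2})\) with time orientation \(X=\partial_{t}\), the origin satisfies \[I^{+}(0)=\{(t,x)\,;\,t>|x|\},\qquad J^{+}(0)=\{(t,x)\,;\,t\geq|x|\},\] so the chronological future is the open upper light cone and the causal future its closure.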
The chronological relation is open, transitive, and contained in the causal relation, which itself is transitive and reflexive. Every closed (compact without boundary) Lorentzian manifold admits closed timelike curves, and thus allows time travel. For physical and geometric reasons we would like to exclude this setting and thus implicitly focus on the noncompact case. We require a mild additional property of our spacetimes, namely that they admit time functions. **Definition 2.1**.: Let \((M,g)\) be a spacetime. A continuous function \(\tau\colon M\to\mathbb{R}\) is a _time function_ if \[p<q\Longrightarrow\tau(p)<\tau(q).\] A smooth time function \(\tau\) is called _temporal_ if \(d\tau(v)>0\) for all future directed causal vectors \(v\) (equivalently, \(\nabla\tau\) is past directed timelike). The existence of a time function conveniently implies that a spacetime is _causal_, i.e., does not admit closed causal curves (and hence the causal relation is antisymmetric, thus an order relation). To be precise, the existence of a time function is equivalent to causality being a stable property, as shown by Hawking [26] (see also Proposition 4.8) who also argues that any physically reasonable theory of gravity must be _stably causal_. The following more refined property of a spacetime is indispensable for a well-posed initial value problem for the Einstein equations in general relativity, the singularity theorems of Penrose and Hawking, Lorentzian splitting theorems, and indeed for most results in Lorentzian geometry. **Definition 2.2**.: A spacetime \((M,g)\) is _globally hyperbolic_ if it is causal and all causal diamonds \(J^{+}(p)\cap J^{-}(q)\), \(p,q\in M\), are compact. There are many important characterizations and implications of global hyperbolicity which have been obtained since the 1950s; see [17, Section 1] for an outline and Theorem 4.10 (for proper cone structures) for some characterizations relevant for this work.

### Null distance

Sormani and Vega [43] showed that any stably causal spacetime \((M,g)\) can be equipped with a conformally invariant length metric space structure that respects and encodes the causal structure and topology of a spacetime. We recall some basic constructions below. See [2, 3, 17, 23, 40, 43, 47] for more details and insights, and Section 4.3 for some proofs in the more general setting of cone structures. The chronological relation being open implies that any two points \(p,q\in M\) can be joined by a locally Lipschitz path \(\beta\colon I\to M\) which is _piecewise causal_, i.e., it is either future or past directed on each subinterval of a partition \(\inf I=a_{0}<a_{1}<\ldots<a_{k}=\sup I\) of the interval \(I\) [43, Lemma 3.5]. The class of piecewise causal curves on \(M\) is denoted by \(\hat{\mathcal{A}}\). Since \((M,g)\) is a stably causal spacetime it can be equipped with a time function \(\tau\). The _null length_ of \(\beta\in\hat{\mathcal{A}}\) is defined as \[\hat{L}_{\tau}(\beta)=\sum_{i=1}^{k}|(\tau\circ\beta)(a_{i})-(\tau\circ\beta)(a_{i-1})|.\] It is easy to see that the null length is additive, depends continuously on the curve parameter, and is invariant under reparametrizations. Based on this length structure one can obtain a length metric. **Definition 2.3**.: Let \((M,g)\) be a spacetime with time function \(\tau\colon M\to\mathbb{R}\). 
The _null distance_ \(\hat{d}_{\tau}\) between two points \(p,q\in M\) is \[\hat{d}_{\tau}(p,q)=\inf\{\hat{L}_{\tau}(\beta)\,;\,\beta\in\hat{\mathcal{A}} \text{ from }p\text{ to }q\}.\] By definition, \(\hat{d}_{\tau}\) is a conformally invariant pseudometric on \((M,g)\). That the null distance is distinguishing asks slightly more of the time function used. Such time functions always exist already in the stably causal setting; for example, one may use temporal functions [9]. **Theorem 2.4** (Sormani-Vega [43, Theorem 4.6]).: _Let \((M,g)\) be a spacetime with locally anti-Lipschitz time function \(\tau\colon M\to\mathbb{R}\), i.e., for every point there exist a neighborhood \(U\) and a Riemannian metric \(h\) such that_ \[p,q\in U\text{ and }p\leq q\Longrightarrow\tau(q)-\tau(p)\geq d_{h}(p,q).\] _Then the null distance \(\hat{d}_{\tau}\) is a conformally invariant metric on \(M\) that induces the manifold topology._ The author and B. Allen have shown the following important connection to completeness that was used in the proof of Theorem 1.2 (c')\(\Longrightarrow\)(b'). **Theorem 2.5** (Allen-Burtscher [3, Theorem 1.6, Corollary 3.15]).: _Let \((M,g)\) be a spacetime with time function \(\tau\) that is globally anti-Lipschitz with respect to a complete (Riemannian) metric. Then \((M,\hat{d}_{\tau})\) is a complete metric space._ Note that the completeness needed in the assumption of the statement is only the metric completeness property (b) in the Hopf-Rinow Theorem 1.1, in particular, the proof does _not_ require the use of the infinity postulate (a), as can be seen from the fact that it holds for _any_ complete metric that induces the manifold topology [3, Theorem 1.6]. The author and L. Garcia-Heveling have in [17, Section 1] established more connections of the null distance to Riemannian geometry. It should be noted, in particular, that weak temporal functions are precisely those time functions \(\tau\) that induce locally equivalent metrics \(\hat{d}_{\tau}\) [17, Corollary 1.8].

### Lorentzian products

Every Riemannian manifold \((\Sigma,\sigma)\) can be studied as a Lorentzian manifold by considering the canonical product \[(\Sigma^{\prime},\sigma^{\prime})=(\mathbb{R}\times\Sigma,-dt^{2}\oplus\sigma).\] In what follows we prove some results about such Lorentzian product spacetimes needed in Section 3. Although our focus is on the above case it is easy to see that most results also hold for products \(N\oplus\Sigma\) for \(N\) a suitable Lorentzian manifold and that similar results are true also for warped product spacetimes (see also [3, 7]). The canonical projection \(t=\pi_{0}\colon\Sigma^{\prime}\to\mathbb{R}\), \(t(p_{0},p_{\Sigma}):=p_{0}\), satisfies \[dt_{p}(v)=-\sigma^{\prime}(v,\partial_{t})=v_{0}>0\] for future directed causal vectors \(v\in T_{p}\Sigma^{\prime}\cong\mathbb{R}\times T_{p_{\Sigma}}\Sigma\). Thus \(t\) is a smooth temporal function of \((\Sigma^{\prime},\sigma^{\prime})\). The null distance \(\hat{d}_{t}\) on \(\Sigma^{\prime}\) is thus well-defined by Theorem 2.4 and by [3, Lemma 4.4] of the form \[\hat{d}_{t}(p,q)=\begin{cases}|t(q)-t(p)|,&q\in J^{\pm}(p),\\ d_{\sigma}(p_{\Sigma},q_{\Sigma}),&\text{otherwise}.\end{cases} \tag{2.1}\] We show slightly more in the following result. **Lemma 2.6**.: _Let \((\Sigma,\sigma)\) be a Riemannian manifold and \((\Sigma^{\prime},\sigma^{\prime})\) be the canonical Lorentzian product. 
Then_ \[\hat{d}_{t}(p,q)=\max\{|t(q)-t(p)|,d_{\sigma}(p_{\Sigma},q_{\Sigma})\}, \tag{2.2}\] _where \(t(p)=\pi_{0}(p)=p_{0}\) and \(p_{\Sigma}=\pi_{\Sigma}(p)\) are the orthogonal projections onto \(\mathbb{R}\) and \(\Sigma\), respectively._ Proof.: By (2.1) it is clear that the \(\leq\) inequality in (2.2) holds. It remains to be shown that the converse \(\geq\) inequality holds too. We distinguish the two cases mentioned in (2.1). If \(q\in J^{+}(p)\), then there exists a future directed causal curve \(\gamma\colon I\to\Sigma^{\prime}\) from \(p\) to \(q\). We write the different components of \(\gamma\) as \(\gamma_{0}=t\circ\gamma\) and \(\gamma_{\Sigma}=\pi_{\Sigma}\circ\gamma\). That \(\gamma\) is causal means that \(\dot{\gamma}_{0}\geq\|\dot{\gamma}_{\Sigma}\|_{\sigma}\) (the positive sign of \(\dot{\gamma}_{0}\) is due to the future directedness of \(\gamma\)), and thus by the fundamental theorem of calculus \[t(q)-t(p)=\int_{I}\dot{\gamma}_{0}(s)ds\geq\int_{I}\|\dot{\gamma}_{\Sigma}(s)\|_{\sigma}ds=L_{\sigma}(\gamma_{\Sigma})\geq d_{\sigma}(p_{\Sigma},q_{\Sigma}).\] The proof for \(q\in J^{-}(p)\) is the same (just exchange the roles of \(p\) and \(q\)). Thus in case \(p\) and \(q\) are causally related the reverse inequality \(\geq\) in (2.2) holds. Suppose that \(q\not\in J^{\pm}(p)\). Then it follows immediately from the definition of the null distance and (2.1) that \[d_{\sigma}(p_{\Sigma},q_{\Sigma})=\hat{d}_{t}(p,q)\geq|t(q)-t(p)|,\] and we have thus shown \(\geq\) of (2.2). We use Lemma 2.6 to characterize the causal boundary. **Lemma 2.7**.: _Let \((\Sigma,\sigma)\) be a Riemannian manifold and let \((\Sigma^{\prime},\sigma^{\prime})\) be the canonical Lorentzian product. Then_ \[q\in\partial J^{+}(p)\Longleftrightarrow t(q)-t(p)=\hat{d}_{t}(p,q)=d_{ \sigma}(p_{\Sigma},q_{\Sigma}). \tag{2.3}\] Proof.: \((\Longrightarrow)\) Suppose that \(q\in\partial J^{+}(p)=\overline{J^{+}(p)}\cap\overline{J^{+}(p)^{\complement}}\). The identity (2.1) together with the continuity of \(t\) and \(\hat{d}_{t}\) implies the first equality in (2.3) for all \(q\in\overline{J^{+}(p)}\) (approximation from inside), and (2.1) together with the continuity of \(\pi_{\Sigma}\), \(d_{\sigma}\) and \(\hat{d}_{t}\) implies the second equality for all \(q\in\overline{J^{+}(p)^{\complement}}\) (approximation from outside). \((\Longleftarrow)\) That \(q\) is a boundary point if both equalities in (2.3) hold follows from Lemma 2.6 in a similar fashion. To be more precise, we show that \[d_{\sigma}(p_{\Sigma},q_{\Sigma})=t(q)-t(p)\] leads to a contradiction if \(q\) is not a boundary point of \(J^{+}(p)\) based on openness and the product structure: Suppose \(q\in I^{+}(p)\). Since \(I^{+}(p)\) is open and \(\hat{d}_{t}\) induces the manifold topology, there exists an \(\varepsilon>0\) such that \[\hat{B}_{2\varepsilon}(q)=\{x\in\Sigma^{\prime}\,;\,\hat{d}_{t}(q,x)<2 \varepsilon\}\subseteq I^{+}(p).\] Clearly, the point \(q^{\prime}=(q_{0}-\varepsilon,q_{\Sigma})\in\hat{B}_{2\varepsilon}(q)\) and thus satisfies \[t(q^{\prime})-t(p)=t(q)-\varepsilon-t(p)<d_{\sigma}(p_{\Sigma},q_{\Sigma})=d_ {\sigma}(p_{\Sigma},q^{\prime}_{\Sigma}).\] By (2.1) the left hand side is equal to \(\hat{d}_{t}(p,q^{\prime})\), hence this contradicts (2.2) applied to \(p\) and \(q^{\prime}\). Suppose \(q\in\overline{I^{+}(p)}^{\complement}\). 
Since \(\overline{I^{+}(p)}^{\complement}\) is open, similarly, we have a point \(q^{\prime\prime}=(q_{0}+\varepsilon,q_{\Sigma})\in\overline{I^{+}(p)}^{\complement}\) satisfying \[t(q^{\prime\prime})-t(p)=t(q)+\varepsilon-t(p)>d_{\sigma}(p_{\Sigma},q_{ \Sigma})=d_{\sigma}(p_{\Sigma},q^{\prime\prime}_{\Sigma}).\] By (2.1) the right hand side is equal to \(\hat{d}_{t}(p,q^{\prime\prime})\), again a contradiction to (2.2). Because of the disjoint union \(\Sigma^{\prime}=I^{+}(p)\sqcup\partial I^{+}(p)\sqcup(\overline{I^{+}(p)})^{ \complement}\), and the fact that we already know that the equalities in (2.3) hold _at least_ on the set \(\partial I^{+}(p)=\partial J^{+}(p)\) by the first implication \((\Longrightarrow)\), we are done. The above findings are interesting independently of completeness or the Hopf-Rinow Theorem. For us they are crucial for proving Theorem 1.3 in Section 3 because we do _not_ assume geodesic completeness of \((\Sigma,\sigma)\) nor any knowledge of the Hopf-Rinow Theorem 1.1 in parts or as a whole (otherwise we would obtain a circular argument). It is useful to note, however, that with the additional assumption of geodesic completeness a lot more can be shown. _Remark 2.8_ (Geodesic completeness of the fiber and global hyperbolicity).: It has long been known that a Lorentzian warped product \(I\times_{f}\Sigma\) is globally hyperbolic if and only if the fibers \((\Sigma,\sigma)\) are geodesically complete Riemannian manifolds [7, Theorem 3.66]. In the case of Lorentzian products one can show that both statements are also equivalent to the Lorentzian product \((I\times\Sigma,-dt^{2}\oplus\sigma)\) being geodesically complete [7, Theorem 3.67]. Geodesic completeness of \((\Sigma,\sigma)\) can furthermore be employed constructively for proofs about the null distance on \(\mathbb{R}\times\Sigma\). _Remark 2.9_ (Geodesic completeness of the fiber and causality encodation of the null distance).: Suppose \(\Sigma\) is a geodesically complete Riemannian manifold. Given \(p,q\in\Sigma^{\prime}=I\times\Sigma\) we can then always construct a length-minimizing Riemannian geodesic \(\alpha\) between \(p_{\Sigma}\) and \(q_{\Sigma}\) in \(\Sigma\). We can lift \(\alpha\) to a curve \(\gamma(s)=(s,\alpha(s))\) (recall that geodesics satisfy \(\|\dot{\alpha}\|_{\sigma}=\text{const.}\)) connecting \(\gamma(t(p))=p\) to \(\gamma(t(q))=q\) in \(\Sigma^{\prime}\). This is particularly useful when \(p\) and \(q\) are such that \[\hat{d}_{t}(p,q)=t(q)-t(p)\] holds, because then, by Lemma 2.6, \(L_{\sigma}(\alpha)=d_{\sigma}(p_{\Sigma},q_{\Sigma})\leq\hat{d}_{t}(p,q)=t(q)-t(p)\) and thus \(\|\dot{\alpha}\|_{\sigma}\leq 1\). In other words, \(\sigma^{\prime}(\dot{\gamma},\dot{\gamma})=-1+\|\dot{\alpha}\|_{\sigma}^{2}\leq 0\), meaning that \(\gamma\) is a future directed causal curve from \(p\) to \(q\). Hence we have shown that the canonical null distance of the Lorentzian product \((I\times\Sigma,-dt^{2}\oplus\sigma)\) encodes causality, i.e., \[p\leq q\Longleftrightarrow\hat{d}_{t}(p,q)=t(q)-t(p). \tag{2.4}\] Of course we could have also inferred this from Lemma 2.7 and the definition of the null distance (since (\(\Longrightarrow\)) in (2.4) always holds). With a significantly more involved proof one can show that Lorentzian warped products with complete fibers as well as all globally hyperbolic spacetimes encode causality globally for sensible time functions (see [43, Theorem 3.25] and [17, Theorem 1.9]). On the other hand, it is easy to construct examples of Lorentzian products with incomplete fibers that do not satisfy (2.4). 
Consider, for example, \(\mathbb{R}\times(\mathbb{R}^{n}\setminus\{0\})\) equipped with the metric induced from the Minkowski metric and the canonical time function. Nevertheless, any null distance on any spacetime encodes causality locally as has been shown independently by Sakovich and Sormani [40, Theorem 1.1] and Garcia-Heveling and the author [17, Theorem 3.4]. **Theorem 2.10** ([17, Theorem 3.4]).: _Let \((M,g)\) be a spacetime and \(\tau\) a temporal function. Then at every point \(x\in M\) there is a neighborhood \(U\) of \(x\) such that for all \(p,q\in U\)_ \[p\leq q\Longleftrightarrow\hat{d}_{\tau}(p,q)=\tau(q)-\tau(p).\]

## 3. Theorem 1.2 implies the metric Hopf-Rinow Theorem

In this section we are solely concerned with Lorentzian products, i.e., spacetimes of the form \[(\Sigma^{\prime},\sigma^{\prime})=(\mathbb{R}\times\Sigma,-dt^{2}\oplus\sigma)\] with \((\Sigma,\sigma)\) a Riemannian manifold. We prove Theorem 1.3 (i) and (ii) in Sections 3.1 and 3.2, respectively. In other words we show that the metric Hopf-Rinow Theorem 1.1 (b)\(\Longleftrightarrow\)(c) is a special case of Theorem 1.2, and thus can also be viewed as a special case of Theorem 1.6.

### The Heine-Borel property vs. global hyperbolicity

The product \((\Sigma^{\prime},\sigma^{\prime})\) is always a _stably causal_ spacetime independently of the properties of \((\Sigma,\sigma)\); thus it remains to relate the Riemannian compactness postulate to the corresponding conformal compactness postulate for Lorentzian products. Strictly speaking we prove nothing new, but our proof is new: It is well-known that geodesic completeness of the Riemannian fiber \((\Sigma,\sigma)\) is equivalent to the Lorentzian product \(I\times\Sigma\) being globally hyperbolic (see Remark 2.8), and together with the Hopf-Rinow Theorem 1.1 (a)\(\Longleftrightarrow\)(b) we would therefore be done. But we need to avoid the sophisticated analytic notion of geodesic completeness and just relate the two topological conditions directly. By doing away with (geodesic) completeness we can avoid a circular argument and thus ensure that Theorem 1.2 really implies Theorem 1.1 (b)\(\Longleftrightarrow\)(c). **Proposition 3.1**.: _Let \((\Sigma,\sigma)\) be a Riemannian manifold and let \((\Sigma^{\prime},\sigma^{\prime})=(\mathbb{R}\times\Sigma,-dt^{2}\oplus\sigma)\) be the corresponding Lorentzian product manifold. Then_ \[(\Sigma,\sigma)\text{ has the Heine-Borel property}\Longleftrightarrow(\Sigma^{\prime},\sigma^{\prime})\text{ is globally hyperbolic.}\] Proof.: By definition, \(t\) is a temporal function for \(\Sigma^{\prime}\), thus \(\Sigma^{\prime}\) is stably causal. It thus remains to be shown that \[(\Sigma,\sigma)\text{ has the Heine-Borel property}\Longleftrightarrow\text{causal diamonds in }(\Sigma^{\prime},\sigma^{\prime})\text{ are compact.}\] \((\Longrightarrow)\) Suppose \((\Sigma,\sigma)\) satisfies the Heine-Borel property. Let \(p,q\in\Sigma^{\prime}\) and consider the causal diamond \(J^{+}(p)\cap J^{-}(q)\). If \(J^{+}(p)\cap J^{-}(q)=\emptyset\) there is nothing to prove, so suppose that \(J^{+}(p)\cap J^{-}(q)\neq\emptyset\). Then \(p\leq q\), and we may assume that \(p\neq q\) since otherwise the causal diamond is the single point \(p\) by causality. Hence, by (2.1) and the definiteness of \(\hat{d}_{t}\), it follows that \(\hat{d}_{t}(p,q)=t(q)-t(p)>0\). We show that \(J^{+}(p)\cap J^{-}(q)\) is (i) contained in a compact set and (ii) closed, hence compact itself. (i) Let \(r:=t(q)-t(p)>0\). For any \(x\in J^{+}(p)\cap J^{-}(q)\), by (2.1), \[t(x)\in[t(p),t(q)]. 
\tag{3.1}\] Moreover, (3.1) together with Lemma 2.6 implies that \[\max\{d_{\sigma}(p_{\Sigma},x_{\Sigma}),d_{\sigma}(x_{\Sigma},q_{\Sigma})\} \leq\max\{\hat{d}_{t}(p,x),\hat{d}_{t}(x,q)\}\leq r,\] and hence \[x_{\Sigma}\in\overline{B_{r}^{\sigma}(p_{\Sigma})\cap B_{r}^{\sigma}(q_{\Sigma})}. \tag{3.2}\] Together, (3.1) and (3.2) imply that the causal diamond is contained in the intersection of two closed cylinders, \[J^{+}(p)\cap J^{-}(q)\subseteq[t(p),t(q)]\times\overline{B_{r}^{\sigma}(p_{ \Sigma})\cap B_{r}^{\sigma}(q_{\Sigma})}=:A(p,q).\] The interval \([t(p),t(q)]\) is closed and hence compact in \(\mathbb{R}\). Moreover, the set \(\overline{B_{r}^{\sigma}(p_{\Sigma})\cap B_{r}^{\sigma}(q_{\Sigma})}\) is bounded and closed in \(\Sigma\), thus by the Heine-Borel property of \(\Sigma\) it is also compact. Hence \(A(p,q)\) is compact. (ii) It remains to be shown that the causal diamonds \(J^{+}(p)\cap J^{-}(q)\) are closed. We show that \(J^{+}(p)\) is closed or, to be more precise, that \(\partial J^{+}(p)\subseteq J^{+}(p)\). Suppose that \(x=(x_{0},x_{\Sigma})\in\partial J^{+}(p)\) with \(x_{0}=t(x)\). We construct a particular sequence \((x^{n})_{n}\) approximating \(x\) from inside. By Lemma 2.7 \[T:=\hat{d}_{t}(p,x)=t(x)-t(p)=d_{\sigma}(p_{\Sigma},x_{\Sigma}).\] In particular, \(x_{\Sigma}\in\partial B_{T}^{\sigma}(p_{\Sigma})\subseteq\Sigma\), meaning that there is a sequence \(x_{\Sigma}^{n}\in B_{T}^{\sigma}(p_{\Sigma})\subseteq\Sigma\) in the interior of this ball approximating \(x_{\Sigma}\), i.e., satisfying \[d_{\sigma}(x_{\Sigma}^{n},x_{\Sigma})<\frac{1}{n},\qquad d_{\sigma}(x_{\Sigma}^{n},p_{\Sigma})<T.\] We can lift the sequence \((x_{\Sigma}^{n})_{n}\) to a sequence \((x^{n})_{n}\) in \(\Sigma^{\prime}\) by setting \(x^{n}=(x_{0},x_{\Sigma}^{n})\). Then \(x^{n}\) converges to \(x\) with respect to the null distance \(\hat{d}_{t}\) because \[\hat{d}_{t}(x,x^{n})=\max\{0,d_{\sigma}(x_{\Sigma}^{n},x_{\Sigma})\}<\frac{1}{n}\] by Lemma 2.6. Since \(d_{\sigma}(x_{\Sigma}^{n},p_{\Sigma})<T=\hat{d}_{t}(x^{n},p)\) it follows from (2.1) and Lemmas 2.6 and 2.7 that \(x^{n}\in I^{+}(p)=J^{+}(p)\setminus\partial J^{+}(p)\), hence there exists a timelike curve \(\gamma_{n}\) with null length \[\hat{L}_{t}(\gamma_{n})=t(x^{n})-t(p)=t(x)-t(p)=T.\] Moreover, each \(\gamma_{n}\) is contained in the compact set \(A(p,q)\) of Step (i). Recall that Allen and the author have shown that for piecewise causal curves \(\hat{L}_{t}=L_{\hat{d}_{t}}\) (the latter being lower semicontinuous by [15, Prop. 2.3.4] for rectifiable paths) and that \(\hat{d}_{t}\) is an intrinsic metric [3, Prop. 3.8]. By the Arzela-Ascoli Theorem a subsequence of \((\gamma_{n})_{n}\) thus uniformly converges to a rectifiable limit curve \(\gamma\) in \(A(p,q)\) connecting \(p\) and \(x\), which is Lipschitz with respect to \(\hat{d}_{t}\) (since the image of \(\gamma\) is compact, it is even Lipschitz continuous in the usual sense with respect to any Riemannian metric by [17, Theorem 1.7]) and by lower semicontinuity of the length functional \[t(x)-t(p)=\hat{d}_{t}(p,x)\leq L_{\hat{d}_{t}}(\gamma)\leq\lim_{n\to\infty} \hat{L}_{t}(\gamma_{n})=t(x)-t(p)=T. \tag{3.3}\] Assume without loss of generality that \(\gamma\colon[0,1]\to M\). 
Condition (3.3) can be localized, meaning that \[L_{\hat{d}_{t}}(\gamma|_{[s,s^{\prime}]})\leq t(\gamma(s^{\prime}))-t(\gamma(s)), \tag{3.4}\] because (3.3) and the reverse strict inequality in (3.4) for some \(s<s^{\prime}\) would imply \[t(x)-t(p) =\hat{d}_{t}(p,x)=L_{\hat{d}_{t}}(\gamma)\] \[>|t(x)-t(\gamma(s^{\prime}))|+t(\gamma(s^{\prime}))-t(\gamma(s))+|t(\gamma(s))-t(p)|\] \[\geq t(x)-t(p),\] a contradiction. It remains to be shown that \(\gamma\) connects \(p\) and \(x\) causally. Consider the set \[A:=\{s\in[0,1]\,;\,\gamma(s)\in J^{+}(p)\}.\] Clearly, \(A\neq\emptyset\) because \(\gamma(0)=p\in J^{+}(p)\). By (3.4) all \(s\leq s^{\prime}\) in \([0,1]\) satisfy \[\hat{d}_{t}(\gamma(s),\gamma(s^{\prime}))=t(\gamma(s^{\prime}))-t(\gamma(s)). \tag{3.5}\] Since the null distance locally around \(p\) encodes causality by Theorem 2.10, we thus know that \(\gamma(s)\in J^{+}(p)\) for small \(s\), i.e., \(A\) is open. Set \[s_{0}:=\sup A.\] Again, local causality encodation around \(\gamma(s_{0})\) together with (3.5) applied to \(s\in A\) sufficiently close to \(s_{0}\) yields that \(s_{0}\in A\), i.e., \(A\) is also closed. Thus \(A=[0,1]\) and therefore \(x=\gamma(1)\in J^{+}(p)\), as desired. \((\Longleftarrow)\) Suppose \(C\) is a closed and bounded subset of \(\Sigma\), and the causal diamonds in \((\Sigma^{\prime},\sigma^{\prime})\) are compact. The product \(\widetilde{C}:=\{0\}\times C\) is closed in \(\Sigma^{\prime}\). We can assume that \(C\) is nonempty, and that there is a \(p_{\Sigma}\in\Sigma\) and \(r>0\) such that \(C\) is contained in the open \(d_{\sigma}\)-ball \(B^{\sigma}_{r}(p_{\Sigma})\). If \(x\in C\), then \(d_{\sigma}(p_{\Sigma},x)<r\), and hence there is a curve \(\alpha\) from \(p_{\Sigma}\) to \(x\) in \(\Sigma\) such that \(L_{\sigma}(\alpha)<r\). We may assume that \(\alpha\) is parametrized with constant speed \(\|\dot{\alpha}\|_{\sigma}\leq 1\) on \([-r,0]\), and lift it to a curve \(\gamma(s):=(s,\alpha(s))\) from \(p^{-}:=(-r,p_{\Sigma})\) to \((0,x)\) in \(\Sigma^{\prime}\). Since \[\sigma^{\prime}(\dot{\gamma},\dot{\gamma})=-1+\|\dot{\alpha}\|_{\sigma}^{2}\leq 0,\] \(\gamma\) is future directed causal and hence \((0,x)\in J^{+}(p^{-})\). Similarly it follows that \((0,x)\in J^{-}(p^{+})\) for \(p^{+}:=(r,p_{\Sigma})\). Thus the closed set \(\widetilde{C}\) is contained in the (by assumption) compact causal diamond \(J^{+}(p^{-})\cap J^{-}(p^{+})\), and therefore compact itself. Since the canonical projection \(\pi_{\Sigma}\colon\Sigma^{\prime}\to\Sigma\) onto the second component is continuous (in the product topology), \(C=\pi_{\Sigma}(\widetilde{C})\) is also compact. Therefore, \(\Sigma\) satisfies the Heine-Borel property.

### Metric completeness

In the same spirit as in Section 3.1 we directly relate the metric completeness property of \(\Sigma\) to that of \(\Sigma^{\prime}=\mathbb{R}\times\Sigma\). **Proposition 3.2**.: _Let \((\Sigma,\sigma)\) be a Riemannian manifold and \((\Sigma^{\prime},\sigma^{\prime})=(\mathbb{R}\times\Sigma,-dt^{2}\oplus\sigma)\) be the canonical Lorentzian product manifold with time function \(t=\pi_{0}\). Then the metric spaces satisfy_ \[(\Sigma,d_{\sigma})\text{ is complete }\Longleftrightarrow(\Sigma^{\prime}, \hat{d}_{t})\text{ is complete}.\] Proof.: \((\Longrightarrow)\) Assume that \((\Sigma,d_{\sigma})\) is a complete metric space. To show the same of \((\Sigma^{\prime},\hat{d}_{t})\) assume that \(p_{n}=(p_{0}^{n},p_{\Sigma}^{n})_{n}\) is a Cauchy sequence in \((\Sigma^{\prime},\hat{d}_{t})\). 
Since by Lemma 2.6 \[\hat{d}_{t}(p_{m},p_{n})\geq\frac{1}{2}\left(|p_{0}^{m}-p_{0}^{n}|+d_{\sigma}(p_{\Sigma}^{m},p_{\Sigma}^{n})\right)\] it follows that \((p_{0}^{n})_{n}\) and \((p_{\Sigma}^{n})_{n}\) are Cauchy sequences in \(\mathbb{R}\) and \(\Sigma\), respectively. Since both spaces are complete, there is a \(p_{\infty}=(p_{0}^{\infty},p_{\Sigma}^{\infty})\in\mathbb{R}\times\Sigma\) such that, again by Lemma 2.6, as \(n\to\infty\), \[\hat{d}_{t}(p_{n},p_{\infty})=\max\{|p_{0}^{n}-p_{0}^{\infty}|,d_{\sigma}(p_{ \Sigma}^{n},p_{\Sigma}^{\infty})\}\to 0.\] Hence \(p_{\infty}\) is the limit of \((p_{n})_{n}\), and \((\Sigma^{\prime},\hat{d}_{t})\) is complete. \((\Longleftarrow)\) Suppose \((\Sigma^{\prime},\hat{d}_{t})\) is a complete metric space. Let \((p_{\Sigma}^{n})_{n}\) be a Cauchy sequence in \(\Sigma\). Since by (2.1) \[\hat{d}_{t}((0,p_{\Sigma}^{m}),(0,p_{\Sigma}^{n}))=d_{\sigma}(p_{\Sigma}^{m}, p_{\Sigma}^{n})\] the sequence \(p_{n}=(0,p_{\Sigma}^{n})\), \(n\in\mathbb{N}\), is a Cauchy sequence in \(\Sigma^{\prime}\). Hence there exists a limit point \(p_{\infty}=(p_{0}^{\infty},p_{\Sigma}^{\infty})\) in \(\Sigma^{\prime}\). Since \(t=\pi_{0}\) is continuous it follows that \(p_{0}^{\infty}=0\). Therefore, as \(n\to\infty\), \[d_{\sigma}(p_{\Sigma}^{n},p_{\Sigma}^{\infty})=\hat{d}_{t}(p_{n},p_{\infty}) \to 0,\] and so \(p_{\Sigma}^{\infty}\) is the limit of the sequence \((p_{\Sigma}^{n})_{n}\) in \((\Sigma,d_{\sigma})\). Thus \((\Sigma,d_{\sigma})\) is complete.

## 4. Cone structures

The aim of this section is to introduce all notions appearing in Theorem 1.6 and to prove it. In Sections 4.1 and 4.2 we recall some definitions and results of the theory of cone structures. Readers familiar with the recent theory of cone structures as in [10, 11, 21, 22, 37] may skip this part. In Section 4.3 we introduce the null distance and show that it is a well-defined metric on proper cone structures. In Section 4.4 we prove that the Lorentzian Theorem 1.2 also has a cone version, namely Theorem 1.6. These extensions are straightforward but nonetheless have to be checked carefully. The main advantage of setting up everything in the cone framework is, apart from its intrinsic interest, that we can efficiently apply it to our new notion of semi-Riemannian spacetimes in Section 5.

### Cone structures

We mostly use the conventions of Minguzzi [37], which builds on the pioneering work of Fathi and Siconolfi [21, 22] and results of Bernard and Suhr [10, 11]. **Definition 4.1**.: Let \(V\) be a finite-dimensional vector space. A _convex cone_ \(C\) is a subset of \(V\setminus\{0\}\) such that \[v\in C,\ s>0\Longrightarrow sv\in C,\] and such that \(C\cup\{0\}\) is convex. A cone \(C\) is _sharp_ (or regular [10]) if \(C\cup\{0\}\) does not contain any line passing through the origin. A cone is called _closed_ if it is closed in the topology induced on \(V\setminus\{0\}\) by the topology of \(V\). A _proper cone_ is a closed sharp convex cone with nonempty interior. **Definition 4.2**.: Let \(M\) be a manifold. A _cone structure_ \((M,C)\) is a multivalued map \(p\mapsto C_{p}\), where \(C_{p}\subseteq T_{p}M\setminus\{0\}\) is a closed sharp convex nonempty cone. A _closed cone structure_ is a cone structure which is a closed subbundle of the slit tangent bundle \(TM\setminus(TM)_{0}\), where \((TM)_{0}=\{0_{p}\,;\,p\in M\}\) denotes the zero section. A _proper cone structure_ is a closed cone structure in which the cone bundle is proper, i.e., \((\operatorname{int}C)_{p}\neq\emptyset\) for all \(p\in M\). 
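To illustrate Definitions 4.1 and 4.2 in the simplest case: on Minkowski space \(M=\mathbb{R}^{n}\) with metric \(\eta=-dt^{2}+(dx^{1})^{2}+\dots+(dx^{n-1})^{2}\) the fiberwise constant choice \[C_{p}=\{v\in T_{p}M\setminus\{0\}\,;\,\eta(v,v)\leq 0\text{ and }dt(v)>0\}\] defines a proper cone structure: each \(C_{p}\) is closed in \(T_{p}M\setminus\{0\}\), sharp (it contains no vector together with its negative), convex, and has nonempty interior, namely the future directed timelike vectors, and since the cones do not vary with \(p\) the corresponding bundle conditions hold as well.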
Note that for a closed or proper cone structure it is _not_ sufficient to interpret these notions fiberwise, i.e., that each cone \(C_{p}\) is closed or proper! Instead, these notions require the use of the topology of the cone bundle. Higher regularity generally helps. For many properties, approximation by wider and more regular cones is sufficient. We write \(C\prec C^{\prime}\) if \(C\subseteq\operatorname{int}C^{\prime}\) and \(C\preccurlyeq C^{\prime}\) if \(C\subseteq C^{\prime}\). **Proposition 4.3** ([37, Proposition 2.6]).: _For a \(C^{0}\) proper cone structure \((\operatorname{int}C)_{p}=\operatorname{int}C_{p}\) for every \(p\)._

### Causality and time functions

By using the differential inclusions \[\dot{\gamma}(t)\in C_{\gamma(t)},\qquad\dot{\gamma}(t)\in(\operatorname{int}C)_{\gamma(t)}\] on a closed or proper cone structure \((M,C)\) one can define a chronological relation \(I^{+}\) and a causal relation \(J^{+}\) on \((M,C)\) that resemble the corresponding Lorentzian relations (cf. Section 2.1). In addition to recalling the notation of [37, Section 2.1] we add some elements. For various reasons it is convenient and sufficient to work within the class of locally absolutely continuous (or locally Lipschitz) paths [37, 18]. **Definition 4.4**.: Let \((M,C)\) be a closed cone structure. An element \(v\in C_{p}\subseteq T_{p}M\) is called a _future directed causal vector_. An element \(w\in(\operatorname{int}C)_{p}\subseteq T_{p}M\) is called a _future directed timelike vector_. A vector \(z\in(\partial C)_{p}=C_{p}\setminus(\operatorname{int}C)_{p}\) is called _future directed lightlike_. A vector \(v\) is called _past directed_ causal/timelike/lightlike if \(-v\) is future directed causal/timelike/lightlike. A _future directed causal curve_ is the image of an absolutely continuous solution to the differential inclusion \(\dot{\gamma}(t)\in C_{\gamma(t)}\). A _future directed timelike curve_ is the image of a (piecewise) \(C^{1}\) solution of \(\dot{\gamma}(t)\in(\operatorname{int}C)_{\gamma(t)}\). The definitions for past directed curves are analogous. Based on Definition 4.4 one proceeds as in Section 2.1 to define the _causal relation_ \(J^{+}\) by \[p\leq q\Longleftrightarrow\exists\text{ future directed causal curve from $p$ to $q$, or $p=q$,}\] with \(p<q\) if \(p\leq q\) and \(p\neq q\), and the _chronological relation_ \(I^{+}\) by \[p\ll q\Longleftrightarrow\exists\text{ future directed timelike curve from $p$ to $q$.}\] We also write \[J^{+}(p) =\{q\in M\,;\,p\leq q\},\] \[I^{+}(p) =\{q\in M\,;\,p\ll q\},\] for the causal and chronological future (and correspondingly for the past) of \(p\), respectively. The usual properties for \(I^{+}\) and \(J^{+}\) follow. **Proposition 4.5** ([37, Proposition 2.8]).: _Let \((M,C)\) be a closed cone structure. The causal relation \(J^{\pm}\) is transitive and reflexive, i.e., a preorder. The chronological relation \(I^{\pm}\) is transitive and contained in \(J^{\pm}\). If \((M,C)\) is proper then \(I^{\pm}\) is open and nonempty._ Sketch of proof.: Clearly, \(I^{\pm}\subseteq J^{\pm}\). Reflexivity of \(J^{\pm}\) is clear by definition. Transitivity is clear for both relations since it just requires concatenating paths. That \(I^{\pm}\) is open follows from [37, Proposition 2.8] since the corresponding cone structure \(C\) is proper and one can then use an approximation by a \(C^{0}\) proper cone structure. 
Furthermore, that \(I^{\pm}\) is nonempty follows from the assumption that \((\operatorname{int}C)_{p}\neq\emptyset\) for all \(p\) and the fact that for every \(v\in(\operatorname{int}C)_{p}\) there is a timelike curve through \(p\) with velocity \(v\) [37, Theorem 2.2]. **Corollary 4.6**.: _A closed manifold with proper cone structure \((M,C)\) admits closed timelike curves._ Proof.: The collection \(\{I^{+}(p)\,;\,p\in M\}\) is an open cover of \(M\) by Proposition 4.5. Since \(M\) is compact there exists a finite subcover \(\{I^{+}(p_{i})\,;\,i=1,\dots,m\}\). Suppose \(p_{i}\not\in I^{+}(p_{i})\) for all \(i\). Then \(p_{1}\in I^{+}(p_{i_{1}})\) for some \(p_{i_{1}}\neq p_{1}\) and there is a future directed timelike curve from \(p_{i_{1}}\) to \(p_{1}\), and subsequently from \(p_{i_{2}}\) to \(p_{i_{1}}\), \(i_{1}\neq i_{2}\), etc. Since only finitely many \(p_{i}\) exist, after at most \(m+1\) steps a repetition occurs, and hence there exists a closed timelike curve. In order to extend the notion of null distance to semi-Riemannian manifolds, we recall that time and temporal functions for closed cone structures are defined in the same way as in Definition 2.1. One can also define locally anti-Lipschitz time functions in the same way as in Theorem 2.4. See also [37, Section 2.2] for more notions. As in Lorentzian geometry the following result is obvious. **Lemma 4.7**.: _Let \((M,C)\) be a closed cone structure and \(\tau\colon M\to\mathbb{R}\) a time function. Then \(M\) is causal, i.e., there exist no closed future/past directed causal curves (or, equivalently, \(J^{+}\) is antisymmetric)._ To prove the existence of a time function one needs to work significantly harder. We show only one direction which is based on the intrinsic notion of the \(K^{+}\)-relation of Sorkin and Woolgar [42] and the very neat and clean proof of Minguzzi [35, Theorem 7] that \(K\)-causality is equivalent to the existence of a time function for spacetimes. Everything extends to closed cone structures (see [37, Theorem 2.30], and [22, Theorem 1.1] for the partial earlier result). **Proposition 4.8**.: _Let \((M,C)\) be a closed cone structure. The following are equivalent:_ (i) \((M,C)\) _is_ \(K\)-causal_, i.e., the smallest closed and transitive relation_ \(K^{+}\) _containing_ \(J^{+}\) _is antisymmetric,_ (ii) \((M,C)\) _is_ stably causal_, i.e., there is a_ \(C^{0}\) _proper cone structure_ \(C^{\prime}\) _such that_ \(C\prec C^{\prime}\) _which is causal,_ (iii) _there exists a time function on_ \(M\)_,_ (iv) _there exists a smooth temporal function on_ \(M\)_._ Proof.: We only prove (i)\(\Longrightarrow\)(iii) here. The rest of the proof (and more) can be found in [37, Sections 3.2-3.6]. (i)\(\Longrightarrow\)(iii) Since \(K^{+}\) is a closed preorder (by definition, \(K^{+}\) is closed and transitive, and since \(J^{+}\) is reflexive, so is \(K^{+}\)) on a second countable locally compact space \(M\), by Levin's Theorem [33] (see [35, Theorem 3] for the explicit statement) there exists a continuous utility function \(\tau\colon M\to\mathbb{R}\), i.e., \[p\sim_{K^{+}}q\Longrightarrow\tau(p)=\tau(q),\qquad\text{and}\qquad p<_{K^{+}} q\Longrightarrow\tau(p)<\tau(q).\] Since \(K^{+}\) is antisymmetric, \(\sim_{K^{+}}\) is just the equality case. Moreover, the relation \(<_{K^{+}}\) is obtained from \(K^{+}\) by removing the diagonal \(\Delta\). Since \[p<q\Longrightarrow(p,q)\in K^{+}\setminus\Delta\Longrightarrow\tau(p)<\tau(q),\] every \(K^{+}\)-utility in a \(K\)-causal closed cone structure is a time function. 
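A simple illustration of Corollary 4.6 (and of why we must leave the closed setting): on the torus \(\mathbb{T}^{2}=\mathbb{R}^{2}/\mathbb{Z}^{2}\) the flat Lorentzian metric \(-dt^{2}+dx^{2}\) induces the constant proper cone structure of Minkowski space, and the projection of the line \(s\mapsto(s,0)\) is a closed timelike curve, so by Lemma 4.7 no time function can exist on \(\mathbb{T}^{2}\).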
Several important characterizations of global hyperbolicity are also known for closed and proper cone structures as well as the stability of this notion. For continuous cone structures these results were obtained in the pioneering work of Fathi and Siconolfi [22]. We only state the results that we will use and refer to the literature [10, 11, 21, 22, 37] for proofs. **Definition 4.9**.: A closed cone structure \((M,C)\) is called _globally hyperbolic_ if it is causal and all causal diamonds \(J^{+}(p)\cap J^{-}(q)\), \(p,q\in M\), are compact. **Theorem 4.10**.: _Let \((M,C)\) be a closed cone structure. Then the following properties are equivalent:_ (i) \((M,C)\) _is globally hyperbolic,_ (ii) _there exists a_ Cauchy time function_, i.e., a time function_ \(\tau\colon M\to\mathbb{R}\) _such that for every inextendible future/past directed causal curve_ \(\gamma\) _we have_ \((\tau\circ\gamma)(\mathbb{R})=\mathbb{R}\)_,_ (iii) _there exists a smooth_ completely uniform temporal function _(automatically Cauchy) on_ \(M\)_, i.e., a temporal function_ \(\tau\colon M\to\mathbb{R}\) _for which there exists a complete Riemannian metric_ \(h\) _such that_ \(d\tau(v)\geq\|v\|_{h}\) _for all future directed causal vectors_ \(v\)_,_ (iv) _there exists a (stable)_ Cauchy hypersurface_, i.e., an acausal (no two points are connected by a future/past directed causal curve) topological hypersurface_ \(\Sigma\) _such that_ \(D(\Sigma)=D^{+}(\Sigma)\cup D^{-}(\Sigma)=M\)_, where_ \[D^{+}(\Sigma)=\{p\in M\,;\,\text{every inextendible past directed causal curve through }p\text{ intersects }\Sigma\}.\] _Moreover, if \((M,C)\) is proper then \(M\) is smoothly diffeomorphic to \(\mathbb{R}\times\Sigma\) (the projection to \(\mathbb{R}\) is a completely uniform temporal function), all Cauchy hypersurfaces are diffeomorphic to \(\Sigma\) and the fibers of the smooth projection to \(\Sigma\) are smooth timelike curves._ Theorem 4.10 for closed and proper cone structures is shown in Minguzzi [37, Theorem 2.45] based on the earlier works of Fathi and Siconolfi [22, Theorem 1.3] for (i)\(\Longleftrightarrow\)(ii) and of Bernard and Suhr [10, Theorem 3, Corollary 1.8] for (i)\(\Longleftrightarrow\)(iii) and the splitting. Note that in (iii) we use the recent terminology of Bernard and Suhr [11, Definition 1.2] rather than calling it \(h\)-steep as in [37, Section 2.2]. Being a closed cone structure does, of course, not imply that the causal relation is closed. But global hyperbolicity and properness suffice. **Lemma 4.11** ([37, Lemma 2.5, Proposition 2.19]).: _Let \((M,C)\) be a proper cone structure. Then compactness of causal diamonds implies that the causal relation is closed or, equivalently, that the sets \(J^{\pm}(p)\) are closed for all \(p\in M\)._ This result thus shows that global hyperbolicity is stronger than \(K\)-causality (because \(K^{+}=J^{+}\), which is antisymmetric if causal) in the framework of proper cone structures independently of the use of time functions. It is also clear that other notions on the causal ladder can be defined equally well for cone structures; however, none of those notions are relevant for us.

### The null distance for cone structures

Equipped with a causal structure and time function we are finally in a position to introduce a cone version of the null distance of Sormani and Vega [43]. The definitions of piecewise causal paths, null lengths, and null distance of Section 2.2 carry over verbatim to closed cone structures. 
For the corresponding results to hold, however, the properness of cone structures is crucial. **Lemma 4.12**.: _Let \((M,C)\) be a proper cone structure. Then there is a piecewise timelike (causal) curve between any two points of \(M\)._ Proof.: The proof is analogous to that of [43, Lemma 3.5] and relies on the fact that the chronological future/past sets are open (and nonempty) by Proposition 4.5. For the sake of completeness we recall the full argument. For each \(x\in M\) the sets \(I^{\pm}(x)\) are open (and nonempty) by Proposition 4.5 and hence \(\{I^{-}(x)\,;\,x\in M\}\) is an open cover of \(M\). Since \(M\) is connected there is a continuous path \(\alpha\colon[0,1]\to M\) between any two points \(p,q\in M\). Because \(\alpha([0,1])\) is compact it can be covered by finitely many sets \(I^{-}(x_{2i})\), \(i=0,1,\ldots,m\). Without loss of generality we may assume that \(p\in I^{-}(x_{0})\), \(q\in I^{-}(x_{2m})\) and \(I^{-}(x_{2i})\cap I^{-}(x_{2i+2})\neq\emptyset\). Since \(p\in I^{-}(x_{0})\) there is a future directed timelike curve \(\beta_{0}\) from \(p\) to \(x_{0}\). For all \(0\leq i<m\) fix a point \(x_{2i+1}\in I^{-}(x_{2i})\cap I^{-}(x_{2i+2})\neq\emptyset\). Then there is a past directed timelike curve \(\beta_{2i+1}\) from \(x_{2i+1}\) to \(x_{2i}\) and a future directed timelike curve \(\beta_{2i+2}\) from \(x_{2i+1}\) to \(x_{2i+2}\). The concatenation \(\beta=\beta_{0}\cdots\beta_{2m}\) is a piecewise timelike curve from \(p\) to \(q\). Lemma 4.12 immediately implies that \(\hat{d}_{\tau}\) is a pseudometric on \(M\), but even in Minkowski space additional assumptions on \(\tau\) are needed for \(\hat{d}_{\tau}\) to be a metric. The following result is the cone version of Theorem 2.4. **Theorem 4.13**.: _Let \((M,C)\) be a proper cone structure with a locally anti-Lipschitz time function \(\tau\colon M\to\mathbb{R}\). Then the null distance \(\hat{d}_{\tau}\) is a metric on \(M\) that induces the manifold topology._ Proof.: By Lemma 4.12, \(\hat{d}_{\tau}\colon M\times M\to[0,\infty)\) is finite, and it is clear that \(\hat{d}_{\tau}\) is symmetric and satisfies the triangle inequality. It thus remains to prove the distinguishing property, and that \(\hat{d}_{\tau}\) induces the manifold topology. Let \(x\in M\), \(h\) a Riemannian metric, and \(U\) a neighborhood of \(x\) such that the anti-Lipschitz condition \[p\leq q\Longrightarrow d_{h}(p,q)\leq\tau(q)-\tau(p)\] holds for all \(p,q\in U\). Suppose \(y\neq x\). Let \(V\) be an open neighborhood of \(x\) with compact closure \(\overline{V}\subseteq U\) and \(y\not\in\overline{V}\). By Lemma 4.12 there is a piecewise causal path \(\beta\colon[0,1]\to M\) from \(x\) to \(y\). Let \(z=\beta(s_{0})\in\partial V\) be the first point on \(\beta\) that meets \(\partial V\). Summing the anti-Lipschitz estimate over the causal pieces of \(\beta|_{[0,s_{0}]}\subseteq\overline{V}\subseteq U\) and using the triangle inequality for \(d_{h}\) yields \[\hat{L}_{\tau}(\beta)\geq\hat{L}_{\tau}(\beta|_{[0,s_{0}]})\geq d_{h}(x,z)\geq d_{h}(x,\partial V),\] and therefore \(\hat{d}_{\tau}(x,y)=\inf\hat{L}_{\tau}(\beta)\geq d_{h}(x,\partial V)>0\). Hence \(\hat{d}_{\tau}\) is definite. It remains to be shown that \(\hat{d}_{\tau}\) induces the manifold topology. As in [43, Proposition 3.14] one can show that the continuity of \(\tau\) naturally implies the continuity of \(\hat{d}_{\tau}\). That this proof carries over rests on the crucial fact that for proper cone structures \((\operatorname{int}C)_{p}\) is nonempty for all \(p\in M\) and that for every vector \(v\in(\operatorname{int}C)_{p}\) there is a timelike curve passing through \(p\) with velocity \(v\) (see the proof of Proposition 4.5). Hence the topology induced by \(\hat{d}_{\tau}\) is coarser than the manifold topology. 
As in [43, Proposition 3.15] it follows that it is also finer. Theorem 4.13 furthermore implies, as already established in [3, Theorem 1.1, Proposition 3.8] in the Lorentzian setting, that the length structure respects the manifold topology and that the null distance is an intrinsic metric.

### A completeness-compactness theorem for cone structures

As in Theorem 2.5 it follows that the null distances of "complete" time functions are complete. Time functions that satisfy this condition exist for globally hyperbolic cone structures by Theorem 4.10. Because both results are valid for proper cone structures we obtain a natural extension of a refined Lorentzian completeness-compactness result of Garcia-Heveling and the author [17, Theorem 4.2]. Theorem 1.6 is a simplified version of the following result. **Theorem 4.14**.: _Let \((M,C)\) be a proper cone structure._ (i) _Suppose_ \(\tau\) _is a time function such that the metric space_ \((M,\hat{d}_{\tau})\) _is complete. Then_ \(\tau\) _is a Cauchy time function and_ \((M,C)\) _is globally hyperbolic._ (ii) _Suppose_ \((M,C)\) _is globally hyperbolic. Then there exists a completely uniform (weak) temporal function_ \(\tau\colon M\to\mathbb{R}\)_, and for every such time function, the corresponding metric space_ \((M,\hat{d}_{\tau})\) _is complete._ Proof.: (i) It remains to be shown that \(\tau\) is a Cauchy time function; global hyperbolicity then follows from Theorem 4.10 (ii)\(\Longrightarrow\)(i). Suppose \(\tau\) is not a Cauchy time function. Then there exists, without loss of generality, an inextendible future directed causal curve \(\gamma\colon\mathbb{R}\to M\) such that \(A:=\sup(\tau\circ\gamma)<\infty\). Consider the sequence of points \(p_{n}=\gamma(n)\). For any \(n,m\in\mathbb{N}\) \[\hat{d}_{\tau}(p_{n},p_{m})=|\tau(p_{m})-\tau(p_{n})|.\] Since the sequence \((\tau(p_{n}))_{n}\) is strictly increasing and bounded above by \(A\), it converges and is therefore a Cauchy sequence in \(\mathbb{R}\). In other words, for every \(\varepsilon>0\) there is an \(N\in\mathbb{N}\) such that for all \(n,m\geq N\) we have \[\hat{d}_{\tau}(p_{n},p_{m})=|\tau(p_{m})-\tau(p_{n})|<\varepsilon,\] thus \((p_{n})_{n}\) itself is a Cauchy sequence in \((M,\hat{d}_{\tau})\) and therefore must converge to a point \(p\in M\) by completeness. Thus \(\gamma\) is future extendible, a contradiction to the assumption. Hence \(\tau\) must be a Cauchy time function. (ii) By Theorem 4.10 a completely uniform temporal function exists. As in [17, Theorem 4.2 (ii)] it follows from the cone version of Theorem 2.5 that \((M,\hat{d}_{\tau})\) is complete.

## 5. Semi-Riemannian spacetimes

Thanks to Theorem 1.3 we have seen that Theorem 1.2 is a conformal generalization of the metric Hopf-Rinow Theorem 1.1 (b)\(\Longleftrightarrow\)(c) to Lorentzian signature. A natural question that arises is whether one can iterate this procedure and also obtain a semi-Riemannian version of Theorem 1.2. In this section we show that the answer is yes. To this end we introduce semi-Riemannian spacetimes and initiate their study in Section 5.1. We provide several important examples and see that their existence is intimately tied to a deep and largely open problem in differential/algebraic topology. In Section 5.2 we answer the above question by showing that semi-Riemannian spacetimes admit continuous proper cone structures that are conformally invariant. Thus Theorem 1.6 yields a true conformal and semi-Riemannian version of the metric Hopf-Rinow Theorem. 
In Section 5.4 we finally investigate if and how closely (stable) causality as well as global hyperbolicity are related for products (with regard to dimension and signature) and find, among other things, that some but not all parts of Theorem 1.3 (i) can be recovered.

### Definition, existence, and examples

Recall from the introduction that we consider the following class of semi-Riemannian manifolds \((M,g)\) with signature \((n-\nu,\nu)\), which interpolates between all manifolds equipped with a metric of positive definite signature \((n,0)\) and parallelizable manifolds equipped with a metric of negative definite signature \((0,n)\). **Definition 5.1**.: Let \((M,g)\) be a semi-Riemannian manifold with constant index \(0\leq\nu\leq n=\dim M\). We say that \(M\) is _time frame orientable_ if it admits \(\nu\) continuous vector fields \(X_{i}\in\mathfrak{X}(M)\) that satisfy \(g(X_{i},X_{i})<0\) and are linearly independent on each tangent space \(T_{p}M\), \(p\in M\). If \((M,g)\) is time frame orientable and equipped with a fixed set of such vector fields \(X=\{X_{i}\,;\,i=1,\ldots,\nu\}\), we say that it is _time frame oriented_ and call \((M,g,X)\) a _semi-Riemannian spacetime_ or, more specifically, a \((n-\nu,\nu)\)_-spacetime_. In this section we discuss a vast list of examples and nonexamples of semi-Riemannian spacetimes and obtain some statements about the existence of such semi-Riemannian structures. We will see that there is a novel and interesting link of semi-Riemannian geometry to a challenging problem studied in differential and algebraic topology. By definition, any Riemannian manifold is trivially time frame orientable and a \((n,0)\)-spacetime with \(X=\emptyset\). Every time oriented Lorentzian manifold is a \((n-1,1)\)-spacetime, simply called _spacetime_ in Lorentzian Geometry (cf. Section 2). It is well-known that not every Lorentzian manifold is time orientable, and that not every manifold even admits a Lorentzian metric. Using homotopy theory Steenrod first characterized the existence of a semi-Riemannian metric on a given (closed) manifold by the existence of a corresponding continuous tangent subbundle. We recall his argument in modern terminology. **Theorem 5.2** ([44, Theorem 40.11]).: _Let \(M\) be a smooth manifold. The following are equivalent:_ (i) \(M\) _admits a semi-Riemannian metric of index_ \(\nu\)_,_ (ii) \(TM\) _admits a subbundle of rank_ \(\nu\)_._ Proof.: Let \(n=\dim M\) and \(0\leq\nu\leq n\). (i)\(\Longrightarrow\)(ii) Suppose \(g\) is a semi-Riemannian metric of index \(\nu\). This means that the tangent bundle \(TM\) is associated to a principal \(O(n-\nu,\nu)\) bundle by the standard representation of the group \(O(n-\nu,\nu)\). The subgroup \(O(n-\nu)\times O(\nu)\) is a maximal compact subgroup, and hence a deformation retract, of \(O(n-\nu,\nu)\). So \(TM\) can be associated to the product of the standard representations of \(O(n-\nu)\) and \(O(\nu)\). This implies that there are tangent subbundles \(\xi\) and \(\eta\) of ranks \(\nu\) and \(n-\nu\), respectively, satisfying \(TM=\xi\oplus\eta\). (ii)\(\Longrightarrow\)(i) Suppose \(\xi\) is a rank-\(\nu\) subbundle of \(TM\). Let \(h\) be any Riemannian metric on \(M\). 
Since \(M\) is paracompact \[\xi^{\perp}:=\bigsqcup_{p\in M}\{p\}\times\xi_{p}^{\perp}\] is a subbundle of \(TM\) of rank \(n-\nu\) (via the natural projection \(\xi^{\perp}\to M\)) such that \[TM=\xi\oplus\xi^{\perp}.\] A semi-Riemannian metric of index \(\nu\) is then given by setting \[g=-h|_{\xi}\oplus h|_{\xi^{\perp}}=h-2h|_{\xi\times\xi}.\qed\] In general, the frame bundle associated to the tangent subbundle (ii) does _not_ admit a global section, and thus the semi-Riemannian metric obtained in Theorem 5.2 may _not_ be time frame orientable. Interestingly, if \(\nu=1\) the _existence_ of a Lorentzian metric implies the _existence_ of a time oriented Lorentzian metric (the converse being trivially true). After having discussed topological constraints characterizing such an existence we show in Example 5.8 that the existence of a semi-Riemannian metric of index \(\nu\geq 2\) on \(M\) does _not_ imply the existence of a \((n-\nu,\nu)\)-spacetime structure on \(M\). We can, however, independently characterize the existence of a spacetime metric by the existence of a suitable global partial frame. Note that the condition of admitting such structures is invariant under \(C^{1}\) diffeomorphisms. **Theorem 5.3**.: _Let \(M\) be a smooth manifold of dimension \(n\). The following are equivalent:_ (i) \(M\) _admits a_ \((n-\nu,\nu)\)_-spacetime metric,_ (ii) \(M\) _is_ parallelizable of degree \(\nu\)_, i.e., there exist_ \(\nu\) _everywhere linearly independent continuous vector fields on_ \(M\) _(called a_ tangent \(\nu\)_-frame_)._ Proof.: (i)\(\Longrightarrow\)(ii) is trivial. (ii)\(\Longrightarrow\)(i) If the vector fields \(X_{1},\ldots,X_{\nu}\) are linearly independent on \(M\), then \(\xi_{p}=\operatorname{span}(X_{1}(p),\ldots,X_{\nu}(p))\) defines a rank-\(\nu\) subbundle of \(TM\). The semi-Riemannian metric \(g\) explicitly constructed via an arbitrary Riemannian metric \(h\) in the proof of Theorem 5.2 has the desired property that for all \(i=1,\ldots,\nu\) \[g(X_{i},X_{i})=-h(X_{i},X_{i})<0.\qed\] **Corollary 5.4**.: _Every \(n\)-dimensional parallelizable manifold admits a \((n-\nu,\nu)\)-spacetime structure for all \(0\leq\nu\leq n\)._ **Example 5.5** (Parallelizable manifolds).: The following classes of manifolds are well-known to be parallelizable and thus can be turned into \((n-\nu,\nu)\)-spacetimes for any \(\nu\): 1. \(\mathbb{R}^{n}\), \(\mathbb{C}^{n}\), and all other finite-dimensional vector spaces over \(\mathbb{R}\) (see Section 5.3 for more properties), 2. Lie groups (by choosing a basis at the identity and using group translations to move it around), 3. closed orientable \(3\)-manifolds (by a result of Stiefel [45, Satz 21], see also the recent "bare hands" proof in [8]), 4. \(\mathbb{S}^{1}\), \(\mathbb{S}^{3}\) (because \(\mathbb{S}^{3}=\operatorname{SU}(2)\) is a Lie group), \(\mathbb{S}^{7}\) (shown independently by Hirzebruch, Kervaire, and by Bott and Milnor in 1958), and no other spheres (see Adams' Theorem 5.10 below), 5. closed \(\pi\)-manifolds (manifolds with trivial normal bundle when embedded in high dimensional Euclidean space, a concept due to Whitehead) of dimension \(n\) are either parallelizable or have the same maximal \(\nu\) as the corresponding \(\mathbb{S}^{n}\) (this and related results are collected in [46, Theorem 11]), 6. products of parallelizable manifolds. Parallelizable manifolds are useful for obtaining general semi-Riemannian spacetimes via the following (warped) product construction. 
**Example 5.6** (Semi-Riemannian warped products).: Let \(M\) be an \(n\)-dimensional manifold equipped with a \((n-\nu,\nu)\)-spacetime structure. Suppose that \(\Sigma\) is an \(m\)-dimensional manifold, equipped with an arbitrary Riemannian metric \(\sigma\), and let \(f\colon M\to(0,\infty)\) be a smooth function. Then by Theorem 5.3 the warped product \[M\times_{f}\Sigma=(M\times\Sigma,g+f^{2}\sigma)\] is a \((m+n-\nu,\nu)\)-spacetime with the same time frame orienting vector fields (modulo pullback along \(\pi_{M}\)). Similarly, the warped product of \(M\) with another \((m-\rho,\rho)\)-spacetime \((N,k)\) is also a \((m+n-\rho-\nu,\rho+\nu)\)-spacetime. Of course, this construction can be iterated. Let us return to the question of existence of a \((n-\nu,\nu)\)-spacetime structure on a given manifold \(M\). The corresponding topological problem, i.e., that of characterizing the existence of a tangent \(\nu\)-frame in condition (ii) of Theorem 5.3, is well-known and still largely open since almost \(100\) years. We shall provide some necessary conditions and discuss some progress that has been made since, and recall some of the important notions that have been developed in this context. The most basic necessary condition for the existence of a tangent \(\nu\)-frame is an immediate consequence of the Poincare-Hopf Theorem [28] (an extension of the Hairy Ball Theorem of Poincare [39] for \(\mathbb{S}^{2}\) from 1885 and of Brouwer [14] for \(\mathbb{S}^{2n}\) from 1912) since we certainly demand continuity and nowhere vanishing of at least one vector field. The full characterization in the \(\nu=1\) case was shown in Markus [34, Theorem 3]. **Theorem 5.7**.: _Let \(M\) be a closed manifold that admits a \((n-\nu,\nu)\)-spacetime structure for \(\nu\geq 1\). Then \(\chi(M)=0\). If \(\nu=1\) then this condition is also sufficient._ If \(\nu=1\) one can furthermore show that the existence of Lorentzian metric on a given manifold implies the existence of a time oriented Lorentzian metric by moving to a double covering and applying Theorem 5.7 (see, for instance, [38, p. 149]). This is not the case for \(\nu\geq 2\) as the following simple example demonstrates (unless, for instance, \(M=\mathbb{S}^{n}\) and \(2\nu\leq n\)[44, Theorem 27.16]). **Example 5.8** (Manifold admitting a semi-Riemannian metric but no spacetime structure).: Consider \(M=\mathbb{S}^{2}\times\mathbb{S}^{2}\) and let \(\sigma\) denote the standard sphere metric on \(\mathbb{S}^{2}\). Then \(g=-\sigma\oplus\sigma\) is clearly a semi-Riemannian metric of signature \((2,2)\). Since the Euler characteristic is \(\chi(\mathbb{S}^{2}\times\mathbb{S}^{2})=\chi(\mathbb{S}^{2})^{2}=4\) it follows from Theorem 5.7 that \(M\) does not admit any semi-Riemannian spacetime structure of index \(\nu=1,2,3,4\). The refined notion of parallelizability of degree \(\nu\) used in Theorem 5.3 (ii) already appears in the seminal thesis of E. Stiefel [45] from 1935, supervised by H. Hopf, which he opens precisely by posing the following question in the closed case: When does an \(n\)-dimensional manifold \(M\) admit a tangent \(\nu\)-frame for \(1\leq\nu\leq n\)? Neither Stiefel nor followers did so far succeed in fully characterizing the existence of a tangent \(\nu\)-frame for \(\nu>1\). 
But already Stiefel obtained landmark results on the existence of tangent \(\nu\)-frames with certain types of singularities (points where the vector fields become linearly dependent or discontinuous) and introduced a sequence of obstruction classes in cohomology (independently and shortly afterwards Whitney [48] studied the analogous classes for sphere bundles, the modern axiomatic definition of the Stiefel-Whitney classes for vector bundles is due to Hirzebruch). Although we are not interested in understanding the structure of singularities in this work, Stiefel's and subsequent results could provide a very fruitful starting point to pursue in the future. The reason for this is that we already know that understanding degeneracies in Lorentzian manifolds is related to finding an admissible notion for topology change of acausal slices, a topic that is highly relevant for quantum gravity and nonsmooth Lorentzian geometry (see, for instance, [13, 16, 24, 29]). One of the most basic results for the existence of a tangent \(\nu\)-frame without singularities can in modern terminology and thanks to Theorem 5.3 be formulated as follows. **Theorem 5.9** (Stiefel [45, Satz A'\({}_{m}\)]).: _If a smooth manifold \(M\) admits a \((n-\nu,\nu)\)-spacetime structure then the top \(\nu\) Stiefel-Whitney classes of the tangent bundle vanish, i.e., \(w_{n-\nu+1}(TM)=\ldots=w_{n}(TM)=0\)._ The necessary condition of Theorem 5.9 is very weak and far from being sufficient. For instance, any unit sphere \(\mathbb{S}^{n}\) satisfies \(w_{1}(T\mathbb{S}^{n})=\ldots=w_{n}(T\mathbb{S}^{n})=0\) because \(w_{0}(T\mathbb{S}^{n})=1\) and the total Stiefel-Whitney class is \(w(T\mathbb{S}^{n})=1\)[30, Chapter 16, Proposition 4.4], but we have already mentioned in Example 5.5 that only spheres of dimension \(n=1,3,7\) are parallelizable. In 1962 Adams managed to settle the question on the maximal degree of parallelizability for spheres, based on earlier necessary criteria due to the Hurwitz-Radon-Eckman Theorem in linear algebra and James. **Theorem 5.10** (Adams [1] and James [31]).: _The \(n\)-dimensional unit sphere \(\mathbb{S}^{n}\subseteq\mathbb{R}^{n+1}\) admits exactly \(\nu(n)=2^{c}+8d-1\) linearly independent vector fields, where \(c\) and \(d\) are given implicitly by \(n+1=(2a+1)2^{b}\) and \(b=c+4d\) for \(a,b,c,d\in\mathbb{Z}\), \(0\leq c\leq 3\)._ Much less is known for general manifolds. Let us briefly recall the progress that has been made beyond Theorem 5.9 since the 1930s. For certain classes of manifolds with particular dimensional restrictions \(n\) (often \(\operatorname{mod}4\), spin, etc.) and for small \(\nu\) knowledge about characteristic classes provides important necessary conditions. Thomas [46] reviews many classical results from the 1950s and 1960s and discusses topological invariants that govern whether one can turn a tangent \(\nu\)-field with finite singularities into a regular one without singularities and collects restrictions and obstructions for the existence of a tangent 2-field with singularities. See also the later work of Atiyah and Dupont [4, 20] from the 1970s, using the Atiyah-Singer Index Theorem, which includes particularly strong results for the existence of tangent 2- and 3-fields, and recent results [12] for \(\nu\geq 4\) in the general and spin case. 
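To make the count in Theorem 5.10 tangible, the following short sketch (ours, in Python; not part of the cited sources) evaluates \(\nu(n)\) and recovers that \(\nu(n)=n\) holds exactly for the parallelizable spheres \(\mathbb{S}^{1}\), \(\mathbb{S}^{3}\), \(\mathbb{S}^{7}\) of Example 5.5 (iv), while, e.g., \(\nu(2)=0\) reflects the Hairy Ball Theorem.

```python
def nu(n):
    # Adams' number of linearly independent vector fields on S^n:
    # write n + 1 = (2a + 1) * 2^b and b = c + 4d with 0 <= c <= 3,
    # then nu(n) = 2^c + 8d - 1.
    m, b = n + 1, 0
    while m % 2 == 0:
        m //= 2
        b += 1
    c, d = b % 4, b // 4
    return 2 ** c + 8 * d - 1

for n in range(1, 16):
    print(n, nu(n), nu(n) == n)  # nu(n) == n only for n = 1, 3, 7
```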
Despite the huge amount of fascinating and deep mathematical work that has been carried out on the existence of tangent \(\nu\)-frames and singularities in the _closed_ manifold setting the equivalent problem for and also the topological structure of open (noncompact without boundary) manifolds is much less understood. For us the noncompact situation is significantly more important and in the hope to spark some interest in this problem we collect some known constructions and basic ideas that could be useful for analyzing the noncompact problem. _Remark 5.11_ (Open manifolds).: From a semi-Riemannian perspective the situation is better if compactness is dropped. On an open manifold there _always_ exists a nowhere vanishing continuous vector field and thus a Lorentzian spacetime structure (mentioned already in [34]) because one can simply "sweep out" the zeroes of a vector field with isolated zeroes to infinity, for instance, by means of a suitable compact exhaustion and induction. It is not immediately obvious, though likely, that a similar procedure can be applied to obtain several linearly independent continuous vector fields. One may also be able to employ the homotopy equivalence property of deformation retractions: Smooth manifolds have the homotopy type of CW complexes. Whitehead showed that for open \(n\)-dimensional manifolds \(M\) there is a subcomplex of dimension \(k\leq n-1\) onto which \(M\) deformation retracts. By Theorem 5.9 this is a necessary condition for having \(n-k\) linearly independent vector fields. For more recent and sophisticated topological tools for analyzing noncompact manifolds see recent expository articles, for instance, [25]. We believe that our setup could ultimately also shed some light on necessary conditions for the existence of such a tangent \(\nu\)-frame because the causal structure (which we explore in Sections 5.2 and 5.4) and topology for \((n-\nu,\nu)\)-spacetimes are beautifully intertwined and are able to capture global features of a manifold. This could then, due to the just mentioned relations between the closed and open case, also lead to new results in the closed case. Let us mention a specific result where the use of a spacetime structure leads to new insights: We will see in Section 5.2 that for \((n-\nu,\nu)\)-spacetimes we can apply the results of Section 4. By Theorem 5.16 and Corollary 4.6 there _always_ exist closed timelike (thus directly related to the chosen tangent \(\nu\)-frame!) curves on closed manifolds that admit a semi-Riemannian spacetime structure. ### Semi-Riemannian spacetimes admit conformal proper cone structures Given a spacetime structure on a manifold there are several ways to distinguish between future and past, and study causality. We pick a natural way to do so and show that it defines to proper cone structures. **Definition 5.12**.: Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime with time frame orientation given by the vector fields \(X_{1},\ldots,X_{\nu}\). A vector \(v\in T_{p}M\setminus\{0\}\) is said to be * _future directed causal_ if \[g_{p}(v,v)\leq 0,\quad\text{and}\quad g_{p}(v,X_{i}(p))\leq 0,\ i=1,\ldots,\nu,\] (5.1) * _future directed timelike_ if \[g_{p}(v,v)<0,\quad\text{and}\quad g_{p}(v,X_{i}(p))<0,\ i=1,\ldots,\nu.\] (5.2) Similarly, we call a causal or timelike vector \(v\in T_{p}M\)_past directed_ if (5.1) holds with \(\geq\) or \(>\) in the second inequalities, respectively. 
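As a pointwise illustration of Definition 5.12 (a small sketch of ours in Python, not part of the original text), one can classify a tangent vector once the metric at \(p\) is given as a symmetric matrix and a time frame is fixed; the diagonal example of index \(\nu=2\) anticipates the flat model of Section 5.3.

```python
import numpy as np

def classify(g, frame, v, tol=1e-12):
    # g: metric at p as a symmetric matrix; frame: list of time frame vectors X_i(p)
    q = v @ g @ v
    inner = [v @ g @ X for X in frame]
    if q < -tol and all(s < -tol for s in inner):
        return "future directed timelike"
    if q <= tol and all(s <= tol for s in inner):
        return "future directed causal"
    if q <= tol and all(s >= -tol for s in inner):
        return "past directed causal"
    return "neither"

g = np.diag([-1.0, -1.0, 1.0])  # flat metric of signature (1,2)
frame = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(classify(g, frame, np.array([1.0, 1.0, 0.5])))  # future directed timelike
print(classify(g, frame, np.array([1.0, 0.0, 0.0])))  # future directed causal, not timelike
```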
**Lemma 5.13**.: _Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime with time frame orientation given by the vector fields \(X_{1},\ldots,X_{\nu}\). If \(v\in T_{p}M\setminus\{0\}\) is a future directed causal vector then there exists at least one \(j\in\{1,\ldots,\nu\}\) such that_ \[g_{p}(v,X_{j}(p))<0.\] Proof.: Suppose there is a vector \(v\in T_{p}M\setminus\{0\}\) that satisfies \[g_{p}(v,X_{i}(p))=0,\qquad\text{for all }i=1,\ldots,\nu. \tag{5.3}\] The subspace \(\xi_{p}=\operatorname{span}(X_{1},\ldots,X_{\nu})\) is a negative definite subspace of \((M,g)\) and \(T_{p}M=\xi_{p}\oplus\xi_{p}^{\perp}\) with \(g|_{\xi_{p}^{\perp}}\) being positive definite. Thus can write \(v\) uniquely as a sum of vectors \(u\in\xi_{p}\) and \(w\in\xi_{p}^{\perp}\). Assumption (5.3) implies that \(g(u,u)=0\), i.e., \(u=0\). If \(v\) also satisfies \(g(v,v)\leq 0\) then \[0\geq g_{p}(v,v)=g_{p}(u,u)+g_{p}(w,w)=g_{p}(w,w)\geq 0,\] i.e., also \(w=0\), a contradiction to the assumption that \(v\neq 0\). Thus there must be at least one index \(j\in\{1,\ldots,\nu\}\) for which \(g_{p}(v,X_{i}(p))<0\). _Remark 5.14_ (Alternative definition).: If \((M,g)\) is a semi-Riemannian spacetime we know from Theorem 5.3 that for each \(p\in M\) we can write \[T_{p}M=\xi_{p}\oplus\xi_{p}^{\perp},\] where \(\xi_{p}=\operatorname{span}(X_{1}(p),\ldots,X_{\nu}(p))\). Thus we can write every tangent vector \(v\in T_{p}M\setminus\{0\}\) as \(v=u+w\) with unique \(u\in\xi_{p}\) and \(w\in\xi_{p}^{\perp}\). Suppose \(u=\sum_{i=1}^{\nu}u^{i}X_{i}(p)\) is the unique representation in terms of the basis of \(\xi_{p}\). Then we could also define \(v\) to be _future directed_ if \(u^{i}\geq 0\) for all \(i=1,\ldots,\nu\). If the vector fields \(X_{i}\) are orthogonal this definition agrees with Definition 5.12. By the Gram-Schmidt orthogonalization procedure we can indeed always assume orthonormality of the vector fields \(X_{1},\ldots,X_{\nu}\) but we chose not to require this condition in our setup. _Remark 5.15_ (Lorentzian vs. semi-Riemannian).: Unlike in the Lorentzian case, not every vector \(v\neq 0\) satisfying \(g_{p}(v,v)\leq 0\) is _either_ future _or_ past directed if \(\nu>1\). In fact, the set of "undirected causal" vectors is actually bigger than the directed ones. Another way of seeing this is that in the Lorentzian setting a different choice of time orientation defining vector field at most exchanges the future with the past, while in the semi-Riemannian setting it generally leads to an entirely different causal structure and a particular choice of tangent \(\nu\)-frame is more intimately linked to the topology of \(M\). Nonetheless, in analogy to the Lorentzian setting, we show that the definition of future/past directed causal vectors defines a proper cone structure on \(M\). The regularity of the metric tensor and the vector fields controls the regularity of the cone structure and ensures that it is topologically well behaved. We restate Theorem 1.5 of the introduction in a more refined way as follows. **Theorem 5.16**.: _Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime with \(0<\nu\leq n\). The map_ \[p\mapsto C_{p}:=\{v\in T_{p}M\setminus\{0\}\,;\,v\text{ is future directed causal}\}\] _defines a continuous proper cone structure on \(M\) with_ \[(\operatorname{int}C)_{p}=\operatorname{int}C_{p}=\{v\in T_{p}M\setminus\{0 \}\,;\,v\text{ is future directed timelike}\}.\] Before we prove Theorem 5.16 a few remarks are in order. 
Let us first recall the definition of continuity of set-valued maps that is used in the statement (see, for instance, Aubin and Cellina [6, Chapter 1]). **Definition 5.17**.: Let \(F\colon X\to Y\) be a set-valued map. Then we say that 1. \(F\) is _upper semicontinuous_ at \(x_{0}\in X\) if for any neighborhood \(V\) of \(F(x_{0})\), there exists a neighborhood \(U\) of \(x_{0}\) such that \(F(U)\subseteq V\). 2. \(F\) is _lower semicontinuous_ if for any generalized sequence of elements \(x_{\mu}\) converging to \(x_{0}\) and for any \(y_{0}\in F(x_{0})\), there exists a sequence of elements \(y_{\mu}\in F(x_{\mu})\) that converges to \(y_{0}\). The map \(F\) is _continuous_ if it is both upper and lower semicontinuous. For cone structures \(p\mapsto C_{p}\) on a manifold \(M\) these properties can be checked on coordinate patches (see [37, p. 12]): At every \(p\in M\) we have a local coordinate system \(\{x^{\alpha}\}\) on a neighborhood \(U\) of \(p\). Via the local trivialization of the tangent bundle \(TU\) we obtain a splitting \(U\times\mathbb{R}^{n}\) and can compare sets over different tangent spaces this way. The notions of upper and lower semicontinuity are then applied to the set-valued map \[F(p)=[C_{p}\cup\{0\}]\cap\mathbb{B}^{n},\] where \(\mathbb{B}^{n}\) denotes the closed unit ball of \(\mathbb{R}^{n}\) (instead one could also work with the sphere subbundle and compare the set values \(C_{p}\cap\mathbb{S}^{n}\) using the Hausdorff distance on \(\mathbb{S}^{n}\); with this approach one can also say when \(F\) is locally Lipschitz). Furthermore, recall that properness of a cone structure requires a nonempty interior bundle. We thus construct a vector in the interior of the cone explicitly by making use of the following result. **Lemma 5.18**.: _Let \(V\) be a finite-dimensional inner product space, and let \((b_{1},\dots,b_{m})\) be a basis for \(V\). Then there exists a vector \(v\in V\setminus\{0\}\) such that_ \[\langle v,b_{i}\rangle>0,\qquad i=1,\dots,m. \tag{5.4}\] Proof.: We proceed by induction in \(m\). If \(m=1\), then we can simply pick \(v=b_{1}\). Suppose we have already shown the result for inner product spaces of dimension \(m-1\) and \(\tilde{v}\) is the vector satisfying (5.4) for \(\widetilde{V}=\operatorname{span}(b_{1},\dots,b_{m-1})\). By the Gram-Schmidt orthogonalization procedure there exists an orthogonal basis \((e_{1},\dots,e_{m})\) of \(V\) such that \[\operatorname{span}(b_{1},\dots,b_{k})=\operatorname{span}(e_{1},\dots,e_{k}),\qquad k=1,\dots,m.\] In particular, this holds for \(k=m-1\), and therefore \[\langle b_{m},e_{m}\rangle\neq 0,\] since \(b_{m}\) is a basis vector and therefore not in \(\widetilde{V}=\{e_{m}\}^{\perp}=\operatorname{span}(b_{1},\dots,b_{m-1})\). Choose \(a\in\mathbb{R}\) such that \[\langle b_{m},e_{m}\rangle a>-\langle\tilde{v},b_{m}\rangle\] and define \[v=\tilde{v}+ae_{m}.\] Clearly \(v\neq 0\). We show that \(v\) satisfies (5.4). For any basis vector \(b_{k}\) with \(k=1,\dots,m-1\) we have \(b_{k}\in\widetilde{V}=\{e_{m}\}^{\perp}\), and hence (5.4) follows directly from the induction hypothesis since \[\langle v,b_{k}\rangle=\langle\tilde{v},b_{k}\rangle>0.\] For \(b_{m}\) we have by choice of \(a\) also \[\langle v,b_{m}\rangle=\langle\tilde{v},b_{m}\rangle+a\langle b_{m},e_{m}\rangle>0,\] which completes the inductive step from \(m-1\) to \(m\).

Proof of Theorem 5.16.: Suppose the time frame orientation of \((M,g)\) is given by the continuous vector fields \(X_{1},\ldots,X_{\nu}\in\mathfrak{X}(M)\). 
We first investigate the properties of the cones pointwise. Let \(p\in M\). By Definition 5.12 \[C_{p}=G_{p}\cap\left(\bigcap_{i=1}^{\nu}C_{p}^{i}\right),\] for the sets \[G_{p}:=\{v\in T_{p}M\setminus\{0\}\,;\,g_{p}(v,v)\leq 0\}\] and \[C_{p}^{i}:=\{v\in T_{p}M\setminus\{0\}\,;\,g_{p}(v,X_{i}(p))\leq 0\}.\] Bilinearity of \(g\) implies that for each \(p\in M\) the cone \(C_{p}\) is a sharp convex cone. Furthermore, \(C_{p}\) is closed in \(T_{p}M\setminus\{0\}\) as finite intersection of closed sets in the subspace topology of \(T_{p}M\setminus\{0\}\). Each cone \(C_{p}\) is also proper because for each \(p\in M\), by Lemma 5.18 (in the negative definite case), there exists \(v\in\operatorname{span}(X_{1}(p),\ldots,X_{\nu}(p))\setminus\{0\}\subseteq T _{p}M\) such that \[g_{p}(v,X_{i}(p))<0,\qquad i=1,\ldots,\nu.\] Since \(g_{p}(v,v)<0\) we also know that \(v\) is timelike (and not causal) and therefore \[v\in\operatorname{int}G_{p}\cap\left(\bigcap_{i=1}^{\nu}\operatorname{int}C_ {p}^{i}\right)=\operatorname{int}C_{p}\neq\emptyset.\] Thus \(C\) is a cone structure, and it remains to be shown that \(C\) is closed and proper not only pointwise but as subbundle of \(TM\setminus(TM)_{0}\). Our approach is similar to Minguzzi [37, Proposition 2.4] in the Lorentzian case: We first prove closedness which by [37, Proposition 2.3] immediately also implies upper semicontinuity of the multivalued map \(p\mapsto C_{p}\). In a second step, we show lower semicontinuity directly and by continuity conclude properness using [37, Proposition 2.5]. Suppose \(p\in M\) and \(U\) is a chart neighborhood of \(p\). The cone bundle is characterized by nonpositivity of the function \[f(q,w):=\max\{g_{\alpha\beta}(q)w^{\alpha}w^{\beta},g_{\alpha\beta}(q)X_{1}^{ \alpha}(p)w^{\beta},\ldots,g_{\alpha\beta}(q)X_{\nu}^{\alpha}(p)w^{\beta}\},\] which is continuous on the associated local trivialization \(U\times\mathbb{R}^{n}\) of the tangent bundle. The interior is characterized by negativity of \(f\). Note that \[(C\cup(TM)_{0})\cap TU=\{(q,w)\,;\,q\in U,w\in T_{q}M,\text{ and }f(q,w)\leq 0\}\] is closed in the topology of \(TU\) by continuity of \(f\). This yields that \(C\cup(TM)_{0}\) is closed in \(TM\), hence \(C\) is a closed cone structure. Because we have already shown that each \(C_{p}\) is a closed sharp convex nonempty cone, by [37, Proposition 2.3], being a closed cone structure is equivalent to upper semicontinuity of \(p\mapsto C_{p}\). For lower semicontinuity, since \(\overline{\operatorname{int}C_{p}}=C_{p}\cup\{0\}\), it is sufficient to consider the set-valued map \(p\mapsto\operatorname{int}C_{p}\). Suppose \((p,v)\in\operatorname{int}C_{p}\). Then there is an \(\varepsilon<0\) such that \(f(p,v)<\varepsilon<0\). If now \(p_{n}\to p\), then there is an \(N\in\mathbb{N}\) such that for all \(n\geq N\) we have \(f(p_{n},v)<0\) as well. Hence \((p_{n},v)\in\operatorname{int}C_{p_{n}}\). For the first \(N-1\) elements \(p_{n}\) of the sequence one can also find vectors \(v_{n}\in\operatorname{int}C_{p_{n}}\) by Lemma 5.18. Furthermore, \(v_{n}\) converges to \(v\) since they agree for all \(n\geq N\), which ensures the lower semicontinuity of the cone structure. Having shown lower and upper semicontinuity for the multivalued map \(p\mapsto C_{p}\) we conclude that it is continuous. By [37, Proposition 2.5] continuity implies that \(C\) is a proper cone structure. 
By [37, Proposition 2.6] continuity and properness furthermore imply \((\operatorname{int}C)_{p}=\operatorname{int}C_{p}\) for all \(p\in M\). Thanks to Theorem 5.16 we can apply the tools of the theory of cone structures from Section 4 to semi-Riemannian spacetimes. For instance, the causal and chronological relations behave as desired and we can use the most important tools of Lorentzian causality theory in the semi-Riemannian context. The steps on the causal ladder and, in particular, global hyperbolicity can be defined directly based on Definition 5.12 or via the theory of cone structures, and Theorems 4.10 and 1.6 apply. We obtain Corollary 1.7 directly from Theorems 4.14 and 5.3. That we actually gain information in our semi-Riemannian setup, in contrast to general cone structures, is evident from its conformal invariance. **Theorem 5.19**.: _Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime, \(0<\nu\leq n\), with future causal cone defined pointwise as in Definition 5.12. The notions of causal relation \(J^{+}\) and the chronological relation \(I^{+}\) (and the steps on the causal ladder) induced by the corresponding proper cone structure are conformally invariant, and so are time functions \(\tau\) and the null distance \(d_{\tau}\)._ _Remark 5.20_ (\(\nu=0\)).: Although one can successfully interpret Riemannian manifolds as \((n,0)\)-spacetimes it is--as expected--not very insightful to study their causal structure. Since the cone \(C=\emptyset\) is degenerate and thus all \(J^{+}(p)=\{p\}\), it would mean that all Riemannian manifolds are globally hyperbolic and every continuous function is a time function (if one were to generalize this definition at all; the theory of cone structures often disregards degenerate cones). Finally, note that the stability of global hyperbolicity (as shown for continuous proper cone structures by Fathi and Siconolfi [22, Theorem 1.2] and later extended by others, see also [37, Theorem 2.39]) can in the semi-Riemannian context be interpreted not only in terms of a nearby metric tensor \(\tilde{g}\) but also in terms of a nearby time frame orientation. We show a result via the second approach. **Lemma 5.21**.: _Let \((M,g)\) be a globally hyperbolic \((n-\nu,\nu)\)-spacetime, \(0<\nu\leq n\), with time frame orientation defining vector fields \(X_{1},\ldots,X_{\nu}\) and corresponding proper cone structure \(C\). Then there exist time frame orientation defining vector fields \(\widetilde{X}_{1},\ldots,\widetilde{X}_{\nu}\) on \((M,g)\) such that the corresponding cone structure \(\widetilde{C}\) satisfies \(C\preccurlyeq\operatorname{int}\widetilde{C}\cup(\partial\widetilde{C}\cap\{g(v,v)=0\})\) and \(\widetilde{C}\) is also globally hyperbolic._ Proof.: Let \(X\) be the \(g\)-unital version of the vector field \(X_{1}+\ldots+X_{\nu}\). Then for any \(\varepsilon\in(0,1)\) the vector fields \[\widetilde{X_{i}}=X_{i}+\varepsilon X,\quad i=1,\ldots,\nu, \tag{5.5}\] define a time orientation such that for each future directed causal vector \(v\in C_{p}\) in \(M\) we still have \[g(v,v)\leq 0,\qquad g(v,\widetilde{X}_{i})<0,\qquad i=1,\dots,\nu.\] In particular, \(v\in\operatorname{int}\widetilde{C}\cup(\partial\widetilde{C}\cap\{g(v,v)=0\})\), where \(\widetilde{C}\) denotes the cone structure corresponding to the time frame orientation \(\widetilde{X}_{1},\dots,\widetilde{X}_{\nu}\). It remains to be shown that \(\widetilde{C}\) can be adapted to be globally hyperbolic. 
Since \((M,C)\) is globally hyperbolic and globally hyperbolic cone structures are stable there exists a globally hyperbolic locally Lipschitz proper cone structure \(C^{\prime}\) such that \(C\prec C^{\prime}\) (see, for instance, [37, Theorem 2.39]). Since \(C^{\prime}\), \(C\) and \(\widetilde{C}\) are all continuous cone structures we can continuously choose an \(\varepsilon(p)>0\) on \(M\) defining \(\widetilde{C}\) as above by (5.5) such that \(C\preccurlyeq\widetilde{C}\prec C^{\prime}\). Since \(C^{\prime}\) is globally hyperbolic it admits a Cauchy time function \(\tau\). Clearly, \(\tau\) is also a Cauchy time function for \(\widetilde{C}\) and hence it is globally hyperbolic by Theorem 4.10. ### Semi-Riemannian vector spaces In this section we are concerned with the flat spacetime \(\mathbb{R}^{n-\nu,\nu}\). We first fix the canonical \((n-\nu,\nu)\)-spacetime structure on \(\mathbb{R}^{n}\) (and \(\mathbb{C}^{n}\)) by using the standard orthonormal global coordinate frame \((E_{i})_{i}\). **Example 5.22** (\(\mathbb{R}^{n-\nu,\nu}\)).: We equip the manifold \(\mathbb{R}^{n}\) with the standard scalar product of index \(\nu\): With respect to the standard global coordinate frame \((E_{1},\dots,E_{n})\), and \(v=v^{i}E_{i}\), \(w=w^{i}E_{i}\), we have a semi-Riemannian metric given by \[\langle v,w\rangle=-\sum_{i=1}^{\nu}v^{i}w^{i}+\sum_{j=\nu+1}^{n}v^{j}w^{j}.\] The vector fields \(E_{1},\dots,E_{\nu}\) are negative unital and orthonormal, and thus define a time frame orientation on \(\mathbb{R}^{n-\nu,\nu}\). Note, however, that for \(\nu\geq 2\) these vector fields are _not_ future directed timelike (only causal). **Example 5.23** (\(\mathbb{C}^{n}\) as the real \((n,n)\)-spacetime \(\mathbb{R}^{n,n}\)).: The complex vector space \(\mathbb{C}^{n}\cong\mathbb{R}^{n}\oplus i\mathbb{R}^{n}\) with scalar product \[\langle w,z\rangle=-\sum_{i=1}^{n}\Im(w^{i})\Im(z^{i})+\sum_{i=1}^{n}\Re(w^{ i})\Re(z^{i})\] is isomorphic to the \((n,n)\)-spacetime \(\mathbb{R}^{n,n}\) and the spacetime structure that respects the complex nature of \(\mathbb{C}^{n}\). We discuss and derive several properties of \(\mathbb{R}^{n-\nu,\nu}\) by hand. All cases \(0<\nu\leq n\) are covered. If \(\nu=n\) the empty sum convention \(\sum_{j=n+1}^{n}=0\) is used. \(I^{+}\) _and \(J^{+}\) relations._ Two points \(p,q\in\mathbb{R}^{n-\nu,\nu}\) are future/past directed timelike/causally/lightlike related if and only if the straight line connecting them has the same causal character, i.e., if the corresponding vector \(q-p\in\mathbb{R}^{n-\nu,\nu}\cong T_{0}\mathbb{R}^{n-\nu,\nu}\) is future/past directed timelike/causal/lightlike. For \(p=\sum_{i=1}^{n}p^{i}E_{i}\) and \(q=\sum_{i=1}^{n}q^{i}E_{i}\) thus \[p\leq q\Longleftrightarrow q^{i}\geq p^{i}\text{ for all }i=1,\dots,\nu, \text{ and }\] \[\sum_{i=1}^{\nu}(q^{i}-p^{i})^{2}\geq\sum_{j=\nu+1}^{n}(q^{j}-p^{ j})^{2}.\] If \(\nu<n\) and \(p\neq q\) in the last line then automatically at least one \(q^{i}>p^{i}\) (see also Lemma 5.18). If \(\nu=n\) this inequality follows already from the first line. Analogous characterizations hold for timelike and lightlike with all \(>\) and at least one = compared to the above, respectively (recall that lightlike means \(q-p\in\partial C\)). 
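The characterization of \(\leq\) just derived is straightforward to test numerically; the following small sketch (ours, in Python, with an arbitrarily chosen example in \(\mathbb{R}^{2,1}\)) decides whether two points are causally related.

```python
import numpy as np

def causally_leq(p, q, nu):
    # p <= q in R^{n-nu,nu}: the first nu components of q - p are nonnegative and
    # their squared sum dominates the squared sum of the remaining components.
    d = np.asarray(q, dtype=float) - np.asarray(p, dtype=float)
    return bool(np.all(d[:nu] >= 0) and np.sum(d[:nu] ** 2) >= np.sum(d[nu:] ** 2))

# R^{2,1}, i.e. n = 3 and nu = 1:
print(causally_leq([0, 0, 0], [2, 1, 1], nu=1))  # True: (2,1,1) is future directed causal
print(causally_leq([0, 0, 0], [1, 2, 0], nu=1))  # False: the causality inequality fails
```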
#### Canonical time function The function \[T(p):=\sum_{i=1}^{\nu}p^{i}\] is a smooth time function because by the above \[p<q\Longrightarrow q^{i}\geq p^{i}\text{ for all }i=1,\dots,\nu,\text{ and }\] \[q^{k}>p^{k}\text{ for at least one }k=1,\dots,\nu,\] \[\Longrightarrow T(q)>T(p).\] It is easy to see that \(T\) is, in fact, completely uniform Cauchy temporal: Since \(dT=\sum_{i=1}^{\nu}dp^{i}\), for any future directed causal \(v\in T_{p}\mathbb{R}^{n-\nu,\nu}\) we have \(v^{i}\geq 0\), \(i=1,\dots,\nu\), and \[dT(v)=\sum_{i=1}^{\nu}v^{i}\geq\sqrt{\sum_{i=1}^{\nu}(v^{i})^{2}}\geq\sqrt{\sum_{i=\nu+1}^{n}(v^{i})^{2}},\] which implies that \(dT(v)\) is bounded from below by half the Euclidean norm of \(v\) since \[2dT(v)\geq\sqrt{\sum_{i=1}^{\nu}(v^{i})^{2}}+\sqrt{\sum_{i=\nu+1}^{n}(v^{i})^{2}}\geq\sqrt{\sum_{i=1}^{n}(v^{i})^{2}}=\|v\|.\] If \(\nu=1\) then \(T\) is the canonical time function used for Minkowski space. #### Global hyperbolicity Since \(T\) is a Cauchy time function it follows from Theorem 4.10 (ii)\(\Longrightarrow\)(i) that \(\mathbb{R}^{n-\nu,\nu}\) is globally hyperbolic. In connection with Theorem 1.3 that we aim to (partly) generalize in Section 5.4 it is insightful to prove global hyperbolicity also with "bare hands" using the Heine-Borel property of Euclidean space \(\mathbb{R}^{n}\). For any \(p\in\mathbb{R}^{n-\nu,\nu}\) \[J^{+}(p)=\left(\bigcap_{j=1}^{\nu}\{x^{j}\geq p^{j}\}\right)\cap\left\{\sum_{i=1}^{\nu}(x^{i}-p^{i})^{2}\geq\sum_{i=\nu+1}^{n}(x^{i}-p^{i})^{2}\right\},\] and thus the causal future and past sets \(J^{\pm}(p)\) are closed as an intersection of closed sets. Hence for any \(p\) and \(q\) the causal diamond \(J^{+}(p)\cap J^{-}(q)\) is closed. In what follows we show that \(J^{+}(p)\cap J^{-}(q)\) is a bounded subset of the Euclidean space \(\mathbb{R}^{n}\). Since \[J^{+}(p)\cap J^{-}(q)\subseteq\bigcap_{j=1}^{\nu}\{q^{j}\geq x^{j}\geq p^{j}\}\] the set is bounded in the first \(\nu\) coordinate directions by future and past directedness. If \(\nu=n\) we have shown boundedness of \(J^{+}(p)\cap J^{-}(q)\). If \(\nu<n\) boundedness in the remaining \(n-\nu\) directions follows from causality. For \(x\in J^{+}(p)\cap J^{-}(q)\) and \(i=1,\ldots,\nu\) \[(x^{i}-q^{i})^{2}+(x^{i}-p^{i})^{2}\leq 2(q^{i}-p^{i})^{2}\leq 2\max_{1\leq j\leq\nu}(q^{j}-p^{j})^{2}=:\frac{R}{\nu}\] and thus \[\sum_{i=\nu+1}^{n}(x^{i}-q^{i})^{2}+\sum_{i=\nu+1}^{n}(x^{i}-p^{i})^{2}\leq R,\] meaning that \(x\), and thus \(J^{+}(p)\cap J^{-}(q)\), is also bounded in the remaining \(n-\nu\) coordinate directions. Finally, we apply the Heine-Borel property of Euclidean space and conclude that \(J^{+}(p)\cap J^{-}(q)\) is compact. #### Null distance By Theorem 5.19 the null distance \(\hat{d}_{T}\) is a conformally invariant metric that induces the topology of \(\mathbb{R}^{n-\nu,\nu}\). Sormani and Vega [43, Proposition 3.4] have shown that Minkowski space can easily be equipped with a non-locally anti-Lipschitz time function so that the corresponding null distance is only a pseudometric. We show that \(\tau=T^{2k+1}\), \(k\in\mathbb{N}\), are analogous examples for \(\mathbb{R}^{n-\nu,\nu}\) as long as \(\nu<n\). Let \(p=(0,\ldots,0)\) and \(q=(0,\ldots,0,1)\). By the above \(p\nleq q\). The paths \(\gamma^{\pm}\colon[0,1]\to M\), \[\gamma^{\pm}(t)=(\pm t,0,\ldots,t)\] are causal, with \(\gamma^{+}\) being future directed and \(\gamma^{-}\) being past directed. 
By rescaling and concatenating such future and past directed causal paths we obtain piecewise causal paths \(\beta_{j}\colon[0,1]\to[0,\frac{1}{2j}]\times\{0\}\times\ldots\times[0,1]\) from \(p\) to \(q\) with \(2j\) pieces of \(\tau\)-height \((\frac{1}{2j})^{2k+1}\), i.e., \[\hat{L}_{\tau}(\beta_{j})=\frac{2j}{(2j)^{2k+1}}=\frac{1}{4^{k}j^{2k}}.\] Thus \[\hat{d}_{\tau}(p,q)\leq\lim_{j\to\infty}\hat{L}_{\tau}(\beta_{j})=0,\] which means that \(\hat{d}_{\tau}\) is not distinguishing with respect to any \(\tau=T^{2k+1}\). This example furthermore shows that the case \(\nu=n\) is special, which foreshadows some results of the following section. ### Global hyperbolicity iterated In Theorem 1.3 (i) we have related the Heine-Borel property of a Riemannian manifold \((\Sigma,\sigma)\)_directly_ to the compactness of causal diamonds of the corresponding Lorentzian product spacetime \((\mathbb{R}\times\Sigma,-dt^{2}\oplus\sigma)\). Can we prove a similar connection for the related notions of global hyperbolicity for different \(\nu\)? Since we have already treated the \(\nu=0\) case we can restrict to \(0<\nu\leq n\). We first attempt a naive approach. By that we mean that to go from a spacetime structure on \((M,g)\) given by vector fields \(X_{1},\ldots,X_{\nu}\) we define one on the product \[(M^{\prime},g^{\prime})=(\mathbb{R}\times M,-dt^{2}\oplus g) \tag{5.6}\] by using the additional vector field \(X_{\nu+1}=\partial_{t}\). We call this \((n-\nu,\nu+1)\)-spacetime structure on \((M^{\prime},g^{\prime})\) the _orthogonally extended_\((n-\nu,\nu+1)\)_-spacetime_. We also consider the product \[(M^{\prime\prime},g^{\prime\prime})=(\mathbb{R}\times M,dt^{2}\oplus g) \tag{5.7}\] equipped with the same spacetime structure but viewed as a \((n+1-\nu,\nu)\)-spacetime. We show that the answer to the above question is no, i.e., that the notions of global hyperbolicity for different \(\nu\) are in general unrelated. For instance, we demonstrate that global hyperbolicity of \((M,g)\) does not imply global hyperbolicity of \((M^{\prime},g^{\prime})\). On the other hand, we succeed in showing that the top-down implication for \((M^{\prime},g^{\prime})\) to \((M,g)\) still holds. In this section we prove Theorem 1.8 and provide a counterexample for the missing bottom-up implication. The reason why going from \((M,g)\) to \((M^{\prime},g^{\prime})\) does not work is, in short, because the boundary \(\partial C^{\prime}\) of the corresponding cone structure includes all of \(C\) and is "too wide". In other words, \(M^{\prime}\) leaves more room than in \(M\) because \(g^{\prime}(v,v)\leq 0\) does not imply \(g(v,v)\leq 0\) (this was not an issue for positive definite \(g\)). The metric perspective and the fact that we crucially exploited the direct (and not just conformal) relation between \(d_{\sigma}\) and \(\hat{d}_{t}\) in Section 3 also offers an explanation for why our proof from Theorem 1.3 cannot be extended. Before jumping right into the analysis of global hyperbolicity in this framework it is insightful and necessary to consider (stable) causality first. Indeed, we see that problems already occur at this level. **Lemma 5.24**.: _Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime with \(0\leq\nu\leq n\). Then \((M^{\prime},g^{\prime})\) and \((M^{\prime\prime},g^{\prime\prime})\) as defined in (5.6) and (5.7) are \((n-\nu,\nu+1)\)- and \((n+1-\nu,\nu)\)-spacetimes, respectively. 
The corresponding cone structures satisfy_ \[C^{\prime\prime}\preccurlyeq\mathbb{R}\times C\quad\text{and}\quad\{0\}\times C\preccurlyeq C^{\prime}\cap C^{\prime\prime}. \tag{5.8}\] _If \(\nu=n\) then furthermore_ \[C^{\prime}=([0,\infty)\times C)\cup((0,\infty)\times TM_{0}). \tag{5.9}\] Proof.: Since all cone structures are continuous by Theorem 5.16 it is sufficient to prove the set-theoretic inclusions pointwise. We first prove the inclusions of (5.8). Suppose the time frame orientation of \(M\) is given by the vector fields \(X_{1},\ldots,X_{\nu}\), and \(X_{\nu+1}=\partial_{t}\). We denote the pulled back version onto \(\mathbb{R}\times M\) along \(\pi_{M}\) in the same way. If \(v=(v_{0},v_{M})\in C^{\prime\prime}\subseteq TM^{\prime\prime}=\mathbb{R}\times TM\) then \[g(v_{M},v_{M}) =g^{\prime\prime}(v,v)-dt^{2}(v_{0},v_{0})\leq 0,\] \[g(v_{M},X_{i}) =g^{\prime\prime}(v,X_{i})-\underbrace{g^{\prime\prime}(v_{0},X_{i})}_{=0}\leq 0,\] with a strict inequality for at least one \(i=1,\ldots,\nu\) in the last line. Hence \(v_{M}\in C\subseteq TM\setminus(TM)_{0}\) irrespective of the value of \(v_{0}\). In other words, \(C^{\prime\prime}\preccurlyeq\mathbb{R}\times C\). If, on the other hand, \(v_{M}\in C\subseteq TM\setminus(TM)_{0}\) then \(v=(0,v_{M})\neq 0\) satisfies due to the orthogonal product structure \[g^{\prime}(v,v)\leq g^{\prime\prime}(v,v) =g(v_{M},v_{M})\leq 0,\] \[g^{\prime}(v,X_{i})=g^{\prime\prime}(v,X_{i}) =g(v_{M},X_{i})\leq 0,\qquad i=1,\ldots,\nu.\] Moreover, \(g^{\prime}(v,\partial_{t})=0\). Thus \(v\in C^{\prime}\cap C^{\prime\prime}\). It remains to prove (5.9) for \(\nu=n\). Consider once more \(v=(v_{0},v_{M})\in C^{\prime}\). Due to the assumption \(g^{\prime}(v,\partial_{t})\leq 0\) we must have \(v_{0}\geq 0\). In addition, for all \(i=1,\ldots,n\), we have \[g(v_{M},X_{i})=g^{\prime}(v,X_{i})\leq 0. \tag{5.10}\] Since \(g\) is negative definite by assumption, we have for all \(v_{M}\) \[g(v_{M},v_{M})\leq 0, \tag{5.11}\] with equality if and only if \(v_{M}=0\). If \(v_{M}\neq 0\) then (5.10)-(5.11) show that \(v_{M}\in C\). If, on the other hand, \(v_{M}=0\) then by Lemma 5.13 we must have \(g^{\prime}(v,\partial_{t})<0\), i.e., \(v_{0}>0\). We have thus shown the inclusion \(\preccurlyeq\) in (5.9). The other inclusion \(\succcurlyeq\) follows in the same way from the above inequalities and cases. 
Thus causality of \(M^{\prime}\) (or \(M^{\prime\prime}\)) enforces causality of \(M\). \((M\Longrightarrow M^{\prime\prime})\) If \(\gamma\) is a closed \(g^{\prime\prime}\)-causal curve in \(M^{\prime\prime}\), then by (5.8) the projection \(\gamma_{M}=\pi_{M}\circ\gamma\) is a closed causal curve in \(M\), a contradiction. \((M^{\prime}\Longleftarrow M)\) Suppose \(\gamma\) is a closed \(g^{\prime}\)-causal curve in \(M^{\prime}\). Without loss generality we assume that it is future directed, thus either (i) \(\dot{\gamma}_{0}=0\) almost everywhere or (ii) \(\dot{\gamma}_{0}\neq 0\) on a set of nonzero Lebesgue measure. In the case of (i), since \(\gamma_{0}\) is absolutely continuous and the fundamental theorem of calculus applies, \(\gamma_{0}\) must be constant and thus the projection \(\gamma_{M}\) is a closed \(g\)-causal curve in \(M\), a contradiction. In case of (ii) there must be a set of nonzero Lebesgue measure where \(\dot{\gamma}_{0}>0\) and thus, due to closedness of the curve in the \(t\) variable and the fundamental theorem of calculus, there must also be a set of nonzero Lebesgue measure where \(\dot{\gamma}_{0}<0\), a contradiction to \(\gamma\) being future directed causal. Thus \(\gamma\) does not exist. Note that Proposition 5.26 is already weaker than in the Riemannian vs. Lorentzian case (\(\nu=0\)) where \(M^{\prime}\) is (even stably!) causal completely independent of \(M\). From the proof it is also clear which implication is the most fragile one. Indeed, we lose it when stepping up on the causal ladder. **Proposition 5.27**.: _Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime with \(0\leq\nu\leq n\), and let \((M^{\prime},g^{\prime})\) and \((M^{\prime\prime},g^{\prime\prime})\) be as defined in (5.6) and (5.7), respectively. Then_ \[(M^{\prime},g^{\prime})\text{ is stably causal}\] \[\Longrightarrow (M,g)\text{ is stably causal}\] \[\Longleftrightarrow (M^{\prime\prime},g^{\prime\prime})\text{ is stably causal}.\] _To be precise, for each valid implication, the same (up to natural projection and embedding) time functions can be used._ _Moreover, if \(\nu=n\) and \(\tau_{M}\) is a time function for \(M\) then the function \(\tau=\tau_{M}\circ\pi_{M}+t\) is a time function for \(M^{\prime}\), which also shows that_ \[(M^{\prime},g^{\prime})\text{ is stably causal}\Longleftrightarrow(M,g)\text{ is stably causal}.\] Proof.: \((M^{\prime}\Longrightarrow M\Longleftarrow M^{\prime\prime})\) Since the cone structures satisfies \(\{0\}\times C\eqslantin C^{\prime}\cap C^{\prime\prime}\) by (5.8) the causal curves in \(M\) are fully captured by those in \(C^{\prime}\) (and in \(C^{\prime\prime}\)). Thus a time function \(\tau\) of \(M^{\prime}\) (or of \(M^{\prime\prime}\)) induces a time function \(\tau_{M}(p_{M}):=\tau(0,p_{M})\) on \(M\). \((M\Longrightarrow M^{\prime\prime})\) Suppose \(\tau_{M}\) is a time function of \(M\). Since by (5.8) \(\pi_{M}(C^{\prime\prime})\eqslantin C\) we can simply pullback \(\tau_{M}\) to \(M^{\prime\prime}\) to obtain a time function \(\tau=\tau_{M}\circ\pi_{M}\). \((M\Longrightarrow M^{\prime}\) if \(\nu=n)\) Suppose \(\tau_{M}\) is a time function for \(M\) and \(t\) the coordinate of the additional dimension. It follows immediately from (5.9) that \(\tau_{M}\circ\pi_{M}+t\) is a time function for \(M^{\prime}\). 
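As a simple illustration of both constructions, consider the flat model \(M=\mathbb{R}^{n-\nu,\nu}\) with the canonical time function \(T(p)=\sum_{i=1}^{\nu}p^{i}\) of Section 5.3. Then \(M^{\prime\prime}\cong\mathbb{R}^{n+1-\nu,\nu}\) (after reordering coordinates) and the pullback \(T\circ\pi_{M}\) is precisely its canonical time function, while for \(\nu=n\) we have \(M^{\prime}\cong\mathbb{R}^{0,n+1}\) and \[\tau=T\circ\pi_{M}+t=t+\sum_{i=1}^{n}p^{i}\] is again the canonical time function, now of \(\mathbb{R}^{0,n+1}\).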
Recall that the reason why the second implication \((M\Longrightarrow M^{\prime\prime})\) in Proposition 5.27 holds is that compared to \(M\) the projections of the cones in \(M^{\prime\prime}\) to \(M\) are narrower (due to more positive directions) or remain the same (if the new variable remains fixed). On the other hand, the projections of the cones in \(M^{\prime}\) become strictly wider than the cones on \(M\) (due to more negative directions) even if the new variable is fixed (unless \(\nu=n\), in which case they remain the same). In other words, if \(\gamma\) is a future directed \(g^{\prime}\)-causal curve then \(\pi_{M}\circ\gamma\) need not be future directed \(g\)-causal. This also explains why the implication \((M\Longrightarrow M^{\prime})\) only works for \(\nu=n\). Moreover, unlike from the Riemannian to the Lorentzian case the coordinate \(t\) is not a time function for \(M^{\prime}\) if \(X_{\nu+1}=\partial_{t}\). We show that if one mildly perturbs the vector field \(\partial_{t}\) one can nonetheless ensure that \(t\) is a time function and thus safe stable causality for \(M^{\prime}\) while leaving the first \(\nu\) time frame orienting vector fields intact. As in Section 2.3 (stable) causality of \(M\) is then also not required. **Proposition 5.28**.: _Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime with \(0<\nu\leq n\) and time-orientation defining vector fields \(X_{1},\ldots,X_{\nu}\). Then for every \(\varepsilon>0\) the semi-Riemannian manifold \((M^{\prime},g^{\prime})=(\mathbb{R}\times M,-dt^{2}\oplus g)\) with vector fields \(X_{1},\ldots,X_{\nu}\) and \(X_{\nu+1}=\partial_{t}-\varepsilon(X_{1}+\ldots+X_{\nu})\) is a \((n-\nu,\nu+1)\)-spacetime with temporal function \(t\). In particular, \((M^{\prime},g^{\prime})\) is stably causal._ Proof.: Since the vector fields \(X_{1},\ldots,X_{\nu},X_{\nu+1}\) are continuous, linearly independent and satisfy \[g^{\prime}(X_{i},X_{i}) =g(X_{i},X_{i})<0,\qquad i=1,\ldots,\nu,\] \[g^{\prime}(X_{\nu+1},X_{\nu+1}) =-1+\varepsilon^{2}\underbrace{g(X_{1}+\ldots+X_{\nu},X_{1}+ \ldots+X_{\nu})}_{<0}<0,\] they define a time orientation for \((M^{\prime},g^{\prime})\) that extends the time orientation of the submanifolds \(\{p_{0}\}\times M\) (but not orthogonally!). We show that \(t\) is a temporal function. To this end write any \(v\in C^{\prime}\subseteq T_{p}M^{\prime}\) as direct sum \(v=v_{0}+v_{M}\in\operatorname{span}(\partial_{t}(p))\oplus\operatorname{span} (\partial_{t}(p))^{\perp}\). Future directedness in the first \(\nu\) directions and orthogonality \(g^{\prime}=-dt^{2}\oplus g\) implies that \[g(v_{M},X_{i}(p_{M}))=g^{\prime}(v,X_{i}(p))\leq 0,\qquad i=1,\ldots,\nu. \tag{5.12}\] In particular, \(g(v_{M},X_{1}+\ldots+X_{\nu})\leq 0\). Future directedness of \(v\) in the \((\nu+1)\)-th direction means that \[0\geq g^{\prime}(v,X_{\nu+1}) =g^{\prime}(v,\partial_{t})-\varepsilon g^{\prime}(v,X_{1}+ \ldots+X_{\nu}) \tag{5.13}\] \[=-dt(v)-\varepsilon g(v_{M},X_{1}+\ldots+X_{\nu}).\] Note that, in addition, by Lemma 5.13 there is at least one \(j\in\{1,\ldots,\nu+1\}\) such that \[g^{\prime}(v,X_{j}(p))<0.\] If \(j=\nu+1\) then (5.13) is a strict inequality and hence \[dt(v)>-\varepsilon g(v_{M},X_{1}+\ldots+X_{\nu})\geq 0.\] If (5.13) is an equality then there must be \(j\in\{1,\ldots,\nu\}\) for which (5.12) is a strict inequality. Hence \[dt(v)\geq-\varepsilon g(v_{M},X_{1}+\ldots+X_{\nu})\geq-\varepsilon g(v_{M},X _{j})>0.\] Either way, for all future directed causal vectors \(v\in T_{p}M\) we have \(dt(v)>0\). 
Hence \(t\) is a temporal function of \((M^{\prime},C^{\prime})\). Having established the equivalence of causality in Proposition 5.26 and having shown how time functions carry over from one spacetime to another in Propositions 5.27 and 5.28, we finally turn to Cauchy time functions and global hyperbolicity and prove the remaining implications of Theorem 1.8. We start with the easier situation \(M^{\prime\prime}\). Adding positive definite directions, meaning \(+dt^{2}\), is not a problem as no additional time orientation defining vector fields are needed and projections and pullbacks behave as desired. **Proposition 5.29**.: _Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime with time orientation defining vector fields \(X_{1},\ldots,X_{\nu}\), and \((M^{\prime\prime},g^{\prime\prime})=(\mathbb{R}\times M,dt^{2}\oplus g)\) the \((n-\nu+1,\nu)\)-spacetime via the same vector fields. Then_ \[(M,g)\text{ globally hyperbolic}\iff(M^{\prime\prime},g^{\prime\prime})\text{ globally hyperbolic}.\] Proof.: \((\Longrightarrow)\) If \((M,g)\) is globally hyperbolic then, by Theorem 4.10, there exists a Cauchy time function \(\tau_{M}\colon M\to\mathbb{R}\). In the proof of Proposition 5.27 we have seen that \(\tau=\tau_{M}\circ\pi_{M}\) is a time function for \(M^{\prime\prime}\). Suppose \(\tau\) is not Cauchy. Then there is an inextendible causal curve \(\alpha\) such that \(\sup(\tau\circ\alpha)<\infty\). Since \(\mathbb{R}\) is complete we can always extend the first component \(\alpha_{0}\). Moreover, since \(\alpha_{M}=\pi_{M}\circ\alpha\) is a causal curve in \(M\) and \((M,g)\) is globally hyperbolic we have that \(\alpha_{M}\) must be extendible in \(M\), and hence \(\alpha\) must be extendible in \(M^{\prime\prime}\), a contradiction. Hence \(\tau\) is Cauchy in \((M^{\prime\prime},g^{\prime\prime})\) which is therefore globally hyperbolic by Theorem 4.10. (\(\Longleftarrow\)) Suppose \((M^{\prime\prime},g^{\prime\prime})\) is globally hyperbolic and \(\tau\) is a Cauchy time function for \(M^{\prime\prime}\). By Proposition 5.27 the pulled back \(\tau_{M}(p_{M})=\tau(0,p_{M})\) is a time function for \(M\). If \(\tau_{M}\) is not Cauchy then there is an inextendible future directed causal curve \(\alpha_{M}\colon\mathbb{R}\to M\) such that \(\sup(\tau_{M}\circ\alpha_{M})<\infty\). For the pulled back curve \(\alpha=(0,\alpha_{M})\) we would then have \(\sup(\tau\circ\alpha)<\infty\), meaning that it is extendible in \(M^{\prime\prime}=\mathbb{R}\times M\) because \(\tau\) is Cauchy. Since the first component of \(\alpha\) is constant this implies that \(\alpha_{M}\) is extendible in \(M\). Hence \(\tau_{M}\) is a Cauchy time function. **Proposition 5.30**.: _Let \((M,g)\) be a \((n-\nu,\nu)\)-spacetime with \(0<\nu\leq n\) and corresponding proper cone structure \((M,C)\). Then for the orthogonally extended \((n-\nu,\nu+1)\)-spacetime \((M^{\prime},g^{\prime})=(\mathbb{R}\times M,-dt^{2}\oplus g)\) we have that_ \[(M^{\prime},g^{\prime})\text{ globally hyperbolic }\Longrightarrow(M,g)\text{ globally hyperbolic.}\] _If \(\nu=n\) then also the reverse implication (\(\Longleftarrow\)) holds._ Proof.: By Lemma 5.24, \((M^{\prime},g^{\prime})\) is a spacetime with \(\{0\}\times C\preccurlyeq C^{\prime}\). That global hyperbolicity descends from \(M^{\prime}\) to \(M\) thus follows as in the proof of Proposition 5.29 for \(M^{\prime\prime}\). It remains to prove the reverse implication if \(\nu=n\). 
If \(\tau_{M}\) is a Cauchy time function in \(M\) then we already know from Proposition 5.27 that \(\tau=\tau_{M}\circ\pi_{M}+t\) is a time function for \(M^{\prime}\). Suppose \(\tau\) is not Cauchy in \(M^{\prime}\). Then there is a future directed future inextendible \(g^{\prime}\)-causal curve \(\alpha\) such that \(\sup(\tau\circ\alpha)<\infty\). This implies that both \(\tau_{M}\circ\alpha_{M}\) and \(t\circ\alpha\) are bounded from above, but since \(M\) is globally hyperbolic (and \(\dot{\alpha}_{M}\) is \(g\)-causal or \(=0\) by (5.9)) and \(\mathbb{R}\) is complete, we can extend both \(\alpha_{M}\) and \(\alpha_{0}\) to the future, and hence \(\alpha\) as well, a contradiction. We conclude with an example that demonstrates that, in general, the implication of Proposition 5.30 cannot be reversed if \(\nu<n\). This should come as no surprise because in Lemma 5.24 we have observed that the cones in \(M^{\prime}\) include the base \(M\) if we choose \(X_{\nu+1}=\partial_{t}\). However, we argue that even when turning them into true cones, for instance, by using \(X_{\nu+1}=\partial_{t}-\varepsilon(X_{1}+\ldots+X_{\nu})\) for \(\varepsilon>0\) as in Proposition 5.28 for enforcing stable causality on \(M^{\prime}\), global hyperbolicity of \(M\) does not, in general, imply global hyperbolicity of \(M^{\prime}\). This is in stark contrast to the Riemannian and Lorentzian equivalence obtained in Theorem 1.3. **Example 5.31** (Global hyperbolicity of \(M^{\prime}\) not inherited from \(M\) if \(\nu<n\)).: Let \(M=\mathbb{R}^{1,1}\backslash J^{+}(0)\subseteq\mathbb{R}^{1,1}\) be the a submanifold of \(2\)-dimensional Minkowski space with metric tensor \(\eta=-dx^{2}+dy^{2}\) (but same holds if \(\dim M>2\)). It is a spacetime with time orientation defining vector field \(X_{1}=\partial_{x}\). Since \(M\) is causal and all causal diamonds are compact it is easy to see that \(M\) is globally hyperbolic. In particular, for the points \(p_{M}=(-1,1)\) and \(q_{M}=(1,-2)\) we have \(J^{+}(p_{M})\cap J^{-}(q_{M})=\emptyset\). First, let us consider \((M^{\prime},g^{\prime})=(\mathbb{R}\times M,-dt^{2}\oplus\eta)\) with additional vector field \(X_{2}=\partial_{t}\). Consider now \(p=(-2,p_{M})\), \(q=(2,q_{M})\). Then \(J^{-}(q)\neq\emptyset\) but the causal diamond not closed: Consider the sequence of points \(x_{j}=(0,-\frac{1}{j},-\frac{1}{j})\). Then the curves \(\alpha_{j}\colon[0,1]\to M^{\prime}\), \[\alpha_{j}(s)=\left(2s-2,-1+s-\frac{s}{j},1-s-\frac{s}{j}\right),\] are from \(p\) to \(x_{j}\). One can show that \(\dot{\alpha}_{j}=(2,1-\frac{1}{j},-1-\frac{1}{j})\) and thus for \(j>1\) \[g^{\prime}(\dot{\alpha}_{j},\dot{\alpha}_{j})=-4+\frac{4}{j}<0,\] \[g^{\prime}(\dot{\alpha}_{j},X_{1})=-1+\frac{1}{j}<0,\qquad g^{ \prime}(\dot{\alpha}_{j},X_{2})=-2<0,\] i.e., the curves are future directed causal and \(x_{j}\in J^{+}(p)\). Moreover, the curves \(\beta_{j}\colon[0,1]\to M^{\prime}\), \[\beta_{j}(s)=\left(2-2s,1-s-\frac{s}{j},-2+2s-\frac{2s}{j}\right)\] are past directed causal in \(M^{\prime}\) from \(q\) to \(x_{j}\) for sufficiently large \(j\), because \(\dot{\beta}_{j}=(-2,-1-\frac{1}{j},2-\frac{2}{j})\) and thus \[g^{\prime}(\dot{\beta}_{j},\dot{\beta}_{j})=-2^{2}-\left(1+\frac {1}{j}\right)^{2}+2^{2}\left(-1-\frac{1}{j}\right)^{2}=-1+\frac{6}{j}+\frac{3} {j^{2}}<0,\] \[g^{\prime}(\dot{\beta}_{j},X_{1})=1+\frac{1}{j}>0,\qquad g^{ \prime}(\dot{\beta}_{j},X_{2})=2>0.\] Thus \(x_{j}\in J^{-}(q)\). 
The limiting point \(x=\lim_{j\to\infty}x_{j}=(0,0,0)\), however, is not in \(M^{\prime}\). Thus \(J^{+}(p)\cap J^{-}(q)\) is not closed and hence also not compact. Therefore, by definition, \((M^{\prime},g^{\prime})\) is not a globally hyperbolic spacetime. One could have the hope that by choosing \(\varepsilon\) appropriately one could still carry over global hyperbolicity from \(M\) to \(M^{\prime}\). This is generally not the case. One can still find examples of the type studied above as \(\partial_{t}\) always remains to be a future directed causal vector field for any choice of \(\varepsilon\) in the assumptions of Proposition 5.28 (even if it is negative) as \(g^{\prime}=-dt^{2}\oplus g\) consists of orthogonal components and one can go infinitely fast in the \(t\)-direction to compensate for any \(\varepsilon\). Note that such examples can only be constructed for \(0<\nu<n\) and the problem may very well be that the Wick-rotated metric \(dx^{2}+dy^{2}\) is _not_ a complete Riemannian metric on \(M\). All in all, we have shown Theorem 1.8. The implications are established in Propositions 5.29 and 5.30, and Example 5.31 shows that the missing implication is indeed false.
2309.12802
Deepfake audio as a data augmentation technique for training automatic speech to text transcription models
To train transcriptor models that produce robust results, a large and diverse labeled dataset is required. Finding such data with the necessary characteristics is a challenging task, especially for languages less popular than English. Moreover, producing such data requires significant effort and often money. Therefore, a strategy to mitigate this problem is the use of data augmentation techniques. In this work, we propose a framework that approaches data augmentation based on deepfake audio. To validate the produced framework, experiments were conducted using existing deepfake and transcription models. A voice cloner and a dataset produced by Indians (in English) were selected, ensuring the presence of a single accent in the dataset. Subsequently, the augmented data was used to train speech to text models in various scenarios.
Alexandre R. Ferreira, Cláudio E. C. Campelo
2023-09-22T11:33:03Z
http://arxiv.org/abs/2309.12802v1
Deepfake audio as a data augmentation technique for training automatic speech to text transcription models ###### Abstract To train transcription models that produce robust results, a large and diverse labeled dataset is required. Finding such data with the necessary characteristics is a challenging task, especially for languages less popular than English. Moreover, producing such data requires significant effort and often money. Therefore, a strategy to mitigate this problem is the use of data augmentation techniques. In this work, we propose a framework that approaches data augmentation based on deepfake audio. To validate the produced framework, experiments were conducted using existing deepfake and transcription models. A voice cloner and a dataset produced by Indians (in English) were selected, ensuring the presence of a single accent in the dataset. Subsequently, the augmented data was used to train speech to text models in various scenarios. data augmentation, deepfake audio, voice cloning, transcription models ## I Introduction Artificial intelligence has experienced significant growth in recent years due to increased computational power and the expansion of variety and volume of data exchanged over the internet. The pursuit of machine learning-generated models has expanded through various applications worldwide, such as speech-to-text transcription models. These models are utilized, for example, in translators, virtual assistants, voice search, and audio sentiment analysis [1]. For training such transcription models, labeled data is necessary, which consists of audio samples and their respective transcriptions. These transcriptions should be performed by humans to avoid biasing the results with transcriptions generated by another model. Robust transcription models should be able to generate consistent outcomes regardless of variations in a particular language (e.g., accents). However, producing such a robust model requires additional training, along with the utilization of more diversified and abundant data. Acquiring datasets with these characteristics is a challenging task, especially for languages less popular than English. On the other hand, producing a large dataset with these characteristics is costly and time-consuming, requiring significant financial resources and the necessary infrastructure for production. Multiple qualified individuals must manually produce the transcriptions to ensure good quality. Furthermore, to ensure transcription quality, each audio should have its transcription generated by more than one person, enabling the selection of the transcription that best represents the audio. One option to mitigate this problem and reduce time and cost is to use data augmentation techniques. There are various data augmentation techniques available, although most of them only allow the generation of new data with similar characteristics, for example, by adding background noise or modifying the speaker's voice pitch in the audio. These techniques are effective for producing improved transcription models that meet certain requirements. For example, they can produce models which present consistent results regardless of background noise or voice tone present in the input audio. However, these data augmentation techniques do not help produce models that maintain the quality of their transcription when other characteristics vary in the input audio, such as the speaker's accent. To achieve this, the model needs to be trained with data that includes a great variety of accents among speakers. 
The data augmentation technique proposed in this paper is based on deepfake audio. Deepfake audio is an area of artificial intelligence that aims to produce audios that simulate the voices of specific individuals, making them sound as if they themselves had produced the audio. There are various types of models designed to achieve this objective. In this paper, a model that allows voice cloning from a few seconds of audio from the original speaker is used. As a result, the data augmentation technique benefits by generating audios from the same speaker with different speech contents while preserving the voice characteristics present in the audio, such as accent. The objective of this work is to investigate the use of this technique in datasets used for training automatic speech-to-text transcription models, evaluating the impact it has on their effectiveness. For this purpose, a framework has been implemented to investigate this technique. The framework requires a voice cloning model and a small dataset which will be submitted to the data augmentation process. To validate the produced framework, various scenarios are investigated and a small dataset is used. Next, a transcription model is trained using the produced augmented dataset, which involves fine-tuning a pre-trained model. Finally, a slice of the original data is separated to evaluate the transcription model before and after training, comparing whether the training process helped the model produce better transcriptions. The main contributions of this paper are: * Provide and implement a framework able to use deepfake audio as a data augmentation technique. The implementation is ready for use and available in the repository1 of this paper. So, it is possible to execute the produced framework using any voice cloning model by replacing the component responsible for generating new audios. Footnote 1: [https://github.com/alexandrerf3/data-augmentation-deepfake-audio](https://github.com/alexandrerf3/data-augmentation-deepfake-audio) * Evaluation of a completely different scenario for data augmentation: using deepfake audio. Regarding this, no previous work was found in the literature. Two experiments were conducted to validate the developed framework. In the first one, the framework is executed using the voice cloner with the pre-trained models provided by the author. As a result, the generated audios are used to train the transcription model in multiple scenarios. Finally, the results were evaluated and showed that the quality of the transcriptions declined, as the Word Error Rate (WER) metric increased by about two percent. In the second experiment, unlike the previous one, two out of the three models used by the voice cloner were trained in different scenarios. Then the best trained model combination was selected for audio generation and subsequent training of the transcription model in multiple scenarios, like the previous experiment. Finally, the results were evaluated and showed a decline in the quality of the transcriptions, with the Word Error Rate (WER) metric increasing by about six percent. The quality of the transcriptions generated by the trained transcription models in both experiments decreased in comparison with the pre-trained model. However, this decrease is believed to be due to the low quality of the audio generated by the voice cloner. Therefore, by using the produced framework and a voice cloner capable of producing high-quality audio, the results should be better.
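At a high level, the loop implemented by the framework can be summarized as below. This is an illustrative outline only: the callables are injected placeholders, not the actual scripts of the repository.

```python
import random

def augment_and_train(dataset, clone_fn, train_fn, wer_fn, per_reference=5):
    """Outline of the framework (illustrative; all callables are injected).

    dataset: list of (audio, transcription) pairs.
    clone_fn(audio, text) -> new audio spoken with the reference voice.
    train_fn(pairs)       -> fine-tuned transcription model.
    wer_fn(model_or_none) -> average word error rate on a held-out split.
    """
    augmented = []
    for i, (reference_audio, _) in enumerate(dataset):
        # Reuse transcriptions from *other* utterances, never the reference's own.
        others = [t for j, (_, t) in enumerate(dataset) if j != i]
        for text in random.sample(others, min(per_reference, len(others))):
            augmented.append((clone_fn(reference_audio, text), text))

    baseline_wer = wer_fn(None)          # pre-trained model, before fine-tuning
    model = train_fn(augmented)          # fine-tune on the deepfake audios
    return baseline_wer, wer_fn(model)   # compare WER before vs. after training
```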
The remainder of this paper is structured as follows. The next section presents Related Work. Then Section III provides details about the Theoretical Foundation to facilitate understanding of the research. Following this, Section IV discusses the procedures performed for the execution of the experiments. Then Section V describes the experiments conducted and presents and discusses the obtained results. Finally, Section VI concludes the paper while also highlighting potential directions for future research and exploration. ## II Related Work Due to the need for large datasets, several data augmentation techniques have been developed over the years. Some techniques are used to increase the data for training speech-to-text models, creating audio by modifying existing ones [2, 3, 4] or generating audio using text-to-speech models [5]. The audio speed perturbation technique [4] involves modifying the audio sampling rate through the definition of an alpha value, resulting in the generation of new audios with adjusted sampling rates. This technique's efficacy has been validated through various tests. SpecAugment [2] modifies the audio spectrogram using three methods: compressing/stretching the spectrogram, masking frequency channels, and masking time steps. Combined use of these methods yields good results. Similar to SpecAugment, the technique called SpecSwap [3] swaps frequency blocks and time blocks in the audio spectrogram. While it produces good results, a comparison with SpecAugment is lacking. Zevallos [5] conducted data augmentation through synthetic audio and text generation. The author used the Quechua language, sequence-to-sequence text generation, and text-to-speech audio generation models. The experiments produced good results and improved the transcription quality. This paper explores data augmentation techniques for transcription model training. A framework is developed using a voice cloner model to generate new audios while preserving original dataset characteristics, such as accent. This approach provides an advantage over simpler techniques and conventional text-to-speech models, which introduce small changes/distortions or generate standardized voices without specific characteristics of the dataset. ## III Theoretical Foundation This section provides details of the theoretical foundations necessary for a complete understanding of the research. First, the operation of the voice cloner chosen to be used is explained, then the chosen transcription model is described. ### _Voice Cloning_ The chosen voice cloner for the investigations was Real-Time Voice Cloning, provided by Corentin Jemine on his GitHub [6] and developed during his master's thesis. This cloner was selected due to its ability to generate new audios from a few seconds of a reference audio, without the need for retraining the models, even if the reference audio was not used during its training. Real-Time Voice Cloning is an implementation of the SV2TTS deep learning architecture [7], which consists of three independently trained components. The first component is an encoder trained on a speaker verification task using a dataset without transcriptions. It takes a few seconds of a reference audio as input and outputs a fixed-size embedding vector. The second component is a synthesizer based on Tacotron 2 [8] and is responsible for generating a mel spectrogram based on the input embedding vector and text. The third component is a vocoder, which takes the mel spectrogram as input and generates audio output.
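The way these three components compose at inference time can be sketched as follows. The interfaces shown here are simplified placeholders for the encoder, synthesizer, and vocoder of Real-Time Voice Cloning, not its exact API.

```python
import numpy as np

def clone_utterance(reference_wav: np.ndarray, text: str,
                    encoder, synthesizer, vocoder) -> np.ndarray:
    """Sketch of SV2TTS inference: encoder -> synthesizer -> vocoder.

    encoder(reference_wav)   -> fixed-size speaker embedding (speaker verification net)
    synthesizer(text, embed) -> mel spectrogram (Tacotron 2-style)
    vocoder(mel)             -> waveform (WaveRNN-style)
    """
    speaker_embedding = encoder(reference_wav)   # a few seconds of reference speech
    mel = synthesizer(text, speaker_embedding)   # synthesis conditioned on the embedding
    return vocoder(mel)                          # audio rendered in the cloned voice
```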
The vocoder was implemented based on WaveRNN [9] to enable real-time operation. Figure 1 illustrates the three components with their respective inputs and outputs. In the first component, a digital representation of the voice is created, and then in the second and third components, this representation is used as a reference for generating speech from arbitrary text. ### _Transcriptor_ The speech-to-text transcription model chosen for this work was DeepSpeech [10], which is an open-source speech-to-text model. The architecture of this model consists of a large Recurrent Neural Network (RNN). This model is simple but quite robust to background noise, speaker variation, and reverberation. The DeepSpeech project provides pre-trained models for inference or training through transfer learning in each version. ## IV Methodology This work consists of a qualitative experimental study. The following sections detail the procedures carried out for conducting the experiments and analyzing the results. ### _Dataset_ In order to conduct the investigations, it is necessary to have a dataset that includes pairs of audio recordings with their respective transcriptions. Additionally, these recordings should be in English and spoken by individuals with the same accent. Typically, accents can vary across different regions even within the same language. Therefore, for the experiment execution, datasets recorded in English by Indian speakers were sought, as they have a distinct accent compared to American and British speakers [11]. The chosen dataset for the experiments is NPTEL [12] (NPTEL2020 - Indian English Speech Dataset), which was collected from YouTube videos. All the videos are in English and produced by Indians, most of them educational and with a South Asian accent. The audio from each video was extracted along with its transcription, as all the collected videos had transcriptions available that were manually uploaded by the author. The complete NPTEL dataset consists of 6.2 million audio segments, with an average duration of each segment ranging from 3 to 10 seconds. It is structured in the format of LibriSpeech [13], where the audio files are in WAV format, the transcriptions are in text files, and the metadata is in JSON format. Since the NPTEL dataset is not manually annotated by the authors, it is not certain whether the transcriptions for each video are done manually or with the assistance of a transcription model. To address this issue, the NPTEL authors decided to create a sample of one thousand audios, where all of them are manually transcribed by the authors themselves. This sample is called the Pure-Set. For this reason, this portion of the data was chosen to be used as the dataset for conducting the experiments. Table I shows some important metadata regarding this dataset. ### _Data Preprocessing_ _Dataset Preprocessing:_ To preprocess the dataset, a script2 was created to generate unique and sequential IDs for each file, ensuring consistency across the audios, transcriptions, and metadata. Additionally, the script utilizes the ffmpeg-normalize [14] library to normalize the audios, set them to a frequency of 16000 Hz, and perform post-processing steps such as noise removal and the use of a high-pass filter. Finally, the audios with empty transcriptions are removed from the dataset, and the script provides a report indicating which files were removed upon completion.
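As a rough stand-in for that normalization step, the sketch below resamples a file to 16 kHz mono and peak-normalizes it using librosa and soundfile. The actual script relies on ffmpeg-normalize and additional filtering, so this is only an approximation.

```python
import librosa
import numpy as np
import soundfile as sf

def normalize_audio(in_path: str, out_path: str, target_sr: int = 16000) -> None:
    """Resample to 16 kHz mono and peak-normalize (approximation of the paper's step)."""
    audio, _ = librosa.load(in_path, sr=target_sr, mono=True)
    peak = np.max(np.abs(audio))
    if peak > 0:
        audio = audio / peak * 0.95  # leave a little headroom below full scale
    sf.write(out_path, audio, target_sr)
```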
Footnote 2: [https://github.com/alexandrerf3/data-augmentation-deepfake-audio/blob/main/preprocess_nptel-pure.py](https://github.com/alexandrerf3/data-augmentation-deepfake-audio/blob/main/preprocess_nptel-pure.py) With this script, it is also possible to create subsets from the dataset by specifying the number of subsets and the number of audios in each subset. The audios for each subset are randomly selected without repetition. At the end, all the audios from the dataset are separated into the desired subsets, and text files are generated containing the IDs of the audios for each subset. If any audio is removed during the process due to having an empty transcription, the last subset will have a smaller number of audios. _Data Preprocessing for Cloner Training:_ To train the synthesizer or vocoder models of the voice cloner, additional preprocessing of the data is required. For this purpose, a script3 was created to organize the audios that will be used and place them in the file structure expected by the cloner's training scripts. It takes as input a text file containing the IDs of the audios and copies them, building the structure expected by the cloner. Footnote 3: [https://github.com/alexandrerf3/data-augmentation-deepfake-audio/blob/main/dataset_from_id.py](https://github.com/alexandrerf3/data-augmentation-deepfake-audio/blob/main/dataset_from_id.py)
Fig. 1: Voice Cloner Architecture (Real-Time Voice Cloning)
_Data Preprocessing for Use in the Transcriptor:_ Two scripts were created to preprocess the data used in the inference and training of the transcription model. The first script4 is responsible for generating CSV files in the format expected by DeepSpeech for training purposes. It takes a folder of audios and the number of audios to be separated for validation as input, processes them, and produces the training and validation CSV files. It is expected that the input audio folder contains audios generated by the voice cloner. Therefore, these audios are analyzed and compared with the original audios of their respective transcriptions during the execution of this first script. This comparison is done to discard audios generated with poor quality because, during manual analyses, it was observed that long-duration audios generated by the voice cloner tend to have poor quality compared to the original audios of their transcriptions. The generated audios have short pauses during speech, while the original audios have longer pauses. Additionally, when the voice cloner fails to generate a particular word in an audio, it intermittently tries to generate it, producing noise until reaching the maximum duration set. Therefore, if a generated audio has a longer duration than the original audio, it can be seen as an indication that it was not generated correctly. Two attributes were defined to perform this comparison. The first attribute is called gap_size_percentage and represents the percentage of additional duration that the generated audio must have compared to the original audio in order to be discarded. For example, using a value of 50% for this attribute and considering that the original audio is five seconds long, the generated audio needs to have a duration of 7.5 seconds or more to be discarded. However, during some tests using this attribute, it was noticed that when the original audios were short and the generated audios were slightly longer than them, they were being discarded when they shouldn't be.
For instance, considering a value of 50% for the attribute and an original audio with a duration of two seconds, generated audios with durations of three seconds or more, based on the transcription of that original audio, were being discarded. However, after analysis, it was realized that a difference of just one second between the generated audios and the original audio was discarding audios that didn't have poor quality. In order to mitigate this issue, a second attribute was added to be used during the comparison, called gap_size. It indicates the duration by which the generated audio needs to exceed the original audio in order to be discarded. For example, considering that the original audio is seven seconds long and using a value of five for the attribute, the generated audio needs to be 12 seconds long or longer to be discarded. The generated audio is only discarded when it exceeds the duration of the original audio from its transcription, considering both attributes in the comparison. Therefore, the discarded audios are highly likely to be generated audios with poor quality. Finally, a text file is generated with information about the discarded audios, displaying the attribute values used in the comparison, the discarded audios with their respective durations, the durations of the original audios for each transcription, and, at the end, the total number of discarded audios. After discarding, the audios that will be part of the validation set are randomly selected without repetition, and the remaining audios are assigned to the training set. Subsequently, each transcription undergoes preprocessing, converting the text to lowercase and converting numbers into words; for example, the number 1 is transformed into 'one'. Finally, each validation and training file is created in CSV format following the layout expected by DeepSpeech, including the respective audios and transcriptions. The other script5 operates similarly to the one described earlier. It is responsible for generating CSV files in the format expected by DeepSpeech for audios that have not been generated by the voice cloner. For example, it can be used to generate the desired test file with audios and transcriptions that will be used to test the transcription model. This script performs the same transcription preprocessing as the previous one and creates the CSV file in the desired format. Footnote 5: [https://github.com/alexanderrf3/data-augmentation-deepfake-audio/blob/main/train-deepspeech/create_csv_file.py](https://github.com/alexanderrf3/data-augmentation-deepfake-audio/blob/main/train-deepspeech/create_csv_file.py)
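The duration-based filtering described above can be captured in a few lines, as sketched below. The attribute names follow the text (gap_size_percentage and gap_size); the default values are purely illustrative.

```python
def should_discard(original_duration: float, generated_duration: float,
                   gap_size_percentage: float = 50.0, gap_size: float = 5.0) -> bool:
    """Discard a generated audio only if it is too long by BOTH criteria.

    original_duration / generated_duration: lengths in seconds.
    gap_size_percentage: extra duration, as a percentage of the original, needed to discard.
    gap_size: extra duration, in seconds, needed to discard.
    """
    percentage_limit = original_duration * (1.0 + gap_size_percentage / 100.0)
    absolute_limit = original_duration + gap_size
    return generated_duration >= percentage_limit and generated_duration >= absolute_limit

# Examples from the text: a 7 s original with gap_size=5 discards generated audios of 12 s or more;
# a 2 s original is no longer discarded at 3 s, because the absolute gap is not exceeded.
assert should_discard(7.0, 12.0) is True
assert should_discard(2.0, 3.0) is False
```

Requiring both criteria is what prevents short originals from having their slightly longer generated counterparts discarded unnecessarily.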
### _Voice Cloner Training_ After the preprocessing performed by the script detailed in section IV-B, it is possible to use the preprocessed data for training the models used by the voice cloner. However, the chosen dataset for this work only includes audios and their respective transcriptions. Therefore, as explained in section III-A, it is only possible to train the synthesizer and vocoder models of the voice cloner. To train these models, the scripts available in the Real-Time Voice Cloning repository [6] are used. Additionally, a step-by-step6 guide is provided in the same repository, which instructs on which scripts to use and in what order. For training the synthesizer model, data preprocessing is performed using the scripts with the prefix synthesizer_preprocess. Finally, the synthesizer_train.py script is used for the training itself. Furthermore, for training the vocoder model, the same preprocessing steps as for the synthesizer model are applied, followed by a specific vocoder preprocessing using the vocoder_preprocess.py script. Finally, the training is carried out using the vocoder_train.py script. Footnote 6: [https://github.com/Alecanderrf3/data-augmentation-deepfake-audio/blob/main/voice_cloning_inferences.py](https://github.com/Alecanderrf3/data-augmentation-deepfake-audio/blob/main/voice_cloning_inferences.py) ### _Audios Generation_ For generating audios using the voice cloner, two scripts were created: a main script7 and an auxiliary script8. The main script takes as input a text file containing the IDs of the audios used as reference audios, as well as the maximum number of audios to be generated from each reference audio. Then, for each reference audio, a random selection is made of the maximum number of other reference audios whose transcriptions will be used in the generation of the new audios. Footnote 7: [https://github.com/alexanderrf3/data-augmentation-deepfake-audio/blob/main/voice_cloning_inferences.py](https://github.com/alexanderrf3/data-augmentation-deepfake-audio/blob/main/voice_cloning_inferences.py) For example, considering that the main script receives eight reference audios and a maximum limit of five, five new audios are generated for each of the eight reference audios. The text of these new audios consists of randomly selected transcriptions, without repetition, from the other reference audios, excluding the current one. Figure 2 illustrates a step in this example, where audio 3 is the reference audio and the highlighted transcriptions are the ones that were randomly selected for generating new audios, using the voice from audio 3 as the cloning reference. Therefore, the maximum limit of audios generated from each reference audio should be smaller than the total number of audios, considering that the transcription of the reference audio itself is not used in generating the new audios. With the reference audios and their respective transcriptions that will be used in generating the new audios, the auxiliary script is used for applying the voice cloner. It was created based on the script called demo_cli.py9 from the Real-Time Voice Cloning repository [6]. With this script, it is possible to perform inferences on the three models of the cloner in the correct order, allowing voice cloning. During the process, some audios that would have been generated may be discarded if there is an error or if the synthesizer model has generated a very small mel spectrogram. Footnote 9: [https://github.com/CorentinJ/Real-Time-Voice-Cloning/blob/master/demo_cli.py](https://github.com/CorentinJ/Real-Time-Voice-Cloning/blob/master/demo_cli.py) ### _Training the Transcriptor_ For training the DeepSpeech transcription model, it is necessary to preprocess the data as described in section IV-B. After preprocessing and generating the required CSV files, the training is conducted using the repository10 and pre-trained models11 provided by DeepSpeech. The training commands for the transcriptor are listed in the repository's README12. By default, when training is conducted over multiple epochs, DeepSpeech evaluates the validation set at the end of each epoch and calculates a loss metric, saving the model with the lowest value. Therefore, at the end of training, the model with the lowest loss in all epochs is the one that is saved.
Footnote 10: [https://github.com/mozilla/DeepSpeech](https://github.com/mozilla/DeepSpeech) Footnote 11: [https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3](https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3) Footnote 12: [https://github.com/alexandrerf3/data-augmentation-deepfake-audio/blob/main/README.md](https://github.com/alexandrerf3/data-augmentation-deepfake-audio/blob/main/README.md) ### _Inferences in the Transcriptor_ To perform inferences in the transcriptor, a script13 was developed to make this process easier. This script takes as input the model, the scorer, and a CSV file, formatted as expected by DeepSpeech, containing the audios used for inference and their original transcriptions. Footnote 13: [https://github.com/alexandrerf3/data-augmentation-deepfake-audio/blob/main/deepspeech/inferences_deepspeech.py](https://github.com/alexandrerf3/data-augmentation-deepfake-audio/blob/main/deepspeech/inferences_deepspeech.py) For evaluating the transcriptions produced by the transcriptor, the Word Error Rate [15] (WER) metric was chosen. The WER is frequently employed in the performance evaluation of transcription systems, considering potential instances of word omission, addition, and substitution. Therefore, after the inferences, the original transcriptions and the transcriptions generated by the transcriptor are used to calculate the average WER of all the inferences made. ## V Results and Discussions In this section, we present the experiments conducted, the results obtained, and the discussions regarding them. To do this, we performed the data preprocessing described in Section IV-B to create two experiments using the same dataset. The experiments involve audio generation, training of the transcriptor using the generated audios, and evaluation of the transcriptor before and after training. The second experiment, unlike the first, focuses on training the cloning models to improve the results. ### _Experiment 1_ In this experiment, the dataset is preprocessed and then split into two portions, with 500 and 498 audios, respectively. The reduction in the second portion's size results from the discarding process during preprocessing. As a result, the first portion is used to evaluate the transcriptions generated before and after training, while the second portion is used to generate new audios used to train the transcriptor. Figure 3 illustrates the entire step-by-step process of this experiment. As seen in Figure 3, the second portion is used for generating new audios. In this case, the 498 audios serve as reference audios, and the limit quantity is set to 21, resulting in the generation of 10,458 audios.
Fig. 2: Illustration of a step in the process of generating new audios
Fig. 3: Illustration of the step-by-step performed in Experiment 1
Subsequently, the generated audios are used to train the transcriptor in various scenarios, each consisting of 200 epochs. In each scenario, a different hyperparameter is modified to achieve better training results. Dropout is used with both the default value and a specific value of 0.4. Additionally, in one of the scenarios, the scorer is also incorporated. Portion number 1 is used to perform inferences with the transcriptor to evaluate it. Firstly, inferences are made with the pre-trained model, and the generated transcriptions are used to calculate the WER metric. After each model training, the portion is used again to perform new inferences and calculate a new WER value.
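For completeness, a word-level WER can be computed as a standard edit distance over word sequences, as in the generic sketch below; this is not the exact implementation of the paper's inference script.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def average_wer(pairs):
    """Average WER over (reference, hypothesis) pairs, as done after inference."""
    return sum(word_error_rate(r, h) for r, h in pairs) / len(pairs)
```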
Table II displays the different training scenarios, including variations in the hyperparameters, and the corresponding WER results obtained for each scenario. After fine-tuning the transcription model, the WER result worsened compared to the pre-trained model, despite the variations in hyperparameters. After analyzing the results, it became evident that the generated audios lack satisfactory quality, with some being totally or partially incomprehensible. This factor is most likely a contributor to the observed decline in the achieved results. ### _Experiment 2_ In this experiment, the voice cloner's synthesizer and vocoder models are trained to improve the quality of the audios generated. To do this, the dataset is preprocessed and subsequently partitioned into three portions, with 200, 300, and 498 audios, respectively. Notably, the third portion contains two fewer audios due to data discarded during the preprocessing phase. Therefore, portion number 1 is used for generating new audios, portion number 2 for evaluating the transcriptor before and after training, and portion number 3 is utilized for training the voice cloner models, as illustrated in Figure 4. To perform the training of the voice cloner models, an additional preprocessing step is required, as explained in section IV-C. During this preprocessing, four audios were discarded from the 498 audios in portion number 3, leaving 494 audios to be used for training. Several training runs were conducted on the synthesizer and vocoder models of the voice cloner, using combinations of fine-tuning and retraining. Table III provides details about these trainings, showing the pre-trained models provided by the author as the default and the combinations of training performed. It also indicates the number of steps each model was trained for, highlighting the amount of training for each combination. After training the models in various combinations, it was necessary to assess the quality of the generated audios for each combination. For this purpose, a qualitative analysis is conducted. A sample of 10 audios is selected; each is used as a reference, and nine audios are generated from it, resulting in a total of 90 generated audios. The quality of the audios is evaluated manually and classified into three categories: **poor**, **reasonable**, and **good**. Additionally, a score is calculated for each model combination based on the received classifications, where **poor** corresponds to one point, **reasonable** corresponds to two points, and **good** corresponds to three points. Initially, these analyses are conducted on the training combinations where at least one of the models is retrained. As observed in the visualizations of Figure 5 and Table IV, the model combinations that were retrained and achieved better results are referred to as sys_zero_voc and sys_trained_zero_voc, while the results of the other combinations are extremely poor. In the next analysis, the combinations with the best results in the previous analysis are considered together with the other combinations that did not have their models retrained.
Fig. 4: Illustration of the step-by-step performed in Experiment 2
Fig. 5: Qualitative analysis of the retrained models
Furthermore, during this first analysis, it was observed that the generated audios with a long duration tend to have poor quality. Therefore, the next analysis classifies the audio duration as standard or long, aiming to verify if audios with a long duration indeed tend to be poor.
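The scoring used in this qualitative analysis is straightforward and can be reproduced as below; the label-to-point mapping follows the text, while the sample labels are made up for illustration.

```python
POINTS = {"poor": 1, "reasonable": 2, "good": 3}

def combination_score(labels):
    """Sum the points of the manual quality labels assigned to a combination's audios."""
    return sum(POINTS[label] for label in labels)

# Hypothetical labels for the 9 audios generated from one reference audio.
example_labels = ["good", "reasonable", "poor", "good", "poor",
                  "reasonable", "good", "poor", "reasonable"]
print(combination_score(example_labels))  # 18 points in this made-up example
```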
After the last qualitative analysis of the models, it can be observed in the visualizations of Figure 6 and Table V that the combinations of models that achieved better results are the ones called standard and sys_zero_voc. The combination of models called standard was already used in Experiment 1 (V-A) for audio generation, transcription model training, and analysis of the results. Therefore, in this experiment, the combination of models called sys_zero_voc is used for further investigations. In addition, this last analysis allowed us to verify if long-duration audios tend to have poor quality. Thus, observing Figure 6, it can be affirmed that the majority of audios with long duration do indeed have poor quality. Therefore, the discarding of long-duration audios is valid, and it is done during the preprocessing of the generated audios, before they are used in the transcription model, as described in Section IV-B. The models from the combination named sys_zero_voc are then used in the voice cloner to generate new audios, based on the 200 reference audios from portion number 1 and with a limit quantity set to 52, resulting in a total of 10,400 audios. The limit quantity of 52 is chosen so as to generate approximately the same number of audios as in Experiment 1. The generated audios are then used in the training of the transcription model in different scenarios, varying its hyperparameters as conducted in Experiment 1 (V-A). After training the transcription model on the audios generated by the voice cloner with the new models, the transcriptions significantly worsened, and the WER metric increased by approximately six percent. A probable cause for the decline in results is the quality of the generated audios, which, even after several attempts to train the voice cloner models, continue to have poor quality. One option to improve the results is to change the voice cloner. However, for the investigations and experiments conducted in this work, a voice cloner capable of cloning a voice from a few seconds of a reference audio is required. As a result, Real-Time Voice Cloning [6] was the only option found with freely available code for use. Other options that claimed to have higher quality in voice cloning do not have their code available due to the potential misuse of such technology. Additionally, some sources mention that the code will only be disclosed once reliable detectors for audio generated by deepfake techniques are developed. Furthermore, the authors of the SV2TTS architecture [7], used in Real-Time Voice Cloning, point out that the most efficient and effective way to improve the quality of generated audios is to train the encoder model, as can be observed in the cloner's architecture in Figure 1. However, during the course of this work, it was not possible to train the encoder model because it requires a dataset where speaker information is available for each audio. Unfortunately, the dataset used in this work does not provide such information. Another factor that possibly influences the lack of improvement in the cloner after the training is that the audios in the dataset used in this work are noisy. They are extracted from YouTube14 videos recorded in various environments with different recording equipment. Furthermore, since most of the videos are educational, a significant portion of the speech in the audios contains technical language related to the taught content. Some examples of transcriptions from the audios, observed during the manual analysis, can be seen in Table VII.
Footnote 14: YouTube — www.youtube.com These data, which contain more technical language, are not commonly found in datasets. Therefore, it is highly likely that the pre-trained models of the voice cloner and of the transcription model were not trained with such technical words. ## VI Conclusions and Future Work To conduct the investigations and experiments in this work, using deepfake audio as a data augmentation technique, we sought a dataset in the English language that exclusively featured the Indian accent. This dataset needed to contain pairs of audios with their respective transcriptions. Additionally, it was necessary to find a voice cloner capable of cloning voices from a few seconds of a reference audio. This allows for the augmentation of the chosen dataset. With the augmented dataset in hand, it was necessary to verify whether its utilization in training a transcription model would result in transcriptions of higher quality. For this purpose, the transcription model called DeepSpeech was employed to conduct the investigations. By selecting a subset of the data, inferences were made with the transcription model, and the quality of its generated transcriptions was measured using a metric called WER (Word Error Rate). Subsequently, after training the transcription model using the augmented data, the same subset was used to make new inferences, aiming to assess the quality of the transcriptions produced after training and determine whether there was an improvement in the results or not. With the experiments conducted in this work, no improvements were observed in the quality of the transcriptions generated after training the transcription model. Despite training the transcription model in various scenarios, all of them showed a deterioration in transcription quality. One likely reason for these results is the quality of the audios generated by the voice cloner, as manual analyses of the audios revealed poor quality. Even after training some of the models used by the voice cloner, the quality of the generated audios remained unsatisfactory. Therefore, the audios generated with poor quality may be hindering the learning of the transcription model. In an attempt to achieve better results, future work could focus on improving the quality of the generated audios. For this purpose, one can seek better training of the voice cloner models by making changes to the hyperparameters or architectures of the synthesizer and vocoder. Additionally, it would be beneficial to find a dataset in the English language that specifically includes Indian accents and provides speaker identification. This would allow for the training of the encoder model of the voice cloner, thereby improving its performance. Additionally, as voice cloning models are constantly evolving, it is possible to use the framework developed throughout this work in conjunction with a new voice cloning model. This new voice cloning model should be suitable for conducting the experiments, and if it has better audio generation quality, it will likely yield better results. The dataset explored in this work has some characteristics that make it challenging to generate audios and transcriptions, such as background noise and technical language. One possible improvement for conducting the experiments is to find or create a dataset with a larger quantity of audios, with less noise and less technical language.
2309.06687
Self-Refined Large Language Model as Automated Reward Function Designer for Deep Reinforcement Learning in Robotics
Although Deep Reinforcement Learning (DRL) has achieved notable success in numerous robotic applications, designing a high-performing reward function remains a challenging task that often requires substantial manual input. Recently, Large Language Models (LLMs) have been extensively adopted to address tasks demanding in-depth common-sense knowledge, such as reasoning and planning. Recognizing that reward function design is also inherently linked to such knowledge, LLM offers a promising potential in this context. Motivated by this, we propose in this work a novel LLM framework with a self-refinement mechanism for automated reward function design. The framework commences with the LLM formulating an initial reward function based on natural language inputs. Then, the performance of the reward function is assessed, and the results are presented back to the LLM for guiding its self-refinement process. We examine the performance of our proposed framework through a variety of continuous robotic control tasks across three diverse robotic systems. The results indicate that our LLM-designed reward functions are able to rival or even surpass manually designed reward functions, highlighting the efficacy and applicability of our approach.
Jiayang Song, Zhehua Zhou, Jiawei Liu, Chunrong Fang, Zhan Shu, Lei Ma
2023-09-13T02:56:56Z
http://arxiv.org/abs/2309.06687v2
Self-Refined Large Language Model as Automated Reward Function Designer for Deep Reinforcement Learning in Robotics ###### Abstract Although Deep Reinforcement Learning (DRL) has achieved notable success in numerous robotic applications, designing a high-performing reward function remains a challenging task that often requires substantial manual input. Recently, Large Language Models (LLMs) have been extensively adopted to address tasks demanding in-depth common-sense knowledge, such as reasoning and planning. Recognizing that reward function design is also inherently linked to such knowledge, LLM offers a promising potential in this context. Motivated by this, we propose in this work a novel LLM framework with a self-refinement mechanism for automated reward function design. The framework commences with the LLM formulating an initial reward function based on natural language inputs. Then, the performance of the reward function is assessed, and the results are presented back to the LLM for guiding its self-refinement process. We examine the performance of our proposed framework through a variety of continuous robotic control tasks across three diverse robotic systems. The results indicate that our LLM-designed reward functions are able to rival or even surpass manually designed reward functions, highlighting the efficacy and applicability of our approach. All codes and results relevant to this paper are available at [https://github.com/zhehuazhou/LLM_Reward_Design](https://github.com/zhehuazhou/LLM_Reward_Design). ## 1 Introduction Over the past years, substantial progress has been achieved in leveraging Deep Reinforcement Learning (DRL) to tackle a broad spectrum of complex challenges across diverse robotic domains, such as manipulation Nguyen and La (2019), navigation Zhu and Zhang (2021), locomotion Yue (2020), and aerial robotics Azar et al. (2021). However, despite these advancements, training high-performing DRL agents remains a challenging task Andrychowicz et al. (2020). A principal contributing factor to this complexity is the inherent difficulty in designing an effective reward function, which is vital and fundamental to DRL approaches Sutton et al. (1998). Conventional methods of reward function design predominantly rely on meticulous manual crafting Eschmann (2021). Recent research has introduced Automated Reinforcement Learning (AutoRL) approaches Parker-Holder et al. (2022), aiming to automate the hyperparameters and reward function tuning in DRL. These approaches commence with a predefined, parameterized reward function and subsequently fine-tune its parameters to identify an optimal reward function Chiang et al. (2019); Faust et al. (2019). However, instead of developing the reward function from scratch, AutoRL remains dependent on an initial parameterized function provided by human experts. The construction of such a function often demands domain-specific expertise and a significant investment of time and effort. In recent research, Large Language Models (LLMs) have been increasingly utilized for tasks that demand common-sense reasoning and extensive world knowledge Bommasani et al. (2021), spanning domains like natural language processing Brown et al. (2020), task planning Ahn et al. (2022), and reasoning Zelikman et al. (2022). The compelling outcomes from these studies reveal the ability of LLMs to emulate human cognitive processes and integrate a substantial degree of common-sense knowledge Petroni et al. (2019); Davison et al. (2019). 
Given that designing reward functions often also depends on such knowledge, researchers are currently exploring the potential of LLMs as reward function designers for DRL. Leveraging natural language instructions as input, LLMs are able to formulate effective reward functions for simple tasks in game environments with discrete action spaces Kwon et al. (2023). Moreover, their rich internalized knowledge about the world also aids in comprehending both user preferences and task requirements. However, as LLMs are essentially engineered to generate word sequences that align with human-like context, their efficacy and reliability in reward function design remain uncertain, especially for robotic control tasks that involve continuous action spaces. In this work, we investigate the possibility of employing LLM as an automated reward function designer for DRL-driven continuous robotic control tasks. Motivated by recent studies that demonstrate the capability of LLM for self-refinement Madan et al. (2023); Huang et al. (2022), we propose a novel self-refined LLM framework for reward function design. The framework consists of three steps (see Fig. 1): 1) _Initial design_, where the LLM accepts a natural language instruction and devises an initial reward function; 2) _Evaluation_, where the system behavior resulting from the training process using the designed reward function is assessed; 3) _Self-refinement loop_, where the evaluation feedback is provided to the LLM, guiding it to iteratively refine the reward function. To optimize results, the evaluation and self-refinement steps are repeated until either a predefined maximum number of iterations is reached, or the evaluation suggests satisfactory performance. We examine the performance of the proposed self-refined LLM framework across nine different tasks distributed among three diverse robotic systems. The results show that our approach is capable of generating reward functions that not only induce desired robotic behaviors but also rival or even exceed meticulously hand-crafted reward functions.
Figure 1: Our proposed self-refine LLM framework for reward function design. It consists of three steps: _initial design_, _evaluation_, and _self-refinement loop_. A quadruped robot forward running task is used as an example here. A complete list of the prompts used in this work can be found in the appendix.
The contributions of this paper are threefold: * We explore the ability of LLM to design reward functions for DRL controllers. Diverging from many studies that leverage few-shot in-context learning when prompting the LLM, we employ the LLM as a zero-shot reward function designer. * We incorporate a self-refinement mechanism into the reward function design process to enhance its outcomes. * We highlight the effectiveness and applicability of our proposed approach through a variety of continuous robotic control tasks across diverse robotic systems. ## 2 Related Work **Reward Function Design with AutoRL** AutoRL extends Automated Machine Learning (AutoML) He et al. (2021) principles to address reinforcement learning challenges. Its primary focus is to automate the fine-tuning of both the architectures of neural networks and the hyperparameters of learning algorithms Parker-Holder et al. (2022). For reward function design, AutoRL utilizes evolutionary algorithms Real et al. (2019) to adjust the parameters of a predefined parameterized reward function, aiming to identify an optimal reward function. In Chiang et al.
(2019), AutoRL is applied to modify the reward function for a navigation problem, while Faust et al. (2019) further expands the use of AutoRL in reward function optimization to multiple reinforcement learning benchmarks simulated in Mujoco Todorov et al. (2012). In essence, AutoRL can be considered a reward shaping technique Ng et al. (1999). However, due to its dependency on an initially hand-crafted parameterized reward function, AutoRL lacks the ability to formulate a reward function entirely from scratch. **Reward Function Design with LLM** Benefiting from its pre-trained common-sense knowledge, LLM offers the potential to alleviate the human effort required in formulating reward functions. Recent studies have revealed the capability of LLM in directing reward shaping approaches Mirchandani et al. (2021); Carta et al. (2022). In Yu et al. (2023), instead of creating a reward function for DRL, LLM is employed to determine objective functions for predefined model predictive controllers. State-of-the-art research demonstrates that for simpler tasks, such as normal-form games Costa-Gomes et al. (2001) with discrete action spaces, LLM can serve directly as a proxy reward function Kwon et al. (2023). Through processing natural language instructions, LLM seamlessly integrates task requirements and user preferences into reward functions Hu and Sadigh (2023). However, whether LLM is able to independently design a reward function from scratch for continuous robotic control tasks remains an open research question. ## 3 Preliminary **DRL Setup** We model the DRL problem as a Partially Observable Markov Decision Process (POMDP) Monahan (1982) \(\mathcal{M}(S,O,A,T,R,\gamma)\) with continuous state and action spaces. Given a state \(s\in S\), a DRL agent determines an action \(a\in A\). The system then transitions to a new state \(s^{\prime}\) according to the transition distribution function \(T(s^{\prime}|s,a)\), which results in an observation \(o\in O\). For a given reward function \(R:S\times A\rightarrow\mathbb{R}\), the training process of DRL aims to find a policy \(\hat{\pi}_{R}\) that maximizes the expected cumulative discounted reward \[\hat{\pi}_{R}=\arg\max_{\pi}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})\right], \tag{1}\] where \(\gamma\) is the discount factor. **Reward Function Design** While the reward function \(R\) provides immediate feedback to the DRL agent, it often lacks human interpretability due to the inherent difficulty in directly associating numerical values with system behaviors. To analyze the performance of a policy, humans typically evaluate system trajectories \(\mathcal{T}\) generated by the trained DRL agent through a performance metric \(G(\mathcal{T})\). For example, in the humanoid walking task, the performance metric \(G(\mathcal{T})\) could be the maximum distance the humanoid can travel without falling. Therefore, the goal in designing a reward function is to determine an optimal reward function \(\hat{R}\) that, after the training process, results in a policy \(\hat{\pi}_{\hat{R}}\) that maximizes the performance metric \(G(\mathcal{T})\). ## 4 Self-Refined LLM for Reward Function Design In this section, we introduce a self-refined LLM framework for automated reward function design. It contains three steps: _initial design_, _evaluation_, and _self-refinement loop_. ### Initial Design We first employ the LLM to formulate an initial reward function based on natural language input.
To enhance the LLM's comprehension of the robotic control task, we segment the natural language prompt into four parts (see Fig. 1): * _Environment description_: we first describe the robotic system we are working with, e.g., a quadruped robot or a 7-DOF manipulator, and provide details regarding the environmental setup; * _Task description_: we then outline the control objectives of the task, along with any existing specific task requirements; * _Observable states_: we also provide a list of the observable states that are available for the reward function design; * _Rules_: finally, we explain the rules that the LLM should follow when designing the reward function. Specifically, we emphasize two rules: first, the reward function should be based solely on the observable states; second, the reward function should exclude elements that are challenging to quantify, such as specific target postures of the quadruped robot. Similar to many hand-crafted reward functions, the initial reward function formulated by the LLM is often given as a weighted combination of multiple individual reward components, i.e., we have \(R=\sum_{i=0}^{n}w_{i}r_{i}\). However, the initial weights \(w_{i}\) are usually unreliable and necessitate adjustments. We address this challenge by using our proposed self-refinement process. It is worth mentioning that while many studies leverage few-shot in-context learning Brown et al. (2020) to guide the LLM in generating responses in a desired manner, our approach utilizes the LLM as a zero-shot reward function designer, excluding examples in our prompts. The major reason is that, due to the inherent task-specificity of reward function design, finding universally applicable examples for a diverse array of robotic control tasks proves challenging. To ensure the performance of the designed reward function, we employ the subsequent evaluation and self-refinement processes. A complete list of the prompts used in our experiments is available in Appendix A. ### Evaluation After the LLM determines the reward function \(R\), we assess its efficacy via an evaluation process (see Fig. 1). Aiming to minimize human intervention, the evaluation is structured as an automated procedure. We begin by initiating a training process to obtain a trained optimal DRL policy \(\hat{\pi}_{R}\). Subsequently, we sample \(n_{t}\) trajectories \(\mathcal{T}_{i},i=1,\ldots,n_{t}\) of this trained policy \(\hat{\pi}_{R}\), each originating from a distinct, randomly selected initial state. Performance of the reward function \(R\) is then evaluated from the following three aspects: * _Training process:_ we first summarize the training process for the policy \(\hat{\pi}_{R}\) to evaluate the immediate effectiveness of the designed reward function \(R\). This summary includes information on whether the reward has converged, the average reward per training episode, and the average number of timesteps in each episode. * _Objective metrics_: we then represent the overarching performance metric \(G(\mathcal{T})\) with multiple individual task-specific objective metrics \(g_{k}(\mathcal{T}),k=1,\ldots,n_{g}\). Each objective metric \(g_{k}(\mathcal{T})\) addresses an aspect of the task requirements. For instance, in the quadruped robot's straight-forward walking task, two objective metrics could be employed: one assessing the forward distance the robot travels without toppling and another quantifying any unintended lateral movements. 
We then compute the average values of these objective metrics \(g_{k}(\mathcal{T})\) over all sampled trajectories. * _Success rate in task accomplishments_: in addition to the task-specific objective metrics, we also introduce the success rate \(\mathrm{SR}\) of the trained policy \(\hat{\pi}_{R}\) in accomplishing the designated control task as a general and task-agnostic criterion. For each control task, we define a success condition using Signal Temporal Logic (STL) Donzé et al. (2013) to capture the core objective of the task. For example, the success condition for a quadruped robot walking task could be that the forward distance travelled without falling should exceed a predetermined threshold. A trajectory meeting the success condition is considered a success. The success rate \(\mathrm{SR}\) is determined across all sampled trajectories. As a conclusion of the evaluation, we finally categorize the overall performance of the designed reward function \(R\) as either '_good_' or '_bad_'. Given that the training process and objective metrics are intrinsically task-dependent, it is challenging to establish a universally applicable standard to assess different tasks based on these two criteria. Therefore, we rely solely on the success rate \(\mathrm{SR}\) for the overall assessment and reserve other details as guidance for the subsequent self-refinement process. If the success rate \(\mathrm{SR}\) exceeds a predefined threshold, the performance of the reward function is considered '_good_'. Otherwise, we label it as '_bad_' and initiate a self-refinement process to improve. ### Self-Refinement Loop To enhance the designed reward function, we employ a self-refinement process. It starts with the construction of a feedback prompt for the LLM based on evaluation results (see Fig. 1). To offer the LLM a clear and immediate comprehension of the performance of the reward function, we position the overall assessment at the beginning of the prompt, followed by detailed information of the training process, objective metrics, and success rate. Guided by this feedback and previous feedback history, the LLM attempts to develop an updated reward function. Details about all feedback prompts used in our experiments are presented in Appendix B. For finding an optimal reward function, we repeat the evaluation and self-refinement processes in a loop until either a predefined maximum number of iterations is reached, or the evaluation suggests '_good_' performance. The reward function, resulting from the self-refinement loop, is accepted as the final designed reward function.
Figure 2: Continuous robotic control tasks with three diverse robotic systems: robotic manipulator (Franka Emika Panda Emika (2023)), quadruped robot (Anymal AnyRobotics (2023)) and quadcopter (Crazyflie BitCraze (2023)). Simulations are conducted in NVIDIA Isaac Sim NVIDIA (2021).
## 5 Experimental Results ### Experimental Setup We evaluate the performance of our proposed framework in designing reward functions through nine distinct continuous robotic control tasks across three diverse robotic systems (see Fig. 2). Specifically, we employ the following tasks and systems that are also frequently referenced as benchmark challenges in DRL studies James et al. (2020); Zhu et al. (2020); Zhou et al. (2023): * _Robotic manipulator (Franka Emika Panda Emika (2023))_: 1. _Ball catching_: the manipulator needs to catch a ball that is thrown to it using a tool (Fig. 2a); 2.
_Ball balancing_: the manipulator should keep a ball, which falls from above, centered on a tray held by its end-effector (Fig. 2b); 3. _Ball pushing_: the manipulator is required to push a ball towards a target hole on a table (Fig. 2c); * _Quadruped robot (Anymal AnyRobotics (2023))_: 4. _Velocity tracking_: the robot needs to walk at a specified velocity without toppling over (Fig. 2d); 5. _Running_: the robot should run straight forward as fast as possible without falling; 6. _Walking to target_: the robot has to walk to a predetermined position; * _Quadcopter (Crazyflie BitCraze (2023))_: 7. _Hovering_: the quadcopter should fly to and hover at a designated position (Fig. 2e); 8. _Flying through a wind field_: the quadcopter needs to reach a target while flying through a wind field; 9. _Velocity tracking_: the quadcopter should maintain a specified velocity during flight; For each task, we compare the reward functions obtained by using three different methods: 1) \(R_{\rm Initial}\), which is the LLM's initial design of the reward function based on natural language input; 2) \(R_{\rm Refined}\), which is the final reward function formulated by the proposed self-refined LLM framework; 3) \(R_{\rm Manual}\), which is a manually designed reward function sourced from existing literature or benchmarks. In the evaluation process, we utilize Proximal Policy Optimization (PPO) Schulman et al. (2017) as the DRL algorithm to find the optimal policy \(\hat{\pi}_{R}\) for each reward function. To concentrate on analyzing the reward function, we use the same learning parameters and neural network architectures in the training processes for all three reward functions. These parameters are derived by fine-tuning based on the manually designed reward function to achieve its optimal performance. For each trained policy \(\hat{\pi}_{R}\), we sample \(n_{t}=100\) trajectories and compute the corresponding objective metrics and success rates. The success rate threshold for the overall assessment is set at \(95\%\), and the maximum number of self-refinement iterations is selected as 5. We simulate the robotic control tasks with NVIDIA Isaac Sim NVIDIA (2021, 2023) and employ GPT4 as the underlying LLM. All experiments are conducted on a laptop equipped with an Intel® Core™ i7-10870H CPU and an NVIDIA RTX 3080 Max-Q GPU with 16 GB VRAM. Further details regarding the experimental setup are given in Appendix C. ### Reward Function and Objective Metrics We use the quadruped robot forward running task as an example to illustrate the reward function design process via our proposed self-refined LLM framework. The observable states for this task are: the global positions of the robot's base \(p_{x},p_{y},p_{z}\); the linear velocities of the robot \(v_{x},v_{y},v_{z}\); the base rotations relative to the world frame \(\theta_{\rm roll},\theta_{\rm pitch},\theta_{\rm yaw}\); the angular velocities \(\dot{\theta}_{\rm roll},\dot{\theta}_{\rm pitch},\dot{\theta}_{\rm yaw}\); and the current action command for the 12 joints \(a_{i},i=1,\ldots,12\). The LLM needs to determine which of these observable states should be incorporated into the reward function.
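To illustrate what a weighted-sum reward over these observable states looks like, the sketch below shows a reward of the general form the LLM is asked to produce for the forward running task. The specific terms and weights are invented for illustration only and are not the LLM's actual output (the actual iterations are shown in Fig. 3).

```python
import numpy as np

def example_running_reward(v_x, p_y, p_z, actions, w=(1.0, 0.5, 0.2, 0.05)):
    """Illustrative weighted-sum reward R = sum_i w_i * r_i for forward running.

    v_x: forward velocity, p_y: lateral position, p_z: base height,
    actions: the 12 joint action commands. Weights w are placeholders.
    """
    w_vel, w_lat, w_bal, w_act = w
    r_velocity = v_x                          # encourage forward speed
    r_lateral = -abs(p_y)                     # penalize lateral drift
    r_balance = 1.0 if p_z >= 0.5 else 0.0    # reward staying upright
    r_action = -np.sum(np.abs(actions))       # discourage aggressive actions
    return w_vel * r_velocity + w_lat * r_lateral + w_bal * r_balance + w_act * r_action
```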
The STL expression representing the success condition is given as \[\varphi\equiv\square_{[0.8,5]}(v_{x}\geq 2)\wedge\square_{[0,5]}((p_{y}\leq 2) \wedge(p_{z}\geq 0.5)), \tag{2}\] which indicates that following an initial acceleration phase lasting 0.8 seconds, the robot must always maintain a speed of at least \(v_{x}=2\) m/s until the simulation stops at \(t=5\) seconds. Meanwhile, the robot must restrict lateral deviation to under \(2\) m and cannot fall over. The objective metrics used in the evaluation process are: the average linear velocities \(g_{v_{x}},g_{v_{y}},g_{v_{z}}\); the average \(z\)-position \(g_{p_{z}}\); the average normalized action values \(g_{\mathrm{action}}\); and the average angular velocities \(g_{\theta_{\mathrm{roll}}},g_{\theta_{\mathrm{pitch}}},g_{\theta_{\mathrm{ yaw}}}\). For comparison, we employ a manually designed reward function based on NVIDIA (2023), which is given as \[R=1.5v_{x}+0.2r_{\mathrm{bal}}-0.5\frac{|p_{y}|}{2}-0.1|\dot{\theta}_{\mathrm{ yaw}}|, \tag{3}\] with \(r_{\mathrm{bal}}=1\) if \(p_{z}\geq 0.5\) and \(r_{\mathrm{bal}}=0\) otherwise. The LLM requires a total of two self-refinement iterations to identify a satisfactory reward function. Fig. 3 presents the reward functions in different iterations alongside their respective evaluation outcomes. Corresponding system behaviors are given in Fig. 4, which also illustrates the behavior produced by the manually designed reward function. Figure 4: System behaviors corresponding to reward functions in different self-refinement iterations, as well as the manually designed reward function. The time interval between each displayed point is set to 1s. Figure 3: Reward functions in different self-refinement iterations for the quadruped robot forward running task. Similar to a manual refinement process, the LLM adjusts the reward function by altering its weights or parameters or by modifying the structure of its components. Initially, the reward function contains four components, emphasizing forward velocity and balance. However, this initial design proves insufficient, as the robot overly prioritizes maintaining balance over running forward, leading to a low forward velocity and success rate (see Iteration 0 in Fig. 4). Responding to this feedback, our proposed framework then initiates a self-refinement iteration. It increases the weight attributed to forward velocity \(v_{x}\) and adjusts the penalty associated with lateral deviation. Furthermore, it introduces a penalty for large actions in reaction to the action metric \(g_{\rm action}\) contained in the feedback. This refinement enhances performance, elevating the success rate to 90%. However, the evaluation indicates that the robot still takes aggressive actions to achieve high velocity (e.g., \(t=3\)s of Iteration 1 in Fig. 4). Hence, in its second self-refinement iteration, the LLM increases the penalties for excessive actions and lateral deviations. The resulting reward function leads to a behavior that closely aligns with the behavior obtained from a manually designed reward function (see Iteration 2 and Manual in Fig. 4). The manually designed reward function yields the following evaluation metrics: \(\mathrm{SR}=95\%,g_{v_{x}}=3.748,g_{v_{y}}=-0.105,g_{v_{z}}=-0.223,g_{p_{z}}=0.609,g_{\rm action}=2.673,g_{\theta_{\rm roll}}=-0.071,g_{\theta_{\rm pitch}}=-0.0 19,g_{\theta_{\rm yaw}}=0.041\), which are also similar to those of the final reward function formulated by the LLM. 
This justifies the efficacy of the proposed self-refined LLM framework in designing reward functions for continuous robotic control tasks. Other control tasks exhibit a similar pattern as the quadruped robot forward running task. See Appendix D and the supplementary video for details on these tasks. ### Success Rates We further evaluate the success rates across all the tasks under consideration to assess the generalizability and applicability of our proposed approach. The results are summarized in Table 1. Detailed information regarding the employed success conditions for each task is presented in Appendix C. It can be observed that the initial reward function demonstrates a binary level of performance. For tasks with straightforward objectives, such as ball catching or balancing, the LLM is able to devise a high-performing reward function on its first attempt. Conversely, for more complex tasks that involve multiple objectives, e.g., ensuring a quadruped robot maintains a set velocity while walking straight and keeping balance, the initial reward function often registers a success rate of \(0\%\). In such cases, the LLM predominantly relies on feedback to understand the implications of its design, necessitating multiple self-refinement iterations. By leveraging the evaluation results, the LLM is capable of effectively revising its reward function design. As a result, it achieves success rates that match or even surpass those of manually designed reward functions for all examined tasks. However, the performance is affected by the intrinsic complexities of the task. For tasks demanding intricate reward function components, e.g., the quadruped robot walking to target task, the success rate diminishes, indicating a need for further self-refinement iterations or even detailed human feedback. Nevertheless, even in these challenging scenarios, our self-refined LLM framework consistently \begin{table} \begin{tabular}{c c|c c c|c} \hline \hline \multicolumn{2}{c}{\multirow{2}{*}{Robotic System}} & \multicolumn{4}{c}{Success Rate \(\mathrm{SR}\)} \\ & \multicolumn{1}{c|}{Task} & \(R_{\rm Initial}\) & \(R_{\rm Refined}\) & \(R_{\rm Manual}\) & Iter. \\ \hline \multirow{3}{*}{Manipulator} & Ball Catching & 100\(\%\) & 100\(\%\) & 100\(\%\) & 0 \\ & Ball Balancing & 100\(\%\) & 100\(\%\) & 98\(\%\) & 0 \\ & Ball Pushing & 0\(\%\) & 93\(\%\) & 95\(\%\) & 5 \\ \hline \multirow{3}{*}{Quadruped} & Velocity Tracking & 0\(\%\) & 96\(\%\) & 92\(\%\) & 3 \\ & Running & 10\(\%\) & 98\(\%\) & 95\(\%\) & 2 \\ & Walking to Target & 0\(\%\) & 85\(\%\) & 80\(\%\) & 5 \\ \hline \multirow{3}{*}{Quadcopter} & Hovering & 0\(\%\) & 98\(\%\) & 92\(\%\) & 2 \\ & Wind Field & 0\(\%\) & 100\(\%\) & 100\(\%\) & 4 \\ \cline{1-1} & Velocity Tracking & 0\(\%\) & 99\(\%\) & 91\(\%\) & 3 \\ \hline \hline \end{tabular} \end{table} Table 1: Success rates of different reward functions and the number of self-refinement iterations (Iter.) used for \(R_{\rm Refined}\). identifies reward functions that outperform manual designs. This illustrates the broad applicability of our approach across a wide range of continuous robotic control tasks. ## 6 Discussion **Learning Parameters and AutoRL** In our experiments, we observe that during the self-refinement process, the LLM often has to adjust the weights of reward components. While the LLM is able to determine a final reward function that yields desired system behavior, the weights it assigns are not guaranteed to be optimal. 
In other words, there might exist configurations that produce even better outcomes. One potential improvement would be to integrate the LLM with AutoRL. Once the LLM formulates the reward function, AutoRL could optimize its parameters using search-based approaches. In such a case, the LLM serves as an initial designer, offering a parameterized reward function to AutoRL. This strategy can further be extended to fine-tune learning parameters and neural network architectures. By identifying optimal parameters before each self-refinement iteration, the LLM can then focus on adjusting the structural components of the reward function. However, adopting this approach could greatly prolong the reward function design process. **Fine-tuned LLM** Recent studies indicate that fine-tuning the LLM for specific tasks can greatly enhance its performance Houlsby et al. (2019); Li and Liang (2021); Ouyang et al. (2022). Such a technique could also improve our approach. The complexity associated with comprehending control task requirements and formulating appropriate reward functions could potentially be alleviated by deploying an LLM specifically fine-tuned for reward function design, as opposed to a general-purpose model. Nonetheless, fine-tuning an LLM typically demands substantial resources, and garnering enough training data for reward function design might also be challenging. **Limitations** One major limitation of our approach is its inability to address nuanced aspects of desired system behaviors that are difficult to quantify through the automated evaluation process, such as the gait of a quadruped robot. Addressing this challenge often necessitates human intervention. By offering detailed human feedback, the LLM is capable of fine-tuning its outcome accordingly, as illustrated in Yu et al. (2023). Another limitation is the reliance of the LLM on its pre-trained common-sense knowledge. For tasks that are highly specialized or not represented in its training data, the LLM may struggle to devise an appropriate reward function. Under such circumstances, enhancing the natural language input prompt with more details about the specific robotic system and control task becomes essential. ## 7 Conclusion In this paper, we introduce a self-refined LLM framework as an automated reward function designer for DRL in continuous robotic control tasks. The framework operates in three steps: First, the LLM devises an initial reward function by using a natural language input. Second, an automated evaluation process is initiated to assess the performance of the designed reward function. Third, based on the evaluation results, a feedback prompt is provided to the LLM, guiding its self-refinement process of the reward function. We evaluate our proposed framework across nine diverse robotic control tasks, distributed among three distinct robotic systems. The results indicate that our approach is able to generate reward functions that are on par with, or even superior to, those manually designed ones. For future work, we plan to integrate the LLM with AutoRL techniques, enabling not only the reward function, but also all learning parameters to be designed autonomously.
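As a compact recap of the three-step procedure summarized above, the following is a minimal sketch of the outer design loop; the LLM interface, the PPO training routine, and the evaluation function are placeholder callables for illustration, not the released implementation.

```python
def design_reward(llm, task_description, train_with_ppo, evaluate,
                  sr_threshold=0.95, max_iters=5):
    """Sketch of the self-refined reward design loop.
    Step 1: the LLM drafts a reward function from a natural language input.
    Step 2: a policy is trained with PPO and evaluated against objective
            metrics and an STL-based success rate.
    Step 3: the evaluation results are fed back to the LLM for refinement.
    All arguments are illustrative placeholders."""
    feedback_history = []
    reward_fn = llm.initial_design(task_description)
    for _ in range(max_iters):
        policy = train_with_ppo(reward_fn)
        metrics, success_rate = evaluate(policy)
        if success_rate >= sr_threshold:            # overall assessment: 'good'
            break
        feedback = {"assessment": "bad",
                    "objective_metrics": metrics,
                    "success_rate": success_rate}
        feedback_history.append(feedback)
        reward_fn = llm.refine(reward_fn, feedback, feedback_history)
    return reward_fn
```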
2309.08872
PDFTriage: Question Answering over Long, Structured Documents
Large Language Models (LLMs) have issues with document question answering (QA) in situations where the document is unable to fit in the small context length of an LLM. To overcome this issue, most existing works focus on retrieving the relevant context from the document, representing them as plain text. However, documents such as PDFs, web pages, and presentations are naturally structured with different pages, tables, sections, and so on. Representing such structured documents as plain text is incongruous with the user's mental model of these documents with rich structure. When a system has to query the document for context, this incongruity is brought to the fore, and seemingly trivial questions can trip up the QA system. To bridge this fundamental gap in handling structured documents, we propose an approach called PDFTriage that enables models to retrieve the context based on either structure or content. Our experiments demonstrate the effectiveness of the proposed PDFTriage-augmented models across several classes of questions where existing retrieval-augmented LLMs fail. To facilitate further research on this fundamental problem, we release our benchmark dataset consisting of 900+ human-generated questions over 80 structured documents from 10 different categories of question types for document QA. Our code and datasets will be released soon on Github.
Jon Saad-Falcon, Joe Barrow, Alexa Siu, Ani Nenkova, David Seunghyun Yoon, Ryan A. Rossi, Franck Dernoncourt
2023-09-16T04:29:05Z
http://arxiv.org/abs/2309.08872v2
# PDFTriage: Question Answering over Long, Structured Documents ###### Abstract Large Language Models (LLMs) have issues with document question answering (QA) in situations where the document is unable to fit in the small context length of an LLM. To overcome this issue, most existing works focus on retrieving the relevant context from the document, representing them as plain text. However, documents such as PDFs, web pages, and presentations are naturally structured with different pages, tables, sections, and so on. Representing such structured documents as plain text is incongruuous with the user's mental model of these documents with rich structure. When a system has to query the document for context, this incongruity is brought to the fore, and seemingly trivial questions can trip up the QA system. To bridge this fundamental gap in handling structured documents, we propose an approach called _PDFTriage_ that enables models to retrieve the context based on either structure or content. Our experiments demonstrate the effectiveness of the proposed _PDFTriage-augmented_ models across several classes of questions where existing retrieval-augmented LLMs fail. To facilitate further research on this fundamental problem, we release our benchmark dataset consisting of 900+ human-generated questions over 80 structured documents from 10 different categories of question types for document QA. Our code and datasets will be released soon on Github. ## 1 Introduction When a document does not fit in the limited context window of an LLM, different strategies can be deployed to fetch relevant context. Current approaches often rely on a pre-retrieval step to fetch the relevant context from documents Pereira et al. (2023); Gao et al. (2022). These pre-retrieval steps tend to represent the document as plain text chunks, sharing some similarity with the user query and potentially containing the answer. However, many document types have rich structure, such as web pages, PDFs, presentations, and so on. For these structured documents, representing the document as plain text is often incongruuous with the user's mental model of a _structured document_. This can lead to questions that, to users, may be trivially answerable, but fail with common/current approaches to document QA using LLMs. For instance, consider the following two questions: **Q1** "Can you summarize the key takeaways from pages 5-7?" **Q2** "What year _[in table 3]_ has the maximum revenue?" In the first question, document structure is _explicitly referenced_ ("pages 5-7"). In the second question, document structure is _implicitly referenced_ ("_in table 3_"). In both cases, a representation of document structure is necessary to identify the salient context and answer the question. Considering the document as plain text discards the relevant structure needed to answer these questions. We propose addressing this simplification of documents by allowing models to retrieve the context based on either structure or content. Our approach, which we refer to as _PDFTriage_, gives models access to metadata about the structure of the document. We leverage document structure by augmenting prompts with both document structure metadata and a set of model-callable retrieval functions over various types of structure. For example, we introduce the fetch_pages(pages: list[int]) function, which allows the model to fetch a list of pages. 
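As an illustration, a minimal sketch of such a model-callable function over a toy metadata structure is given below; the dictionary layout and page contents are assumptions for exposition, not the exact schema produced by the extraction pipeline.

```python
from typing import Dict, List

# Toy structured metadata for one document (illustrative layout only).
document: Dict[int, Dict] = {
    5: {"text": "Key takeaways ...", "section": "Results"},
    6: {"text": "Further findings ...", "section": "Results"},
    7: {"text": "Summary of findings ...", "section": "Discussion"},
}

def fetch_pages(pages: List[int]) -> str:
    """Model-callable retrieval function: return the concatenated text of the
    requested pages so the LLM can answer from exactly that context."""
    return "\n\n".join(document[p]["text"] for p in pages if p in document)

# For Q1 above, the model could call fetch_pages([5, 6, 7]) and then summarize.
print(fetch_pages([5, 6, 7]))
```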
We show that by providing the structure and the ability to issue queries over that structure, PDFTriage-augmented models can reliably answer several classes of questions that plain retrieval-augmented LLMs could not. In order to evaluate our approach, we construct a dataset of roughly 900 human-written questions over 90 documents, representing 10 different categories of questions that users might ask. Those categories include "document structure questions", "table reasoning questions", and "trick questions", among several others. We will release the dataset of questions, documents, model answers, and annotator preferences. In addition, we release the code and prompts used. The key contributions of this paper are: * We identify a gap in question answering over structured documents with current LLM approaches, namely treating documents as plain text rather than structured objects; * We release a dataset of tagged question types, along with model responses, in order to facilitate further research on this topic; and * We present a method of prompting the model, called _PDFTriage_, that improves the ability of an LLM to respond to questions over structured documents. The rest of the paper proceeds as follows: in Section 2, we identify the related works to this one, and identify the distinguishing features of our work; in Section 3 we outline the _PDFTriage_ approach, including the document representation, the new retrieval functions, and the prompting techniques; in Section 4 we outline how we constructed the evaluation dataset of human-written questions; in Section 5 we detail the experiments we run to support the above contributions; in Section 6 we list the key takeaways of those experiments; and, lastly, in Section 7 we describe the limitations of our current work and future directions. ## 2 Related Works ### Tool and Retrieval Augmented LLMs Tool-augmented LLMs have become increasingly popular as a way to enhance existing LLMs to utilize tools for responding to human instructions (Schick et al., 2023). ReAct (Yao et al., 2022) is a few-shot prompting approach that leverages the Wikipedia API to generate a sequence of API calls to solve a specific task. Such task-solving trajectories are shown to be more interpretable compared to baselines. Self-ask (Press et al., 2022) prompt provides the follow-up question explicitly before answering it, and for ease of parsing uses a specific scaffold such as "Follow-up question:" or "So the final answer is:". Toolformer (Schick et al., 2023) uses self-supervision to teach itself to use tools by leveraging the few-shot capabilities of an LM to obtain a sample of potential tool uses, which is then fine-tuned on a sample of its own generations based on those that improve the model's ability to predict future tokens. TALM (Parisi et al., 2022) augments LMs with non-differentiable tools using only text along with an iterative technique to bootstrap performance using only a few examples. Recently, Taskmatrix (Liang et al., 2023) and Gorilla (Patil et al., 2023) have focused on improving the ability of LLMs to handle millions of tools from a variety of applications. There have also been many works focused on benchmarks for tool-augmented LLMs (Li et al., 2023; Zhuang et al., 2023). These include API-Bank (Li et al., 2023), focused on evaluating LLMs' ability to plan, retrieve, and correctly execute step-by-step API calls for carrying out various tasks, and ToolQA (Zhuang et al., 2023) that focused on question-answering using external tools. 
Retrieval-augmented language models aim to enhance the reasoning capabilities of LLMs using external knowledge sources for retrieving related documents (Asai et al., 2022; Gao et al., 2022; Lin et al., 2023; Yu et al., 2023; Zhao et al., 2023; Feng et al., 2023). In particular, HyDE (Gao et al., 2022) generates a hypothetical document (capturing relevance patterns) by zero-shot instructing an instruction-following LLM, then encodes the document into an embedding vector via an unsupervised contrastively learned encoder, which is used to retrieve real documents that are similar to the generated document. More recently, Feng et al. (2023) proposed InteR that iteratively refines the inputs of search engines and LLMs for more accurate retrieval. In particular, InteR uses search engines to enhance the knowledge in queries using LLM-generated knowledge collections whereas LLMs improve prompt formulation by leveraging the retrieved documents from the search engine. For further details on augmented language models, see the recent survey (Mialon et al., 2023). ### Question Answering Much of the existing work in QA does not ground the questions in structured documents, instead primarily focusing on extractive QA tasks such as GLUE (Wang et al., 2018). For example, text-only documents in QA datasets, like SQuAD (Rajpurkar et al., 2016) and NaturalQuestions (Kwiatkowski et al., 2019), don't contain tables or figures. **Document Question Answering**. Several datasets have been constructed to benchmark different aspects of document-focused question-answering. DocVQA Mathew et al. (2021) is a visual question-answering dataset focused that uses document scans. A recent work by Landeghem et al. (2023) focused on a dataset for document understanding and evaluation called DUDE, which uses both scans and born-digital PDFs. Both DUDE and DocVQA have questions that can be answered short-form; DUDE answers average roughly 3.35 tokens and DocVQA tokens average 2.11 tokens. QASPER Dasigi et al. (2021) is a dataset focused on information-seeking questions and their answers from research papers, where the documents are parsed from raw LaTeXsources and the questions are primarily focused on document contents. The PDFTriage evaluation dataset seeks to expand on the question types in these datasets, getting questions that can reference the document structure or content, can be extractive or abstractive, and can require long-form answers or rewrites. ## 3 PDFTriage: Structured Retrieval from Document Metadata The _PDFTriage_ approach consists of three steps to answer a user's question, shown in Figure 1: 1. **Generate document metadata (Sec. 3.1):** Extract the structural elements of a document and convert them into readable metadata. 2. **LLM-based triage (Sec. 3.2):** Query the LLM to select the precise content (pages, sections, retrieved content) from the document. 3. **Answer using retrieved content (Sec. 3.3):** Based on the question and retrieved content, generate an answer. Figure 1: **Overview of the PDFTriage technique**: PDFTriage leverages a PDF’s structured metadata to implement a more precise and accurate document question-answering approach. It starts by generating a structured metadata representation of the document, extracting information surrounding section text, figure captions, headers, and tables. Next, given a query, a LLM-based Triage selects the document frame needed for answering the query and retrieves it directly from the selected page, section, figure, or table. 
Finally, the selected context and inputted query are processed by the LLM before the generated answer is output. ### Document Representation We consider _born-digital PDF documents_ as the structured documents that users will be interacting with. Using the Adobe Extract API, we convert the PDFs into an HTML-like tree, which allows us to extract sections, section titles, page information, tables, and figures.1 The Extract API generates a hierarchical tree of elements in the PDF, which includes section titles, tables, figures, paragraphs, and more. Each element contains metadata, such as its page and location. We can parse that tree to identify sections, section-levels, and headings, gather all the text on a certain page, or get the text around figures and tables. We map that structured information into a JSON type, which we use as the initial prompt for the LLM. The content is converted to markdown. An overview of this process is shown at the top of Figure 1. Footnote 1: [https://developer.adobe.com/document-services/apis/pdf-extract/](https://developer.adobe.com/document-services/apis/pdf-extract/) ### LLM Querying of Document PDFTriage utilizes five different functions in the approach: fetch_pages, fetch_sections, fetch_table, fetch_figure, and retrieve. As described in Table 2, each function allows the PDFTriage system to gather precise information related to the given PDF document, centering around structured textual data in headers, subheaders, figures, tables, and section paragraphs. The functions are used in separate queries by the PDFTriage system for each question, synthesizing multiple pieces of information to arrive at the final answer. The functions are provided and called in separate chat turns via the OpenAI function calling API,2 though it would be possible to organize the prompting in a ReAct (Yao et al., 2022) or Toolformer (Schick et al., 2023)-like way. Footnote 2: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference) ### Question Answering To initialize PDFTriage for question-answering, we use the system prompt format of GPT-3.5 to input the following: You are an expert document question answering system. You answer questions by finding relevant content in the document and answering questions based on that content. Document: <textual metadata of document> Using user prompting, we then input the query with no additional formatting. Next, the PDFTriage system uses the functions established in Section 3.2 to query the document for any necessary information to answer the question. In each turn, PDFTriage uses a singular function to gather the needed information before processing the retrieved context. In the final turn, the model outputs an answer to the question. For all of our experiments, we use the gpt-35-turbo-0613 model. ## 4 Dataset Construction To test the efficacy of PDFTriage, we constructed a document-focused set of question-answering tasks. Each task seeks to evaluate different aspects of document question-answering, analyzing reasoning across text, tables, and figures within a document. Additionally, we wanted to create questions ranging from single-step answering on an individual document page to multi-step reasoning across the whole document.
\begin{table} \begin{tabular}{l r} \hline \hline **\# of Documents** & 82 \\ \hline **\# of Questions** & 908 \\ \hline Easy Questions & 393 \\ Medium Questions & 144 \\ Hard Questions & 266 \\ “Unsure” Questions & 105 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics for the PDFTriage evaluation dataset. Figure 2: PDFTriage Document Distribution by Word Count We collected questions using Mechanical Turk.3 The goal of our question collection task was to collect real-world document-oriented questions for various professional settings. For our documents, we sampled 1000 documents from the common crawl to get visually-rich, professional documents from various domains, then subsampled 100 documents based on their reading level (Flesch, 1948). 4 By collecting a broad set of document-oriented questions, we built a robust set of tasks across industries for testing the PDFTriage technique. Footnote 3: [https://mturk.com](https://mturk.com) Footnote 4: [https://commoncrawl.org/](https://commoncrawl.org/) In order to collect a diverse set of questions, we generated our taxonomy of question types and then proceeded to collect a stratified sample across the types in the taxonomy. Each category highlights a different approach to document-oriented QA, covering multi-step reasoning that is not found in many other QA datasets. We asked annotators to read a document before writing a question. They were then tasked with writing a salient question in the specified category. For our taxonomy, we consider ten different categories along with their associated descriptions: 1. **Figure Questions** (6.5%): Ask a question about a figure in the document. 2. **Text Questions** (26.2%): Ask a question about the document. 3. **Table Reasoning** (7.4%): Ask a question about a table in the document. 4. **Structure Questions** (3.7%): Ask a question about the structure of the document. 5. **Summarization** (16.4%): Ask for a summary of parts of the document or the full document. 6. **Extraction** (21.2%): Ask for specific content to be extracted from the document. 7. **Rewrite** (5.2%): Ask for a rewrite of some text in the document. 8. **Outside Questions** (8.6%): Ask a question that can't be answered with just the document. 9. **Cross-page Tasks** (1.1%): Ask a question that needs multiple parts of the document to answer. 10. **Classification** (3.7%): Ask about the type of the document. In total, our dataset consists of 908 questions across 82 documents. On average a document contains 4,257 tokens of text, connected to headers, subheaders, section paragraphs, captions, and more. In Figure 2, we present the document distribution by word count. We provide detailed descriptions and examples of each of the classes in the appendix. ## 5 Experiments We outline the models and strategies used in our approach along with our baselines for comparison. The code and datasets for reproducing our results will be released soon on Github. ### PDFTriage For our primary experiment, we use our PDFTriage approach to answer various questions in the selected PDF document dataset. This strategy leverages the structure of PDFs and the interactive system functions capability of GPT-3.5 to extract answers more precisely and accurately than existing naive approaches. ### Retrieval Baselines Page Retrieval.For our first baseline, we index the pages of each individual document using _text-embedding-ada-002_ embeddings. Using cosine similarity, we retrieve the pages most similar to the query embedding. 
We then feed each page's text as context for answering the given question until we reach the context window limit for a model. \begin{table} \begin{tabular}{r l} \hline \hline **Function** & **Description** \\ \hline fetch\_pages & Get the text contained in the pages listed. \\ fetch\_sections & Get the text contained in the section listed. \\ fetch\_figure & Get the text contained in the figure caption listed. \\ fetch\_table & Get the text contained in the table caption listed. \\ retrieve & Issue a natural language query over the document, and fetch relevant chunks. \\ \hline \hline \end{tabular} \end{table} Table 2: PDFTriage Functions for Document QA. Chunk Retrieval.In our second baseline, we concatenate all the document's text before chunking it into 100-word pieces. We then index each chunk using _text-embedding-ada-002_ embeddings before using cosine similarity calculations to retrieve the chunks most similar to the query embedding. Finally, we feed each chunk's textual contents as context for answering the given question until we reach the context window limit for a model. Prompting.For both retrieval baselines, we use the following prompt to get an answer from GPT-3.5: You are an expert document question answering system. You answer questions by finding relevant content in the document and answering questions based on that content. Document: <retrieved pages/chunks> Question: <question> ### Human Evaluation To measure any difference between PDFTriage and the retrieval baselines, we established a human labeling study on Upwork. In the study, we hired 12 experienced English-speaking annotators to judge the answers generated by each system. Please see Appendix A to see the full annotation questions for each question-document and its generated answers (for the overview, we use a sample question) as well as demographic information about the annotators. Our questions seek to understand several key attributes of each question-document pair as well as the associated general questions: 1. The overall quality of the question, such as its difficulty, clarity, and information needed for answering it. Figure 3: **User Preferences between PDFTriage and Alternate Approaches**: Overall, PDFTriage-generated answers were favored the most by the users, claiming 50.8% of the top-ranked answers overall. Furthermore, PDFTriage answers ranked higher on certain multi-page tasks, such as structure questions and table reasoning, while ranking lower on generalized textual tasks, such as classification and text questions. However, across all the question categories, PDFTriage beat both the Page Retrieval and Chunk Retrieval approaches on a head-to-head ranking. 2. The category of the question, using the taxonomy in section 4. 3. The ranking of each generated answer for the given question-document pair. 4. The accuracy, informativeness, readability/understandability, and clarity of each generated answer. ## 6 Results and Analysis In Table 1, we present the annotated question difficulty of each question in our sample. Overall, the largest group of questions (43.3%) were categorized as Easy while roughly a third of questions were categorized as Hard for various reasons. In addition to question difficulty, we asked annotators to categorize questions by type using the same categories as Section 4. 
Our annotation framework results in a dataset that's diverse across both question types and question difficulties, covering textual sections, tables, figures, and headings as well as single-page and multi-page querying. The diversity of questions allows us to robustly evaluate multiple styles of document-centered QA, testing the efficacy of PDFTriage for different reasoning techniques. ### PDFTriage yields better answers than retrieval-based approaches. In our annotation study, we asked the annotators to rank PDFTriage compared to our two baselines, Page Retrieval and Chunk Retrieval (Section 5). In Figure 3, we found that annotators favored the PDFTriage answer over half of the time (50.7%) and favored the Chunk Retrieval approach over the Page Retrieval approach. When comparing different provided answers for the same question, PDFTriage performs substantially better than current alternatives, ranking higher than the alternate approaches across all the question types. ### PDFTriage improves answer quality, accuracy, readability, and informativeness In our annotation study, we also asked the annotators to score PDFTriage, Page Retrieval, and Chunk Retrieval answers across five major qualities: accuracy, informativeness, readability/understandability, and clarity. We hoped to better understand the strengths of each answer for users in document question-answering tasks. In Table 3, we show that PDFTriage answers score higher than Page Retrieval and Chunk Retrieval across all answer qualities except for Clarity. Crucially, PDFTriage had the highest scores for Overall Quality and Answer Accuracy. For annotator agreement, we calculated an average Cohen's kappa score of 0.584. In Appendix A, we provide a high-resolution breakdown of annotations for "Overall Quality" and "Accuracy" by question category. We find that PDFTriage tends to be stronger for categories like summarization, table reasoning, extraction, and figure questions which require multi-step reasoning across different parts of a document. Additionally, PDFTriage performs similarly to Page Retrieval and Chunk Retrieval on other more generalized reasoning tasks, such as text questions and classification. ### PDFTriage requires fewer retrieved tokens to produce better answers For the PDF document sample, the average token length of retrieved PDFTriage text is 1568 tokens (using the GPT-3.5 tokenizer). The average metadata length of textual inputs in document JSONs is 4,257 tokens (using the GPT-3.5 tokenizer). While PDFTriage utilizes more tokens than Page Retrieval (3611 tokens on average) and Chunk Retrieval (3934 tokens on average), the tokens are retrieved from multiple sections of the document that are non-consecutive. Furthermore, the sections used in Page Retrieval and Chunk Retrieval are often insufficient for answering the question, as indicated by lower answer quality scores on average for "Overall Quality" and "Accuracy". However, simply concatenating all the document's text together would not ultimately replace PDFTriage due to both context window limits and the need to perform multi-hop reasoning for document QA tasks. PDFTriage helps overcome this issue through the multi-stage querying of the document, retrieving and adding context as needed for different document QA tasks. 
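For concreteness, this multi-stage querying can be sketched as the following simplified loop, assuming the 2023-era openai Python SDK's function-calling interface; the function schema, model identifier, and dispatch table shown here are illustrative rather than the exact prompting setup.

```python
import json
import openai  # assumes the pre-1.0 openai SDK with ChatCompletion.create

FUNCTIONS = [{
    "name": "fetch_pages",
    "description": "Get the text contained in the pages listed.",
    "parameters": {
        "type": "object",
        "properties": {"pages": {"type": "array", "items": {"type": "integer"}}},
        "required": ["pages"],
    },
}]

def triage_answer(metadata: str, question: str, dispatch: dict) -> str:
    """Simplified triage loop: in each turn the model either calls a retrieval
    function (whose string output is appended as context) or answers.
    `dispatch` maps function names to local implementations such as fetch_pages."""
    messages = [
        {"role": "system",
         "content": "You are an expert document question answering system. "
                    "Document: " + metadata},
        {"role": "user", "content": question},
    ]
    while True:
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613", messages=messages,
            functions=FUNCTIONS)["choices"][0]["message"]
        if not reply.get("function_call"):
            return reply["content"]                              # final answer
        name = reply["function_call"]["name"]
        args = json.loads(reply["function_call"]["arguments"])
        messages.append(reply)
        messages.append({"role": "function", "name": name,
                         "content": dispatch[name](**args)})
```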
\begin{table} \begin{tabular}{l c c c} \hline \hline & _PDFTriage_ & \begin{tabular}{c} _Page_ \\ _Retrieval_ \\ \end{tabular} & \begin{tabular}{c} _Chunk_ \\ _Retrieval_ \\ \end{tabular} \\ \hline Readability & **4.2** & 4.1 & 4.1 \\ Informativeness & **3.9** & 3.7 & 3.4 \\ Clarity & 2.0 & 2.1 & **2.3** \\ Accuracy & **3.8** & 3.6 & 3.4 \\ \hline Overall Quality & **3.9** & 3.8 & 3.6 \\ \hline \hline \end{tabular} \end{table} Table 3: Answer Quality Scoring ### PDFTriage performs consistently across document lengths We also wanted to calculate the correlation between PDFTriage performance and the length of the document overall. Between the human-annotated PDF-Triage answer score for "Overall Quality" and document length, we found a Pearson's correlation coefficient of -0.015. This indicates that document length has a negligible effect on the efficacy of PDFTriage, strengthening the generalizability of our technique to both short and long documents. The length of different document types seems to ultimately have no effect on overall performance. The ability of PDFTriage to query specific textual sections within the document prevents the need to ingest documents with excessively large contexts. It allows PDFTriage to connect disparate parts of a document for multi-page questions such as table reasoning, cross-page tasks, figure questions, and structure questions, prioritizing relevant context and minimizing irrelevant information. As a result, GPT-3 and other LLMs are better capable of handling the reduced context size and ultimately utilize less computational and financial resources for document QA tasks. ## 7 Future Work & Conclusions In this work, we present PDFTriage, a novel question-answering technique specialized for document-oriented tasks. We compare our approach to existing techniques for question-answering, such as page retrieval and chunk retrieval, to demonstrate the strengths of our approach. We find that PDFTriage offers superior performance to existing approaches. PDFTriage also proves effective across various document lengths and contexts used for retrieval. We are considering the following directions for future work: 1. Developing multi-modal approaches that incorporate table and figure information into GPT-4 question-answering for documents. 2. Incorporate question type in PDFTriage approach to improve efficiency and efficacy of the approach. \begin{table} \begin{tabular}{l l l} \hline \hline **Question** & **PDFTriage** & **Chunk Retrieval** \\ \hline \multirow{4}{*}{Which file formats are listed under the "Create Using This Format" section?} & Function used: fetch\_section() & The document does not provide information about the specific \\ & The file formats listed under the "Create Using This Format" section & file formats listed under the \\ & are: CSV, JPG, LPT, PDF, PNG, & “Create Using This Format” section. \\ \hline \multirow{4}{*}{Summarize the second paragraph in one long sentence.} & Function used: fetch\_section() & In her testimony, Michele \\ & The second paragraph explains the & Murrick, Legislative Advocate \\ & & \\ & writer’s support for H.B. 5540, & for the Connecticut Conference \\ & which aims to ban guns without & United Church of Christ, \\ & serial numbers and regulate those & expresses her support for \\ & sold in a form requiring assembly. & H B 5540 and H B 5542, two \\ & The writer also supports the... & acts concerning ghost guns, the \\ & & permit application process, and... 
\\ \hline \hline \end{tabular} \end{table} Table 4: A comparison of fetch\_section() being called successfully and unsuccessfully. Answers highlighted in red were considered incorrect. In the second example, both approaches are incorrect: the PDFTriage approach fetches the incorrect section rather than just the first page, while the chunk retrieval approach has no knowledge of document structure or paragraph order. Figure 4: PDFTriage Performance compared to Document Page Length (uses ”Overall Quality” scores)
2309.07599
Reply to "Comment on `Weak values and the past of a quantum particle' ''
We here reply to a recent comment by Vaidman [\href{https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.5.048001}{Phys. Rev. Res. 5, 048001 (2023)}] on our paper [\href{https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.5.023048}{Phys. Rev. Res. 5, 023048 (2023)}]. In his Comment, Vaidman first admits that he is just defining (assuming) the weak trace gives the presence of a particle -- however, in this case, he should use a term other than presence, as this already has a separate, intuitive meaning other than ``where a weak trace is''. Despite this admission, Vaidman then goes on to argue for this definition by appeal to ideas around an objectively-existing idea of presence. We show these appeals rely on their own conclusion -- that there is always a matter of fact about the location of a quantum particle.
Jonte R Hance, John Rarity, James Ladyman
2023-09-14T11:00:49Z
http://arxiv.org/abs/2309.07599v2
# Reply to Comment on "Weak values and the past of a quantum particle" ###### Abstract We here reply to a recent comment by Vaidman [Phys. Rev. Res. 5, 048001 (2023)] on our paper [Phys. Rev. Res. 5, 023048 (2023)]. In his Comment, Vaidman first admits that he is just defining (assuming) the weak trace gives the presence of a particle--however, in this case, he should use a term other than presence, as this already has a separate, intuitive meaning other than "where a weak trace is". Despite this admission, Vaidman then goes on to argue for this definition by appeal to ideas around an objectively-existing idea of presence. We show these appeals rely on their own conclusion--that there is always a matter of fact about the location of a quantum particle. In his Comment [1] on our recent paper [2], Vaidman seeks to clarify that he does not claim his weak trace approach identified the objectively-existing presence of particles in pre- and postselected scenarios; instead, he claims the weak trace approach defines the "presence of a quantum particle" as where it left a weak trace. We agree, this would be fine, if the idea of the presence of a particle did not already have a separate, intuitive meaning. This is why we are interested in the idea of the presence of a particle in the first place. If Vaidman wishes to define some term to mean "where a particle in a pre- and postselected scenario left a weak trace," he is free to do so, but such a term should be free of the implications that terms like "presence" possess. The only reason to use a term like "presence" is in appeal to some use of this term in another context--such as the conception of presence in classical physics. Therefore, Vaidman either needs to successfully argue that his "weak trace" corresponds to our intuitions around notions of "presence" (something normally defined either by states being measured as eigenstates of some position/path projection operator, or by appeal to a classical idea of presence), or he should use a different term, or at least clarify that his term refers to something separate to what we intuitively mean by presence. Despite initially arguing that the weak trace approach does not claim to identify any objectively-existing presence of particles in pre- and postselected scenarios, and just involves defining a weak trace being left along a given path as presence, Vaidman argues one should accept the definition the approach gives for such a presence by directly appealing to ideas around such an objectively-existing idea of presence. For instance, see his statement in the Comment that "the traces left on the environment that provide evidence of particle interactions have disconnected parts". This, while used to justify defining particle presence by weak trace, implicitly assumes particles must be present where, and only where, they leave a weak trace--he assumes the very thing he is trying to argue for. Further, Vaidman's attempt to appeal to our own criteria for using the classical conception of particle presence to rationalise his own approach misses out one key part of our analysis--that there is no need to always assign a particle a localised presence, in a classical fashion, at all times and all locations. Indeed, in some states (e.g., momentum eigenstates) this is by definition impossible according to the laws of quantum mechanics. This is in the same way that, for certain states, there is not a matter of fact about the number of particles in the state (e.g. coherent states). 
Our criteria were given as necessary (unless good reason is given) rather than sufficient (especially rather than individually sufficient) to assign particle presence (in a classical fashion). Vaidman ignores all but one of our criteria, then takes that one remaining criterion as a sufficient condition. Therefore, Vaidman's argument that our criterion (iii) and our criterion (ii) contradict each other, and that one must be picked for an approach to identifying the path of a particle, misinterprets our argument. Vaidman appeals to our criterion (iii)--that (classical) particles interact with other objects and/or fields local to their location. Yet this does not mean that only localised particles interact with other objects and/or fields local to their location, nor that a quantum particle's interaction with another object/field (e.g., the weak trace left on an environment) is sufficient to assign such a classical idea as presence to that particle at that location. Vaidman comments that "The fact that the weak value of the velocity of a particle can be larger than the speed of light (see Sec. VIII of [3]) does not contradict the special theory of relativity. The experiments involve postselection and their low probability of success prevents a superluminal change in the probability of finding a quantum particle." This misunderstands our point, which is that weak values seem to mean something different from standard classical properties, and so should not be equated with classical properties. One would expect, by special relativity, that anything we consider to be equivalent to the velocity of a particle (such as the propagation speed of a wave) would be limited to being below \(c\). Therefore, the fact that these experiments give weak values of velocity greater than \(c\), yet show nothing which would lead us to question special relativity, implies that these weak values of velocity do not correspond to true velocities, but instead represent something else. Vaidman claims "The weak value approach helps to find quantum protocols which are "spooky" if analysed in classical terms." However, by "classical terms" he here means by the definition of particle presence introduced by the weak trace approach. Therefore, the weak trace approach just helps us find quantum protocols which are "spooky" if analysed by the weak trace approach, which seems tautological. Similarly, Vaidman claims the concept of "the local presence of a pre- and post-selected particle defined by the local trace it leaves on the environment" is useful. We are sceptical of this claim, and welcome any evidence that such a definition is in any way useful. _Acknowledgements -_ JRH acknowledges support from Hiroshima University's Phoenix Postdoctoral Fellowship for Research, and the University of York's EPSRC DTP grant EP/R513386/1. JGR and JRH acknowledge support from the Quantum Communications Hub funded by EPSRC grants EP/M013472/1 and EP/T001011/1.
2305.19472
PlaSma: Making Small Language Models Better Procedural Knowledge Models for (Counterfactual) Planning
Procedural planning, which entails decomposing a high-level goal into a sequence of temporally ordered steps, is an important yet intricate task for machines. It involves integrating common-sense knowledge to reason about complex and often contextualized situations, e.g. ``scheduling a doctor's appointment without a phone''. While current approaches show encouraging results using large language models (LLMs), they are hindered by drawbacks such as costly API calls and reproducibility issues. In this paper, we advocate planning using smaller language models. We present PlaSma, a novel two-pronged approach to endow small language models with procedural knowledge and (constrained) language planning capabilities. More concretely, we develop symbolic procedural knowledge distillation to enhance the commonsense knowledge in small language models and an inference-time algorithm to facilitate more structured and accurate reasoning. In addition, we introduce a new related task, Replanning, that requires a revision of a plan to cope with a constrained situation. In both the planning and replanning settings, we show that orders-of-magnitude smaller models (770M-11B parameters) can compete and often surpass their larger teacher models' capabilities. Finally, we showcase successful application of PlaSma in an embodied environment, VirtualHome.
Faeze Brahman, Chandra Bhagavatula, Valentina Pyatkin, Jena D. Hwang, Xiang Lorraine Li, Hirona J. Arai, Soumya Sanyal, Keisuke Sakaguchi, Xiang Ren, Yejin Choi
2023-05-31T00:55:40Z
http://arxiv.org/abs/2305.19472v3
# Plasma: Making Small Language Models ###### Abstract Procedural planning, which entails decomposing a high-level goal into a sequence of temporally ordered steps, is an important yet intricate task for machines. It involves integrating common-sense knowledge to reason about complex contextualized situations that are often counterfactual, e.g. "scheduling a doctor's appointment without a phone". While current approaches show encouraging results using large language models (LLMs), they are hindered by drawbacks such as costly API calls and reproducibility issues. In this paper, we advocate planning using smaller language models. We present Plasma, a novel two-pronged approach to endow small language models with procedural knowledge and (counterfactual) planning capabilities. More concretely, we develop _symbolic procedural knowledge distillation_ to enhance the implicit knowledge in small language models and an _inference-time algorithm_ to facilitate more structured and accurate reasoning. In addition, we introduce a novel task, _Counterfactual Planning_, that requires a revision of a plan to cope with a counterfactual situation. In both the original and counterfactual setting, we show that orders-of-magnitude smaller models (770M-11B parameters) can compete and often surpass their larger teacher models' capabilities.1 Footnote 1: We make our dataset and code publicly available at: [https://github.com/allenai/PlaSma](https://github.com/allenai/PlaSma) ## 1 Introduction Powered by massive scale, large language models (LLMs) excel on many downstream tasks that require commonsense. One such task is _procedural planning_[27], a task that involves decomposing a high-level **goal** into a sequence of coherent, logical, and goal-oriented steps **(plan)** (e.g. "see a movie" \(\rightarrow\) "Look up movie showings", "Choose a movie" \(\ldots\)). Recent approaches model this task as a conditional text generation problem using LLMs [23; 11; 1]. Despite their reasonable performance on the task, their steep computational cost and inaccessibility hinder wider adoption of LLMs [24]. We present Plasma (Plan with Smail models), a novel two-pronged framework to impart planning abilities in small LMs. We achieve this through _symbolic procedural knowledge distillation_ to enhance the implicit knowledge in small LMs (Figure 1) and an _inference-time decoding algorithm_ to enable structured reasoning (Figure 2). We formulate _symbolic procedural knowledge distillation_[41; 3] in two stages: (i) Knowledge verbalization to generate procedural knowledge from an LLM, and (ii) Knowledge distillation to transfer LLM-generated knowledge to a smaller LM. In addition to the standard planning task, we introduce and verbalize knowledge for novel task formulations under counterfactual settings: _Counterfactual planning_ and _Revision_. These tasks enable a more realistic setting by requiring models to reason about contextually constrained situations in real-world applications; specifically, the model generates or revises a plan based on a given goal (e.g., "see a movie") while adhering to an additional **condition** (e.g., "at home"). Our knowledge verbalization process results in a large (counterfactual) procedural planning dataset, CoPlan, which is then used to train smaller models, PLASMA, using both task-specific and multi-task distillation. 
We observe that the standard next-token prediction objective in auto-regressive LMs (applied during distillation) does not equip them with sufficient causal and temporal reasoning abilities to generate high-quality plans, or a mechanism to rectify their mistakes in earlier steps. To address this challenge, we develop a _verifier-guided step-wise beam search_ to better leverage the multi-step structure of plans (resulting in PLASMA+). Concretely, we incorporate a step-wise verifier in our decoding process to guide PLASMA+ to generate more semantically coherent and temporally accurate plans. Through experiments, we show that our approach is effective at endowing smaller LMs with planning abilities. For the standard planning task, smaller student models (of varying sizes) achieve 17.57% relative improvements, on average, over their teacher. The best student model is comparable even to GPT-3, a model 16 times the student's size. Furthermore, we, for the first time, distill counterfactual planning abilities in small-size models, achieving 93% validity rate according to human evaluation. In a simulated environment [29], our model significantly outperforms previous work based on GPT-3 [11] on executability (by 17%) and correctness (by 25%). Taken together, our framework including symbolic procedural distillation, decoding-time algorithm, and the proposed tasks and the accompanying CoPlan dataset provide valuable resource and direction for advancing research in the field of procedural planning. ## 2 Small Language Models as Procedural Knowledge Models In this section, we discuss how to endow small students with procedural knowledge and (counterfactual) planning capabilities. We first describe our knowledge verbalization and distillation framework which we collectively refer to as Symbolic Procedural Knowledge Distillation (SS2.1, SS2.2). We then propose a strategy to enhance the reasoning capabilities of small students via a novel verifier-guided step-wise decoding algorithm (SS2.3). ### CoPlan: Procedural Knowledge Verbalization from Large Teachers Large language model can perform new tasks by adapting to a few in-context examples [4]. We thus leverage this emergent reasoning capabilities of LLM to circumvent the challenge of crowdsourcing supervised datasets at scale. We collect data targeting the following three tasks: 1. **Goal-based Planning (pl.)**, decomposing a high-level goal \(g\) into a sequence of temporally extended steps \(y=\{s_{t}\}_{t=1}^{T}\). Figure 1: Symbolic Procedural Knowledge Distillation. 2. **Counterfactual Planning (cp.)**, decomposing a high-level goal \(g\) into a sequence of temporally extended steps \(y=\{s_{t}\}_{t=1}^{T}\) while satisfying a given condition \(c\). 3. **Counterfactual Plan Revision (cpr.)**, rewriting an initial plan \(y\) to a given goal \(g\) into a new plan \(y^{\prime}\) in order to satisfy a given condition \(c\). Our knowledge verbalization pipeline shown in the left side of Figure 1 is a two-stage process: 1) instance generation through few-shot prompting, and 2) automatic data curation using a critic to filter out the low quality data. The process results in CoPlan, a quality dataset containing goals, plans, conditions, and counterfactual plans. Step 1. Data GenerationWe start by generating a large pool of goals \(\mathcal{G}\) with a diverse range of topics in a bootstrapping fashion. We initiate the seed goal pool with 100 goals generated by GPT-3 (text-curie-001) along with 5 example goals provided by the authors. 
With the seed goal pool, we iteratively expand it by GPT-3 with randomly selecting example goals for prompting. For each generated goal \(g\in\mathcal{G}\), we few-shot prompt a teacher model \(\mathcal{M}\) to generate a set of ordered steps, as a plan \(y\) to achieve the goal. The input to \(\mathcal{M}\), including instruction and few-shot examples, takes the format shown in Figure 7. Since LLMs can be sensitive to instruction, and/or few-shot examples [28, 21], we randomize the prompt by (i) manually creating a set of semantically similar instructions and each time randomly sample from the instruction set (ii) creating dynamic in-context examples for each input. We use a subset of the existing ProScript[34] and DeScript [39] datasets as our seed source to form in-context examples, \(\mathcal{P}=\{(g_{j},y_{j})\}_{j=1}^{M}\): \[y_{i}\sim\mathcal{M}(y_{i}|g_{i},\mathcal{P})\] The result is a pool of 140k pairs of goal and plans, \((g,y)\), generated from the teacher model. For the counterfactual setting, we also obtain conditions \(c\), and modified plans \(y^{\prime}\) from a teacher model \(\mathcal{M}\) through few-shot prompting. We manually design our prompts \(\mathcal{P}\) to collect natural language conditions concerning the environment the task is performed in such as Location ("the store is closed"), Equipment ("you don't have a sharp tool"), Safety ("the car breaks down") or user's specifications such as Physical Condition and Preference ("you have an injury"). For a given goal \(g_{i}\) and plan \(y_{i}\), we sample conditions: \[c_{i}\sim\mathcal{M}(c_{i}|g_{i},y_{i},\mathcal{P})\] Next, we few-shot prompt \(\mathcal{M}\) to rewrite an initial plan \(y\) for a given goal \(g\) such that it satisfies the requirement of a condition \(c\): \[y^{\prime}_{i}\sim\mathcal{M}(y^{\prime}_{i}|g_{i},y_{i},c_{i},\mathcal{P})\] The prompting templates and examples of conditions are shown in Figure 8 and Table 6. Step 2. Automatic Data CurationTo retain high-quality data for planning under the original and counterfactual settings, we filter out generated samples from Step 1, i.e. generated plans, conditions and counterfactuals, that are invalid or of low quality. A plan \(y\) is considered invalid if it contains an _illogical order_ of steps, is _off-topic_ (w.r.t the goal) or _incomplete_. Whereas a counterfactual plan \(y^{\prime}\) should not only satisfies these general criteria but should also adhere to the condition. To this end, we train separate supervised critic models to judge the quality of generated samples of different types. We collect human annotations of _valid vs. invalid_ samples on Amazon Mechanical Turk to train a RoBERTa-Large [17] as our critic models. All critics are binary classifiers which identify whether a tuple of either (goal, plan), (goal, plan, condition) or (goal, plan, condition, modified plan) is valid. We provide more details on annotation instructions, and hyper-parameter tuning in Appendix B.1 and B.2. Naturally, there is a trade-off between dataset size and precision. Following West et al. [41], we test several confidence thresholds at which the critic rejects a pair and choose the best values (0.65, 0.76, 0.82)2 according to precision-recall curves. 
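A minimal sketch of this critic-based filtering step is given below, assuming a fine-tuned RoBERTa sequence classifier loaded through the transformers library; the checkpoint path, input formatting, and threshold are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical fine-tuned RoBERTa critic checkpoint (illustrative path).
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
critic = AutoModelForSequenceClassification.from_pretrained("path/to/plan-critic")

def keep_example(goal: str, plan: str, threshold: float = 0.65) -> bool:
    """Accept a (goal, plan) pair only if the critic's probability of the
    'valid' label exceeds the threshold chosen from the precision-recall curve."""
    inputs = tokenizer(goal, plan, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = critic(**inputs).logits
    p_valid = torch.softmax(logits, dim=-1)[0, 1].item()   # index 1 = 'valid'
    return p_valid >= threshold

# e.g. filtered = [(g, y) for g, y in generated_pairs if keep_example(g, y)]
```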
After filtering out low-quality data, our final CoPlan dataset consists of two main subsets: 57,794 (goal, plan) pairs for the original **goal-based planning** task (\(\mathcal{D}^{pl.}\)), and 43,690 (goal, plan, condition, modified plan) tuples for the **counterfactual** settings (\(\mathcal{D}^{cp.}\) and \(\mathcal{D}^{cpr.}\)). On the original planning task, CoPlan is \(11\times\) larger in scale than existing datasets [34, 39] while keeping the precision at 74%. For the proposed counterfactual settings, our dataset is, to the best of our knowledge, the first large-scale counterfactual procedural planning dataset. Analyses show that CoPlan includes a diverse array of topics covered by goals (§A.1) and conditions (§A.2). ### PlaSma: Procedural Knowledge Distillation into Small Students After obtaining our procedural planning data CoPlan, we use it to fine-tune student models on the three different tasks. We consider both task-specific and multi-task distillation objectives to transfer generated procedural knowledge into the student models: Task-specific Distillation. Following common practice, we use the standard autoregressive language modeling objective [32] to fine-tune separate student models for each task: \[\mathcal{L}(\theta)=\mathbb{E}_{(x,y)\sim D^{task}}\big{[}-\log p_{\theta}(y|\mathcal{T}(x))\big{]},\quad\text{for }\texttt{task}\in\{\text{pl},\text{cp},\text{cpr}.\} \tag{1}\] where \(\mathcal{T}(x)\) is a task-specific template for each task-specific input \(x\) (see right side of Figure 1). Multi-task Distillation. We also aim to improve the generalization of the student model by exploiting the knowledge contained in the three related tasks as an inductive bias [33, 40]. We thus minimize the joint loss: \[\mathcal{L}(\theta) =\mathbb{E}_{(g,y)\sim D^{pl}.}\big{[}-\log p_{\theta}(y|\mathcal{T}(g))\big{]} \tag{2}\] \[\qquad+\mathbb{E}_{(g,c,y)\sim D^{cp}.}\big{[}-\log p_{\theta}(y|\mathcal{T}(g,c))\big{]}+\mathbb{E}_{(g,c,y,y^{\prime})\sim D^{cpr}.}\big{[}-\log p_{\theta}(y^{\prime}|\mathcal{T}(g,c,y))\big{]}\] We name this student PlaSma-Mul. ### PlaSma+: Advancing Student with Verifier-guided Decoding During inference, the student may generate logically and/or temporally ill-formed sequences of steps \(\mathbf{y}=\{s_{t}\}_{t=1}^{T}\) as it is only trained to maximize the next-token probability. For example, in Figure 2, it may generate "write a check" at step 3 with relatively high confidence due to a spurious correlation between "sales price" and "check". We mitigate this issue via step-wise guided decoding. Rather than generating plans greedily, we instead generate step-by-step by sampling several candidate next steps and searching for those with a high log-probability under both the distilled student and a verifier. The verifier is tasked with checking sequential ordering and semantic completeness. In an embodied setting, the verifier could be taken over by any affordance or safety module [1] that determines the executability of an action in a given environment. Step Verifier. We introduce an independent verifier, which is trained to check the validity of plan steps and encourage PlaSma to produce more temporally and causally valid plans. The verifier takes as input a goal, the plan-so-far, and a candidate next step, and outputs a continuous validity score \(p_{\text{verifier}}(s_{t}|g,s_{<t})\in[0,1]\). We implement the verifier by fine-tuning a RoBERTa model [18] to classify whether a candidate step is valid or invalid. For training data, we use steps from available human-written plans3 as positive examples (valid steps). However, since no negative examples are readily available, we automatically create a set of invalid steps as pseudo-negative examples. Inspired by the common errors made by models, we design perturbations over ground-truth plans to target sequential ordering, semantic completeness, topicality, and fluency. See Appendix B.3 for details.
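A minimal sketch of the verifier's scoring interface is given below; the base checkpoint name, input formatting, and label order are illustrative assumptions rather than the released implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# In practice this would load the fine-tuned verifier checkpoint; "roberta-large"
# here only names the base architecture mentioned in the text.
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
verifier = AutoModelForSequenceClassification.from_pretrained("roberta-large",
                                                              num_labels=2)

def verifier_prob(goal, plan_so_far, candidate_step):
    """Validity score p_verifier(s_t | g, s_<t) in [0, 1]."""
    context = f"Goal: {goal} Steps so far: {' -> '.join(plan_so_far)}"
    inputs = tokenizer(context, candidate_step, return_tensors="pt",
                       truncation=True)
    with torch.no_grad():
        logits = verifier(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()   # label 1 assumed "valid"
```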
Figure 2: Verifier-guided Step-wise Beam Search. For brevity, we only showcase \(N=5\) and \(K=2\) for the first step and \(N=4\) and \(K=2\) for the second step. The scores are for illustration purposes only. **Verifier-guided Step-wise Beam Search.** We illustrate our _verifier-guided decoding_ in Figure 2. The procedure generates a plan \(\mathbf{y}=(s_{1},...,s_{T})\) by sequentially sampling and pruning next-step candidates \(s_{t}\). Concretely, at each iteration4, it selects and expands a size-\(K\) beam of plans-so-far, \(Y_{t-1}=\{s_{<t}^{k}\}_{k=1}^{K}\), and generates \(N\) next-step candidates, Footnote 4: Iteration refers to a full step in a plan. \[Y_{t}=\cup_{s_{<t}\in Y_{t-1}}\{(s_{<t}||s_{t}^{n})\mid s_{t}^{n}\sim q(\cdot|\mathcal{T}(x,s_{<t}))\}_{n=1}^{N} \tag{3}\] where \(||\) is concatenation, \(x\) is a task-specific input, and \(q\) is a decoding algorithm. We encourage exploration at each step by generating candidates using multiple decoding methods, such as beam search and nucleus sampling with temperature \(1.0\). To select the top-K scoring next-step candidates \(S_{t}^{*}\), we use a value function \(v(s_{\leq t})\rightarrow\mathbb{R}\) which returns the weighted sum of the normalized sequence log-likelihood from the student model and the verifier validity score, \[S_{t}^{*}=\arg\text{top-K}_{s_{\leq t}\in Y_{t}}v(s_{\leq t}) \tag{4}\] \[v(s_{\leq t})=\alpha\log p_{\theta}(s_{\leq t})+(1-\alpha)\log p_{\text{verifier}}(s_{t}|g,s_{<t}) \tag{5}\] with \(\alpha\) controlling the relative impact of the distilled student and the verifier. The search ends when the beam contains \(K\) completed plans. We return the highest-scored plan as the final output. Our step-wise beam search strategy maintains a diverse set of candidate plans during the decoding process, allowing the model to explore multiple plausible paths before converging on the most promising one. ## 3 Experiments **Implementation Details.** While any model with few-shot capabilities could be used, we choose our teacher model \(\mathcal{M}\) to be GPT-3 text-curie-001 [4] for collecting the goals and initial plans, and GPT-3 text-davinci-003 for collecting conditions and counterfactual plans.5 We sample data points from GPT-3 using nucleus sampling (\(p=0.98\)) and a temperature of \(T=0.9\). For our student models, we try a range of model sizes in the T5 family [33], such as T5-large, T5-3B, and T5-11B. Student models are trained using Huggingface Transformers [42]. Main experiments can be done on 2 GPUs with 48GB of memory. Footnote 5: In our preliminary experiment, we found text-davinci-003 (the strongest GPT-3 version at the time) to be helpful for the more challenging counterfactual data collection. During inference, we use a beam of size \(K=5\) for regular beam search, and \(N=10\) (next-step candidates), beam \(K=5\), and \(p=0.9\) for our verifier-guided step-wise decoding (see §2.3). **Baselines.** For each task, we compare our distilled students with their corresponding teacher, zero-shot and few-shot variants of GPT-3 [4], CoCoGen [23], and human performance (when available). CoCoGen frames the planning task as a code generation task and uses a pre-trained code LM (code-davinci-002) in a few-shot setting.
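The decoding procedure of §2.3 with these settings can be summarized by the following schematic implementation. The helpers `generate_step_candidates` (student sampling of next-step candidates), `student_logprob` (length-normalized plan log-likelihood), and `verifier_prob` (the step verifier sketched above) are assumed interfaces, and the termination logic is simplified relative to the actual system.

```python
import math

def guided_stepwise_beam_search(goal, generate_step_candidates, student_logprob,
                                verifier_prob, K=5, N=10, alpha=0.75,
                                max_steps=10, end_token="<END>"):
    """Keep a size-K beam of plans-so-far; expand each with N candidate next
    steps and retain the top-K under the value function of Eq. (5)."""
    beam = [([], 0.0)]                                   # (plan-so-far, score)
    for _ in range(max_steps):
        candidates = []
        for plan, score in beam:
            if plan and plan[-1] == end_token:           # already completed
                candidates.append((plan, score))
                continue
            for step in generate_step_candidates(goal, plan, n=N):
                new_plan = plan + [step]
                value = (alpha * student_logprob(goal, new_plan)
                         + (1 - alpha) * math.log(max(verifier_prob(goal, plan, step), 1e-8)))
                candidates.append((new_plan, value))
        beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:K]
        if all(plan and plan[-1] == end_token for plan, _ in beam):
            break                                        # K completed plans
    return max(beam, key=lambda c: c[1])[0]              # highest-scoring plan
```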
Next, we present the experimental setup for each task, along with the corresponding results. ### Goal-based Planning In this section, we aim to study two key research questions through our experiments. Firstly, we seek to investigate the extent to which scale impacts the distillation of procedural knowledge. Secondly, we aim to examine whether the scale gap can be bridged through the use of multitasking and/or a novel decoding algorithm. In essence, we seek to determine whether small language models can perform procedural planning tasks with the same level of proficiency as large language models. **Evaluation Set.** For the original planning task, we use human-written plans from the test set of the ProScript [34] dataset as our evaluation data. **Setup.** We compare several student models of varying scales (770M-11B) with the teacher model, text-curie-001, and extremely large-scale models (175B). For all student models, we decode using both regular beam search (PlaSma) and our verifier-guided step-wise beam search (PlaSma+). **Does scale matter?** Larger models perform relatively better across all aspects. **Does multi-task distillation help bridge the scale gap?** As we observe, multi-task distillation almost always wins over its task-specific counterpart, with the exception of the smallest student, PlaSma (770M). We posit that very small student models might not have enough capacity to leverage the related tasks efficiently during multi-tasking. **Does verifier-guided decoding help bridge the scale gap?** Pairing models with our proposed verifier-guided step-wise decoding substantially improves performance across students of varying sizes over all aspects. Specifically, compared with regular beam search, our proposed decoding results in 7%-48% relative improvements in overall quality across different student sizes. The improvements achieved by the verifier-guided decoding are larger for smaller students. We showcase the comparisons with qualitative examples in Appendix Table 8. The best distilled students with 770M, 3B, and 11B parameters achieve 14.13%, 16%, and 22.59% relative improvements, respectively, over their teacher model (text-curie-001). Finally, our best distilled model (11B PlaSma-Mul+) performs on par with humans and is competitive with orders-of-magnitude larger models (175B).6 Figure 3 visualizes how we bridge the scale gap using our multi-task distillation and verifier-guided step-wise decoding. Footnote 6: Pairwise annotator agreements (i.e., how often two annotators agree on the answer) are 0.78, 0.84, and 0.80 for coverage, order, and overall quality, respectively. **Effect of symbolic distillation.** In this experiment, we compare models trained/tested on human-written pairs of (goal, plan) from the ProScript dataset [34], our model-generated dataset CoPlan, and the mix of both. Models are initialized with T5-11B. We generate plans using our proposed verifier-guided decoding for 50 and 150 randomly sampled goals from ProScript and CoPlan, respectively. We use the same human evaluation setup as before. Table 2 shows that training on our LLM-generated CoPlan dataset consistently transfers better to the human-written dataset, ProScript. Training on the mix of both datasets, however, achieves the best performance. Intuitively, we observe that models are in general better at tackling LLM-generated data.
### Counterfactual Planning and Revision Here, we seek to benchmark language models' planning abilities under constrained (contextually grounded) situations. This task goes beyond the original planning task, requiring models to produce novel linguistic alternatives for unseen situations. **Evaluation Set.** To create an evaluation set, we generate conditions and counterfactual plans for the test set of ProScript following Step 1 in §2.1. We then only use human-verified tuples of (goal, plan, condition, counterfactual plan) as our test set for the counterfactual planning and revision tasks. **Setup.** We compare 3B and 11B student models with GPT-3 Curie and the 175B teacher model, text-davinci-003, in zero/few-shot settings. During inference, we use our proposed verifier-guided step-wise beam search with \(\alpha=0.75\) to weight the student model's probability more heavily than the verifier validity score.7 Footnote 7: We performed a hyperparameter search over \(\alpha=\{0.5,0.75,0.8\}\). **Metric.** We conduct human evaluation on the AMT platform. We generate (counterfactual) plans for 300 randomly sampled examples using each model. We ask 3 human annotators to rate each generated plan based on whether it contains the necessary steps to make the goal achievable _while satisfying the condition_. We provide 3 options for the annotators to pick from: **A**: The plan contains all the necessary steps to meet the requirements of the condition on the goal, **B**: The plan addresses the condition, but it is trivial and lacks thoughtfulness8, and **C**: The plan does NOT address the condition or does so very poorly. We take the majority vote for the final results. Details on crowd-sourcing human evaluation can be found in Appendix Figure 11. Footnote 8: An example of a trivial modification is addressing the condition "you have no money" by adding a step "find money" to the plan. **Results.** Figure 4 depicts the results. Larger students perform better on both tasks. In counterfactual planning, our 11B PlaSma-Mul+ demonstrates a 93.33% success rate in producing high-quality plans while adhering to the given condition, which is comparable to the performance of the 175B-parameter Davinci model in a zero-shot setting. Furthermore, our model generates slightly fewer low-quality plans, only 7 as opposed to 12 by Davinci. While multi-tasking seems to be helpful in (counterfactual) planning, this is not always the case for counterfactual revision. We hypothesize that the reason for this could be that the original and counterfactual planning tasks, which do not involve modifying an existing plan, may negatively impact the revision task. The best performance for counterfactual plan revision is achieved by Davinci (90%), followed by PlaSma+ (86.33%).9 We also collect additional feedback from annotators on the errors made by models. Results are reported in Appendix Table 11, showing that "missing necessary steps" is the most prevalent mistake. Footnote 9: Pairwise annotator agreements are 0.96 and 0.94 for counterfactual planning and revision, respectively. We provide qualitative examples of model generations across all three tasks in Table 4. More examples of (good and bad) generations according to human annotators are provided in Appendix Tables 9 and 10.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Test on \(\rightarrow\)} & \multicolumn{3}{c}{**ProScript**} & \multicolumn{3}{c}{**CoPlan**} \\ \cline{2-7} & **Coverage** & **Order** & **Overall Quality** & **Coverage** & **Order** & **Overall Quality** \\ \hline **ProScript** & 4.38 & 4.54 & 4.35 & 4.51 & 4.81 & 4.58 \\ **CoPlan** & 4.55 & 4.74 & 4.63 & 4.72 & 4.86 & 4.73 \\ **Mix** & **4.77** & **4.88** & **4.65** & **4.77** & **4.88** & **4.78** \\ \hline \hline \end{tabular} \end{table} Table 2: Effect of symbolic knowledge distillation. The model trained on our CoPlan dataset transfers better to the other dataset, ProScript. ### Application to Embodied Agents An important application of PlaSma is enabling an agent to plan according to a given high-level goal. We evaluate PlaSma on the task of planning in the VirtualHome [29] environment. In this environment, agents can perform household activities, e.g., "paint ceiling", through programs composed of supported actions (42 in total) and arguments. For evaluation, we use their test set consisting of 88 goals (and corresponding gold programs). We compare our best student PlaSma-Mul (11B) with Planner [11], a 1-shot GPT-3 (175B) model with several inference-time strategies to ensure executability in embodied environments. We follow their procedure to translate generated steps from natural language into steps executable in the environment. To apply our model to VirtualHome, we finetune PlaSma-Mul on \(\sim\) 4K human-labeled examples and also finetune the step verifier on the same data using the method described in Section 2.3. We show, in Table 3, that our model generates steps that are significantly more executable (according to an automatic metric) and also more complete (according to human judges). More experimental details can be found in Appendix E. ## 4 Related Works **Procedural Planning** The problem of planning to accomplish a goal via sub-steps is widely studied in two contexts. One is script knowledge generation, which is a long-standing NLP problem [36]. Collecting script knowledge requires either human annotation [39], unsupervised feature-based extraction [5], or, more recently, methods that utilize task-specific fine-tuned LLMs [34] and pipeline-based approaches [35]. In addition, there is a line of procedural planning that involves planning with executable actions that can be executed by robots in real-life environments [11; 1; 43; 12]. Recent approaches view planning as a conditional text generation problem and use LLMs in a zero/few-shot prompting mode to tackle the task [23; 11; 1; 22]. Figure 4: Human evaluation results of 300 generations for counterfactual planning and revision tasks. Left: in counterfactual planning, our best student model PlaSma-Mul+ (11B), with 16\(\times\) fewer parameters, is on par with the GPT-3 Davinci model. Right: in counterfactual revision, our best student model PlaSma+ (11B) is able to generate good counterfactual plans 86.33% of the time.
\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**model**} & **Executability** & **LCS** & **Correctness** \\ & **(\%)** & **(\%)** & **(\%)** \\ \hline Planner (175B) [11] & 77.17 & 19.10 & 18.33 \\ \hline PlaSma-Mul\({}^{PT}\) (11B) & 76.38 & 28.36 & 41.38 \\ PlaSma-Mul+\({}^{FT}\) (11B) & **94.18** & **31.93** & **43.68** \\ \hline Human & 100 & N/A & 66.66 \\ \hline \hline \end{tabular} \end{table} Table 3: Human-evaluated correctness along with (automatic) executability and LCS scores on the VirtualHome environment [29]. Steps generated by our model are more executable and more correct for accomplishing the task. Despite showing strong performance, their success heavily relies on scale. However, in this paper, we seek to achieve comparable performance while using more parameter-efficient and accessible models. **Symbolic Knowledge Distillation** Crowd-sourcing human-written datasets at scale is both challenging and costly. Therefore, there has been a growing interest in using LLM-generated data to train smaller models. This approach, which falls under the conceptual framework of symbolic knowledge distillation [41], has been applied to simpler classification tasks [37], reasoning [38; 10; 46; 7], as well as commonsense and general knowledge base construction [41; 3]. This approach not only achieves promising performance on smaller models but is also cost-efficient compared to pre-training smaller models from scratch [13]. In a concurrent work, Yuan et al. [45] proposed a similar approach to distill script knowledge from LLMs for a constrained planning task. However, unlike our "conditions", which can take a free-form format, their constraints are limited to specific types obtained by extending an original goal with a modifier, intent, or method. **Decoding-time Algorithms** Decoding-time algorithms are an emerging approach for adapting language models' outputs to task-specific characteristics. Works in this line often focus on incorporating explicit lexical constraints at inference time so that the generated text is constrained to contain certain words [20; 19; 9; 26]. In addition to discrete lexical constraints, applying continuous optimization functions such as a KL loss has also been found to be effective [30; 31; 15; 8]. Perhaps our approach is most similar to function-guided decoding methods. Krause et al. [14] and Yang et al. [44] fuse next-token probabilities with desired attributes' probabilities at inference time using a discriminator model. These and related token-level beam search variants assume access to per-token logits and gradient updates. Our decoding method, however, relies only on model log-probabilities and a verifier to enforce semantic and temporal constraints at the step level. ## 5 Conclusions and Future Work In this paper, we focus on procedural planning, a challenging task that involves decomposing high-level goals into ordered steps. We introduce PlaSma as an effective approach that uses smaller and more accessible models. By leveraging symbolic procedural knowledge distillation and an inference-time algorithm, we have endowed smaller models with enhanced procedural knowledge and planning capabilities. Furthermore, we introduced the task of Counterfactual Planning, which involves generating/revising plans to accommodate realistic counterfactual scenarios.
Table 4: Qualitative examples of model generations across the three tasks, listing for each example the goal, condition, initial plan, and generated (counterfactual) plan. Our results demonstrate that significantly smaller models can effectively compete with and often outperform their larger teacher models in both original and counterfactual settings. We hope our work sheds light on new directions towards developing smaller yet powerful multi-modal models for (counterfactual) procedural planning and reasoning. ## 6 Acknowledgements This work was funded in part by the DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and the Allen Institute for AI. We also thank the Beaker Team at the Allen Institute for AI for helping with the compute infrastructure and OpenAI for providing access to the GPT-3 API.
2303.17909
Stringy scaling of n-point Regge string scattering amplitudes
We discover a stringy scaling behavior for a class of n-point Regge string scattering amplitudes (RSSA). The number of independent kinematics variables is found to be reduced by dim M.
Sheng-Hong Lai, Jen-Chi Lee, Yi Yang
2023-03-31T09:15:51Z
http://arxiv.org/abs/2303.17909v1
# Stringy scaling of \(n\)-point Regge string scattering amplitudes ###### Abstract We discover a _stringy scaling_ behavior for a class of \(n\)-point Regge string scattering amplitudes (RSSA). The number of independent kinematics variables is found to be reduced by dim\(\mathcal{M}\). ## I Introduction Recent developments in string scattering amplitudes (SSA) have shown that a class of 4-point SSA form representations of the \(SL(K+3,C)\) group [1; 2]. These are SSA with three tachyons and one arbitrary string state \[\left|r_{n}^{T},r_{m}^{P},r_{l}^{L}\right\rangle=\prod_{n>0}\left(\alpha_{-n}^{T}\right)^{r_{n}^{T}}\prod_{m>0}\left(\alpha_{-m}^{P}\right)^{r_{m}^{P}}\prod_{l>0}\left(\alpha_{-l}^{L}\right)^{r_{l}^{L}}\left|0,k\right\rangle \tag{1}\] where \(e^{P}=\frac{1}{M_{2}}(E_{2},\mathrm{k}_{2},0)=\frac{k_{2}}{M_{2}}\) is the momentum polarization, \(e^{L}=\frac{1}{M_{2}}(\mathrm{k}_{2},E_{2},0)\) is the longitudinal polarization, and \(e^{T}=(0,0,1)\) is the transverse polarization on the \((2+1)\)-dimensional scattering plane. Note that SSA of three tachyons and one arbitrary string state with polarizations orthogonal to the scattering plane vanish. In addition to the mass level \(M_{2}^{2}=2(N-1)\) with \[N=\sum_{\begin{subarray}{c}n,m,l>0\\ \{r_{j}^{T}\neq 0\}\end{subarray}}\left(nr_{n}^{T}+mr_{m}^{P}+lr_{l}^{L}\right), \tag{2}\] another important index \(K\) was identified for the state in Eq.(1) [3] \[K=\sum_{\begin{subarray}{c}n,m,l>0\\ \{r_{j}^{T}\neq 0\}\end{subarray}}\left(n+m+l\right) \tag{3}\] where \(X=(T,P,L)\) and one has put \(r_{n}^{T}=r_{m}^{P}=r_{l}^{L}=1\) in Eq.(2) to arrive at the definition of \(K\) in Eq.(3). Intuitively, \(K\) counts the variety of the \(\alpha_{-j}^{X}\) oscillators appearing in Eq.(1). For example, for the state \(\left(\alpha_{-1}^{T}\right)^{3}\alpha_{-2}^{L}\left|0,k\right\rangle\) one has \(N=1\cdot 3+2\cdot 1=5\) (so \(M_{2}^{2}=8\)) and \(K=1+2=3\). The representation bases of the above subclass of 4-point SSA were soon extended to all 4-point SSA with arbitrary four string states, and eventually to all \(n\)-point SSA with arbitrary \(n\) string states [4; 5]. It is thus important to study whether other known interesting characteristics of the 4-point SSA can be similarly extended to the \(n\)-point SSA. One such characteristic of the 4-point SSA is the existence of infinite linear relations and their associated _constant ratios_, independent of the scattering angle \(\phi\), among hard SSA (HSSA) at each fixed mass level of the open bosonic string spectrum. These infinite linear relations and their associated constant ratios were first conjectured by Gross [6; 7] and later explicitly calculated by the method of decoupling of zero-norm states in [8; 9; 10; 11; 12]. Indeed, in one of the authors' previous publications [13], we discovered a general _stringy scaling_ behavior for all \(n\)-point HSSA to all string loop orders. For the simplest case of \(n=4\), the stringy scaling behavior reduces to the infinite linear relations and the constant ratios of HSSA at each mass level mentioned above. For this case, the ratios are independent of the single scattering angle \(\phi\), and thus the number of independent kinematics variables is reduced from 1 to 0, with dim\(\mathcal{M}=1-0=1\). For general higher \(n\)-point HSSA, the stringy scaling behavior implies that the number of independent kinematics variables on which the ratios depend is reduced by dim\(\mathcal{M}\) [13]. See the definition of dim\(\mathcal{M}\) in Eq.(35) and Eq.(40). As a result, the linear relations and their associated constant ratios of 4-point HSSA persist only in the parameter spaces \(\mathcal{M}\) for the cases of higher \(n\)-point HSSA [13].
See the example of constant ratios calculated among 6-point HSSA in Eq.(39). In this paper, we will extend our calculation of the stringy scaling behavior of HSSA to the case of Regge SSA (RSSA). We will demonstrate a stringy scaling behavior for a class of \(n\)-point RSSA, and the number of independent kinematics variables is again found to be reduced by dim\(\mathcal{M}\). This paper is organized as follows. In section II, we review and give a detailed calculation of the stringy scaling behavior of HSSA [13]. In section III, we give a saddle point calculation of the hard stringy scaling to justify the zero norm state (ZNS) calculation in section II. Sections IV and V are the main parts of this paper, where we extend the calculation of HSSA to the stringy scaling of RSSA. We will derive a stringy scaling behavior for a class of \(n\)-point RSSA with arbitrary \(n\) in section V. A brief conclusion is given in section VI. ## II The Hard Stringy Scaling A brief report on the stringy scaling of \(n\)-point hard string scattering amplitudes (HSSA) was recently given in [13]. In this section, we first give a detailed calculation of the hard stringy scaling behavior. This also serves as a preparation for the calculation of the stringy scaling of Regge string scattering amplitudes (RSSA) to be discussed in sections IV and V. ### Stringy scaling of \(4\)-point HSSA The first stringy scaling behavior was conjectured by Gross in 1988 [6], who claimed that all 4-point HSSA (\(E\to\infty\), fixed \(\phi\)) at each fixed mass level share the same functional form. That is, all HSSA at each fixed mass level are proportional to each other with _constant_ ratios independent of the scattering angle \(\phi\). To show this remarkable behavior, the starting point is to apply the 4-point \(l\)-loop stringy on-shell Ward identities [8; 9] \[\left\langle V_{1}\chi V_{3}V_{4}\right\rangle_{l-loop}=0 \tag{1}\] in the hard scattering limit. In Eq.(1), \(V_{j}\) can be any string vertex and the second vertex \(\chi\) is the vertex of a zero-norm state (ZNS). In the hard scattering limit, components of polarizations orthogonal to the scattering plane are of subleading order in energy. On the other hand, it can be shown that at each fixed mass level \(M^{2}=2(N-1)\) only states of the following form [11; 12] (in the hard scattering limit \(e^{P}\simeq e^{L}\)) \[\left|N,2m,q\right\rangle=\left(\alpha_{-1}^{T}\right)^{N-2m-2q}\left(\alpha_{-1}^{L}\right)^{2m}\left(\alpha_{-2}^{L}\right)^{q}\left|0;k\right\rangle \tag{2}\] are leading order in energy. Figure 1: Kinematic variables in the center of mass frame. There are two types of physical ZNS in the old covariant first quantized open bosonic string spectrum: [14] \[\text{Type I}:L_{-1}\left|y\right\rangle,\text{ where }L_{1}\left|y\right\rangle=L_{2}\left|y\right\rangle=0,\text{ }L_{0}\left|y\right\rangle=0; \tag{2.3}\] \[\text{Type II}:(L_{-2}+\frac{3}{2}L_{-1}^{2})\left|\widetilde{y}\right\rangle,\text{ where }L_{1}\left|\widetilde{y}\right\rangle=L_{2}\left|\widetilde{y}\right\rangle=0,\text{ }(L_{0}+1)\left|\widetilde{y}\right\rangle=0.(D=26\text{ {\bf only}}).
\tag{2.4}\] **(1)** We first consider \(\chi\) to be the type I hard ZNS (HZNS) calculated from Type I ZNS \[L_{-1}|N-1,2m-1,q\rangle =(M\alpha_{-1}^{L}+\alpha_{-2}^{L}\alpha_{1}^{L}+\underbrace{ \alpha_{-2}^{T}\alpha_{1}^{T}+\alpha_{-3}\cdot\alpha_{2}+\cdots}_{irrelevant})|N-1,2m-1,q\rangle\] \[\simeq M|N,2m,q\rangle+(2m-1)|N,2m-2,q+1\rangle \tag{2.5}\] where many terms are omitted because they are not of the form of Eq.(2.2). This implies the following relation among 4-point amplitudes \[\mathcal{T}^{(N,2m,q)}=-\frac{2m-1}{M}\mathcal{T}^{(N,2m-2,q+1)}. \tag{2.6}\] Using this relation repeatedly, we get \[\mathcal{T}^{(N,2m,q)}=\frac{(2m-1)!!}{(-M)^{m}}\mathcal{T}^{(N,0,m+q)}. \tag{2.7}\] **(2)** Next, we consider another class of HZNS calculated from type II ZNS \[L_{-2}|N-2,0,q\rangle =(\frac{1}{2}\alpha_{-1}^{T}\alpha_{-1}^{T}+M\alpha_{-2}^{L}+ \underbrace{\alpha_{-3}\cdot\alpha_{1}+\cdots}_{irrelevant})|N-2,0,q\rangle\] \[\simeq\frac{1}{2}|N,0,q\rangle+M|N,0,q+1\rangle. \tag{2.8}\] Again, irrelevant terms are omitted here. From this we deduce that \[\mathcal{T}^{(N,0,q+1)}=-\frac{1}{2M}\mathcal{T}^{(N,0,q)}, \tag{2.9}\] which leads to \[\mathcal{T}^{(N,0,q)}=\frac{1}{(-2M)^{q}}\mathcal{T}^{(N,0,0)}. \tag{2.10}\] In conclusion, the decoupling of ZNS in Eq.(2.7) and Eq.(2.10) leads to constant ratios among 4-point HSSA [8; 9; 11; 12] \[\frac{\mathcal{T}^{(N,2m,q)}}{\mathcal{T}^{(N,0,0)}}=\frac{(2m)!}{m!}\left( \frac{-1}{2M}\right)^{2m+q}.(\textbf{independent of }\phi\text{!}) \tag{2.11}\] In Eq.(2.11) \(\mathcal{T}^{(N,2m,q)}\) is the 4-point HSSA of any string vertex \(V_{j}\) with \(j=1,3,4\) and \(V_{2}\) is the high energy state in Eq.(2.2); while \(\mathcal{T}^{(N,0,0)}\) is the 4-point HSSA of any string vertex \(V_{j}\) with \(j=1,3,4\), and \(V_{2}\) is the leading Regge trajectory string state at mass level \(N\). Note that we have omitted the tensor indice of \(V_{j}\) with \(j=1,3,4\) and keep only those of \(V_{2}\) in \(\mathcal{T}^{(N,2m,q)}\). ### Examples of \(4\)-point stringy scaling #### ii.2.1 Bosonic open string Since the ratios of the amplitudes in Eq.(11) are independent of the choices of \(V_{1}\), \(V_{3}\) and \(V_{4}\), we choose them to be tachyons and \(V_{2}\) to be Eq.(2). On the other hand, since the ratios are independent of the loop order, we choose to calculate HSSA of \(l=0\) loop. An explicit amplitude calculation for \(M^{2}=4\), \(6\) and \(8\) gives [8; 9; 11; 12] \[{\cal T}_{TTT}:{\cal T}_{(LLT)}:{\cal T}_{(LT)}:{\cal T}_{[LT]}=8:1:-1:-1, \tag{12}\] \[{\cal T}_{(TTTT)}:{\cal T}_{(TTLL)}:{\cal T}_{(LLLL)}:{\cal T}_{ TT,L}:{\cal T}_{(TTL)}:{\cal T}_{(LLL)}:{\cal T}_{(LLL)}\] \[=16:\frac{4}{3}:\frac{1}{3}:-\frac{2\sqrt{6}}{3}:-\frac{4\sqrt{6 }}{9}:-\frac{\sqrt{6}}{9}:\frac{2}{3} \tag{13}\] and \[{\cal T}_{(TTTTT)}:{\cal T}_{(TTTL)}:{\cal T}_{(TTTL)}:{\cal T}_ {(TLLL)}:{\cal T}_{(TLLL)}:{\cal T}_{(TLLL)}:{\cal T}_{(TLL)}:{\cal T}_{TLL}:{ \cal T}_{TLL,L}:{\cal T}_{TTT,L}\] \[=32:\sqrt{2}:2:\frac{3\sqrt{2}}{16}:\frac{3}{8}:\frac{1}{3}: \frac{2}{3}:\frac{\sqrt{2}}{16}:3\sqrt{2}, \tag{14}\] respectively. These are all remarkably consistent with Eq.(11) of ZNS calculation [15; 16]. It is important to note that for subleading order amplitudes, they are in general _not_ proportional to each other. For \(M^{2}=4\), for example, one gets \(6\) subleading order amplitudes and \(4\) linear relations (on-shell Ward identities) in the ZNS calculation. 
An explicit subleading order amplitude calculation gives [8; 9] \[{\cal T}^{2}_{LLL} \sim -4E^{8}\sin\phi\cos\phi,\] \[{\cal T}^{2}_{LTT} \sim -8E^{8}\sin^{2}\phi\cos\phi, \tag{15}\] which shows that the proportionality coefficients do depend on the scattering angle \(\phi\). #### ii.2.2 Bosonic closed string and D-particle For closed string scatterings [17; 18], one can use the KLT formula [19], which expresses the relation between tree amplitudes of the closed string and the two channels of the open string (\(\alpha^{\prime}_{\rm closed}=4\alpha^{\prime}_{\rm open}=2\)), to obtain the closed string ratios, which are the tensor product of two open string ratios in Eq.(11). On the other hand, it is interesting to find that the ratios of hard closed string D-particle scatterings are again given by the tensor product of two open string ratios [20] \[\frac{T^{\left(N;2m,2m^{{}^{\prime}};q,q^{{}^{\prime}}\right)}_{SD}}{T^{\left(N;0,0;0,0\right)}_{SD}}=\left(-\frac{1}{M_{2}}\right)^{2(m+m^{{}^{\prime}})+q+q^{{}^{\prime}}}\left(\frac{1}{2}\right)^{m+m^{{}^{\prime}}+q+q^{{}^{\prime}}}(2m-1)!!(2m^{\prime}-1)!!, \tag{16}\] which came as a surprise since there is no physical picture for open string D-particle tree scattering amplitudes and thus no factorization of closed string D-particle scatterings into two channels of open string D-particle scatterings, and hence no KLT-like formula there. However, these ratios are consistent with the decoupling of high-energy ZNS calculation. #### ii.2.3 Stringy scaling of Superstring It turned out to be nontrivial to extend the linear relations and their associated constant ratios of the HSSA of the bosonic string to the case of the \(10D\) open superstring. First of all, in addition to the NS-sector, there are massive fermionic states in the R-sector whose vertex operators are still unknown except for the leading Regge trajectory states in the spectrum [21]. So the only known complete vertex operators so far are those for the mass level \(M^{2}=2\) [22], which contains no off-leading massive Regge trajectory fermionic string states. Secondly, in the NS-sector of \(M^{2}=2\) it was surprising to note that [24] there exists no "inter-particle gauge transformation" induced by bosonic ZNS for the two positive-norm physical propagating states, the symmetric spin three and the anti-symmetric spin two states. However, the 4-point HSSA among these two positive-norm states are still related and are indeed again proportional to each other. Presumably, this is due to the massive spacetime SUSY and the existence of spacetime massive fermion string scattering amplitudes in the R-sector of the theory [23]. Thirdly, it was noted that for the HSSA of the NS sector of the superstring, there exist leading order HSSA with polarizations orthogonal to the scattering plane [24]. This is due to the "worldsheet fermion exchange" [25] in the correlation functions and was argued to be related to the HSSA of massive spacetime fermions in the R-sector of the theory [23]. The first calculation of the 4-point superstringy scaling was performed for the NS-sector of the \(10D\) open superstring theory.
There are four classes of HSSA of superstring which are all proportional to each other [25] \[|N,2m,q\rangle\otimes\left|b_{-\frac{1}{2}}^{P}\right\rangle =\left(-\frac{1}{2M_{2}}\right)^{q+m}\frac{(2m-1)!!}{\left(-M_{2} \right)^{m}}\left|N,0,0\right\rangle\otimes\left|b_{-\frac{3}{2}}^{P}\right\rangle, \tag{17}\] \[|N+1,2m+1,q\rangle\otimes\left|b_{-\frac{1}{2}}^{P}\right\rangle =\left(-\frac{1}{2M_{2}}\right)^{q+m}\frac{(2m+1)!!}{\left(-M_{2 }\right)^{m+1}}\left|N,0,0\right\rangle\otimes\left|b_{-\frac{3}{2}}^{P}\right\rangle,\] (18) \[|N+1,2m,q\rangle\otimes\left|b_{-\frac{1}{2}}^{T}\right\rangle =\left(-\frac{1}{2M_{2}}\right)^{q+m}\frac{(2m-1)!!}{\left(-M_{2 }\right)^{m-1}}\left|N,0,0\right\rangle\otimes\left|b_{-\frac{3}{2}}^{P}\right\rangle,\] (19) \[|N-1,2m,q-1\rangle\otimes\left|b_{-\frac{1}{2}}^{T}b_{-\frac{1}{2 }}^{P}\right\rangle =\left(-\frac{1}{2M_{2}}\right)^{q+m}\frac{(2m-1)!!}{\left(-M_{2 }\right)^{m}}\left|N,0,0\right\rangle\otimes\left|b_{-\frac{3}{2}}^{P}\right\rangle. \tag{20}\] Note that, in order to simplify the notation, we have only shown the second state of the four point functions to represent the scattering amplitudes on both sides of each equation above. Eqs.(17) to (20) are thus the SUSY generalization of Eq.(11) for the bosonic string. Moreover, a recent calculation showed that [23] among \(2^{4}\times 2^{4}=256\) 4-point polarized fermion SSA (PFSSA) in the R-sector of \(M^{2}=2\) states, only 16 of them are of leading order in energy and all of them share the same functional form in the hard scattering limit. On the other hand, the ratios of the _complete_ 4-point HSSA in the NS sector of mass level \(M^{2}=2\) which include HSSA with polarizations orthogonal to the scattering plane are [24] \[\left\langle b_{\frac{1}{2}}^{T},\alpha_{-1}^{T}b_{\frac{1}{2}}^{ T}\right\rangle :\left\langle b_{\frac{1}{2}}^{T},\left(2b_{\frac{1}{2}}^{L}\alpha_{-1}^{L}-Mb_ {\frac{3}{2}}^{L}\right)\right\rangle:\left\langle b_{\frac{1}{2}}^{T},\alpha_ {-1}^{T}b_{\frac{1}{2}}^{T_{j}}\right\rangle:\left\langle b_{\frac{1}{2}}^{T},b _{\frac{1}{2}}^{L}b_{\frac{1}{2}}^{T_{j-1}}\right\rangle \tag{22}\] \[=-2k_{3}^{T}E^{2}:-2(\frac{2}{M^{2}}+1)k_{3}^{T}E^{2}:\delta_{ij }2k_{3}^{T}E^{2}:\delta_{lk}\frac{-2k_{3}^{T}E^{2}}{M}\] \[=1:2:-\delta_{ij}:\frac{\delta_{lk}}{\sqrt{2}}.\ \ (\ i,j,k,l=3,4,5,...,9)\] where we have, for simplicity, omitted the last two tachyon vertices in the notation of each HSSA in Eq.(22). In sum, in the NS sector one gets \(1+1+7+7=16\) HSSA in Eq.(22). This result agrees with those of 16 hard massive PFSSA in the R-sector calculated recently [23]. #### ii.1.4 Field theory On the other hand, in field theory, as an example, the leading order process of the elastic scattering of a spin-\(\frac{1}{2}\) particle by a spin-0 particle such as \(e^{-}\pi^{+}\longrightarrow e^{-}\pi^{+}\), the non-vanishing amplitudes were shown to be [26] \[{\cal T}\ (e_{R}^{-}\pi^{+}\longrightarrow e_{R}^{-}\pi^{+})={\cal T}\ (e_{L}^{-}\pi^{+} \longrightarrow e_{L}^{-}\pi^{+})\sim\ \cos\frac{\phi}{2}, \tag{23}\] \[{\cal T}\ (e_{R}^{-}\pi^{+}\longrightarrow e_{L}^{-}\pi^{+})={\cal T} \ (e_{L}^{-}\pi^{+}\longrightarrow e_{R}^{-}\pi^{+})\sim\ \sin\frac{\phi}{2}, \tag{24}\] which are _not_ proportional to each other. 
In QED, as another example, for the leading order process \(e^{-}e^{+}\longrightarrow\mu^{-}\mu^{+}\), only 4 of the 16 hard polarized amplitudes are non-vanishing [27] \[{\cal T}\ (e_{R}^{-}e_{L}^{+}\longrightarrow\mu_{R}^{-}\mu_{L}^{+})={\cal T}\ (e_{L}^{-}e_{R}^{+}\longrightarrow\mu_{L}^{-}\mu_{R}^{+})\sim\ (1+\cos\theta)=2\ \cos^{2}\frac{\phi}{2}, \tag{25}\] \[{\cal T}\ (e_{R}^{-}e_{L}^{+}\longrightarrow\mu_{L}^{-}\mu_{R}^{+})={\cal T}\ (e_{L}^{-}e_{R}^{+}\longrightarrow\mu_{R}^{-}\mu_{L}^{+})\sim\ (1-\cos\theta)=2\ \sin^{2}\frac{\phi}{2}, \tag{26}\] and they are _not_ all proportional to each other. ### Stringy scaling of higher point (\(n\geq 5\)) HSSA It is tempting to extend the stringy scaling behavior of 4-point SSA derived in the previous subsection to the higher point SSA. The \(n\)-point stringy on-shell Ward identities can be written as \[\left\langle V_{1}\chi V_{3}\cdots V_{n}\right\rangle_{l-loop}=0 \tag{27}\] where \(\chi\) again is the vertex of a ZNS. We begin the discussion with a simple kinematics regime on the scattering plane. #### ii.3.1 On the scattering plane In the hard scattering limit on the scattering plane, the space parts of the momenta \(k_{j}\) (\(j=3,4,\cdots,n\)) form a closed 1-chain with \((n-2)\) sides due to momentum conservation. It turns out that all of the 4-point calculations in the previous subsection persist, and one ends up with Eq.(11) again [13]. However, while for \(n=4\) the _ratios_ are independent of the single scattering angle \(\phi\), for \(n=5\) the ratios are independent of 3 kinematics variables (2 angles and 1 fixed ratio of two infinite energies) or, for simplicity, 3 scattering "angles". For \(n=6\), there are 5 scattering "angles", etc. #### ii.3.2 Out of the scattering plane The general high energy states at each fixed mass level \(M^{2}=2(N-1)\) can be written as [13] \[\left|\left\{p_{i}\right\},2m,q\right\rangle=\left(\alpha_{-1}^{T_{1}}\right)^{N+p_{1}}\left(\alpha_{-1}^{T_{2}}\right)^{p_{2}}\cdots\left(\alpha_{-1}^{T_{r}}\right)^{p_{r}}\left(\alpha_{-1}^{L}\right)^{2m}\left(\alpha_{-2}^{L}\right)^{q}\left|0;k\right\rangle \tag{28}\] where \(\sum_{i=1}^{r}p_{i}=-2(m+q)\) with \(r\leq 24\). In Eq.(28), \(T_{j}\) is the \(j\)th transverse direction orthogonal to \(k_{2}\). For a higher dimensional scattering space, one generalizes the transverse polarization \(e^{T}=(0,0,1)\) to \(e^{\hat{T}}=(0,0,\vec{\omega})\) where \[\omega_{i}=\cos\theta_{i}\prod_{\sigma=1}^{i-1}\sin\theta_{\sigma}\quad\mbox{with }i=1,\cdots,r,\ \theta_{r}=0 \tag{29}\] are the solid angles in the transverse space spanned by the 24 transverse directions \(e^{T_{i}}\). Note that \(\alpha_{-1}^{\hat{T}}=\alpha_{-1}\cdot e^{\hat{T}}\) etc.
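As a quick numerical illustration (not part of the original derivation), the components defined in Eq.(29) form a unit vector, consistent with \(e^{\hat{T}}\) being a unit transverse polarization; the following minimal sketch checks this for an \(r=3\) example.

```python
import math
from math import prod

def omegas(thetas):
    """Eq.(29): omega_i = cos(theta_i) * prod_{sigma < i} sin(theta_sigma), theta_r = 0."""
    th = list(thetas) + [0.0]                  # append theta_r = 0
    return [math.cos(th[i]) * prod(math.sin(th[s]) for s in range(i))
            for i in range(len(th))]

w = omegas([0.4, 1.1])                         # an r = 3 example: (theta_1, theta_2)
assert abs(sum(x * x for x in w) - 1.0) < 1e-12
```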
With \(\left(\alpha_{-1}^{T_{i}}\right)=\left(\alpha_{-1}^{\hat{T}}\right)\omega_{i}\), we easily obtain \[\left(\alpha_{-1}^{T_{1}}\right)^{N+p_{1}}\left(\alpha_{-1}^{T_{2}}\right)^{p_{2}}\cdots\left(\alpha_{-1}^{T_{r}}\right)^{p_{r}}\left(\alpha_{-1}^{L}\right)^{2m}\left(\alpha_{-2}^{L}\right)^{q}\left|0;k\right\rangle\] \[=\left(\omega_{1}^{N}\prod_{i=1}^{r}\omega_{i}^{p_{i}}\right)\left(\alpha_{-1}^{\hat{T}}\right)^{N-2m-2q}\left(\alpha_{-1}^{L}\right)^{2m}\left(\alpha_{-2}^{L}\right)^{q}\left|0;k\right\rangle, \tag{30}\] which leads to the ratios of \(n\)-point HSSA [13] \[\frac{\mathcal{T}^{(\{p_{i}\},2m,q)}}{\mathcal{T}^{(\{0_{i}\},0,0)}}=\frac{(2m)!}{m!}\left(\frac{-1}{2M}\right)^{2m+q}\prod_{i=1}^{r}\omega_{i}^{p_{i}} \tag{31}\] where \(\mathcal{T}^{(\{0_{i}\},0,0)}\) is the HSSA of the leading Regge trajectory state at mass level \(M^{2}=2(N-1)\). It is important to note that the number of kinematics variables on which the ratios in Eq.(31) depend is reduced. This stringy scaling behavior of \(n\)-point (\(n\geq 5\)) HSSA is the generalization of that of 4-point HSSA in Eq.(11). Since the result of the ZNS calculation in Eq.(31) is based on the stringy Ward identity in Eq.(27), the ratios calculated in Eq.(31) are valid to all string loop orders. ### Degree of Stringy Scaling We saw in the previous section that for the simple case with \(n=4\) and \(r=1\), one has two variables, \(s\) and \(t\) (or \(E\), \(\phi\)). The ratios of all HSSA are independent of the scattering angle \(\phi\), and we call the degree of the scaling \(\mathrm{dim}\mathcal{M}=1\). The number of kinematics variables on which the ratios depend is reduced from \(1\) to \(0\), and we have \(1-0=\mathrm{dim}\mathcal{M}=1\) (see the definition of \(\mathcal{M}\) below). For the general \(n\)-point HSSA with \(r\leq 24\) and \(d=r+2\), we have the vectors \(k_{j}\) with \(j=1,\cdots,n\) and \(k_{j}\in R^{d-1,1}\). The number of kinematics variables is \(n\left(d-1\right)-\frac{d\left(d+1\right)}{2}\). Indeed, as \(p=E\rightarrow\infty\), which implies \(q_{j}\rightarrow\infty\) in the hard limit, we define the \(26\)-dimensional momenta in the CM frame to be \[k_{1} =\left(E,-E,0^{r}\right),\] \[k_{2} =\left(E,+E,0^{r}\right),\] \[\vdots\] \[k_{j} =\left(-q_{j},-q_{j}\Omega_{1}^{j},-q_{j}\Omega_{2}^{j},\cdots,-q_{j}\Omega_{r}^{j},-q_{j}\Omega_{r+1}^{j}\right) \tag{32}\] where \(j=3,4,\cdots,n\), and \[\Omega_{i}^{j}=\cos\phi_{i}^{j}\prod_{\sigma=1}^{i-1}\sin\phi_{\sigma}^{j}\ \mathrm{with}\ \phi_{j-1}^{j}=0,\ \phi_{i>r}^{j}=0\ \mathrm{and}\ r\leq\min\left\{n-3,24\right\} \tag{33}\] are the solid angles in the \(\left(j-2\right)\)-dimensional spherical space with \(\sum_{i=1}^{j-2}\left(\Omega_{i}^{j}\right)^{2}=1\). In Eq.(32), \(0^{r}\) denotes the \(r\)-dimensional null vector. The condition \(\phi_{j-1}^{j}=0\) in Eq.(33) was chosen to fix the frame by using the rotational symmetry. The independent kinematics variables can be chosen to be some \(\phi_{i}^{j}\) and some fixed ratios of the infinite \(q_{j}\). For the kinematics parameter space \(\mathcal{M}\) defined by [13] \[\omega_{j}\left(\mathrm{kinematics\ parameters\ with}\ E\rightarrow\infty\right)=\mathrm{fixed\ constant}\ \ (j=2,\cdots,r), \tag{34}\] we can count the dimension of \(\mathcal{M}\) to be [13] \[\mathrm{dim}\mathcal{M}=n\left(d-1\right)-\frac{d\left(d+1\right)}{2}-1-(r-1)=\frac{\left(r+1\right)\left(2n-r-6\right)}{2} \tag{35}\] where \(r=d-2\) is the number of transverse directions \(e^{T_{i}}\).
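The counting in Eq.(35) is easy to tabulate. The short sketch below (ours, for illustration only) evaluates the ratio prefactor of Eq.(31) for given \((M,m,q,p_{i},\omega_{i})\) and reproduces the dim\(\mathcal{M}\) values listed in Eq.(40) below.

```python
from math import factorial, prod

def hssa_ratio(M, m, q, p, w):
    """Eq.(31): T^({p_i},2m,q) / T^({0_i},0,0), given the omega_i values w."""
    return (factorial(2 * m) / factorial(m)) * (-1.0 / (2 * M)) ** (2 * m + q) \
        * prod(w[i] ** p[i] for i in range(len(p)))

def dim_M(n, r):
    """Eq.(35): dim M = (r + 1)(2n - r - 6) / 2."""
    return (r + 1) * (2 * n - r - 6) // 2

# reproduces the table in Eq.(40), with r ranging over 1, ..., n-3:
print([[dim_M(n, r) for r in range(1, n - 2)] for n in range(4, 8)])
# [[1], [3, 3], [5, 6, 6], [7, 9, 10, 10]]
```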
In sum, the ratios among \(n\)-point \(HSSA\) with \(r\leq 24\) are constants and independent of the scattering "angles" in the kinematic regime \(\mathcal{M}\). #### ii.2.1 Examples (1). For \(n=5\) and \(r=2\), \(d=r+2=4\) and one has \(n\left(d-1\right)-\frac{d\left(d+1\right)}{2}=5\) parameters (\(r_{1}\) is the ratio of two infinite energies) \[E,\phi_{2}^{3},\phi_{2}^{4},\phi_{3}^{4},r_{1}. \tag{36}\] In the hard scattering limit \(E\rightarrow\infty\), for \(\theta_{1}=fixed\) we get \(\mathrm{dim}\mathcal{M}=3\). (2). For \(n=6\) and \(r=3\), the ratios of \(6\)-point HSSA depends only on \(2\) variables \(\theta_{1}\) and \(\theta_{2}\) instead of \(8\) "angles" and \(\mathrm{dim}\mathcal{M}=6\). For this case, \(\mathcal{M}\) is defined by \[\theta_{j}\left(8\ \mathrm{kinematics\ parameters}\right)=\mathrm{fixed\ constant},\ \ j=1,2, \tag{37}\] and the ratios [13] \[\frac{\mathcal{T}^{\left(\left\{p_{1},p_{2},p_{3}\right\}\right),2m,q\right)} }{\mathcal{T}^{\left(\left\{0,0,0\right\},0,0\right)}}=\frac{\left(2m\right)! }{m!}\left(\frac{-1}{2M}\right)^{2m+q}\left(\cos\theta_{1}\right)^{p_{1}}\left( \sin\theta_{1}\cos\theta_{2}\right)^{p_{2}}\left(\sin\theta_{1}\sin\theta_{2} \right)^{p_{3}} \tag{38}\] are independent of kinematics parameters in the space \(\mathcal{M}\). For example, for say \(\theta_{1}=\frac{\pi}{4}\) and \(\theta_{2}=\frac{\pi}{6}\), we get the ratios among \(6\)-point HSSA \[\frac{\mathcal{T}^{\left(\left\{p_{1},p_{2},p_{3}\right\}\right),2m,q\right)} }{\mathcal{T}^{\left(\left\{0,0,0\right\},0,0\right)}}=\left(-\frac{1}{M} \right)^{2m+q}\left(2m-1\right)!!\left(\frac{1}{2}\right)^{p_{2}+p_{3}}\left( \sqrt{3}\right)^{p_{3}}. \tag{39}\] These ratios for higher point HSSA are one example of generalization of previous ratios calculated in Eq.(11) for the case \(4\)-point HSSA. General cases In general, in the hard scattering limit, the number of scattering "angles" dependence on ratios of \(n\)-point \(HSSA\) with \(r\leq 24\) reduces by dim\(\mathcal{M}\). For a given \((n,r)\), we can calculate some examples of dim\(\mathcal{M}\)[13] \[\begin{array}{ccccc}\text{dim}\mathcal{M}&r=1&r=2&r=3&r=4\\ n=4&1&&\\ n=5&3&3&&\\ n=6&5&6&6&&\\ n=7&7&9&10&10\end{array}. \tag{40}\] Note that for the \(n=4\) and \(r=1\) case, one obtains the previous 4-point case in Eq.(11). ## III Saddle point calculation To justify the ZNS calculation in Eq.(11) and Eq.(31), we use the saddle point calculation to explicitly calculate the HSSA. Since the ratios are independent of the choices of \(V_{j}\) (\(J=1,3,4\cdots,n\)), we choose them to be tachyons and \(V_{2}\) to be the high energy state in Eq.(2). On the other hand, since the ratios are independent of the loop order, we choose to calculate \(l=0\) loop. We begin with the 4-point case [11; 12]. ### The four point calculation The \(t-u\) channel contribution to the stringy amplitude at tree level is (after \(SL(2,R)\) fixing) \[\mathcal{T}^{(N,2m,q)} =\int_{1}^{\infty}dxx^{(1,2)}(1-x)^{(2,3)}\left[\frac{e^{T}\cdot k _{1}}{x}-\frac{e^{T}\cdot k_{3}}{1-x}\right]^{N-2m-2q}\] \[\cdot\left[\frac{e^{P}\cdot k_{1}}{x}-\frac{e^{P}\cdot k_{3}}{1- x}\right]^{2m}\left[-\frac{e^{P}\cdot k_{1}}{x^{2}}-\frac{e^{P}\cdot k_{3}}{(1-x )^{2}}\right]^{q} \tag{41}\] where \((1,2)=k_{1}\cdot k_{2}\) etc. 
In order to apply the saddle-point method, we rewrite the amplitude above into the following form \[\mathcal{T}^{(N,2m,q)}(K)=\int_{1}^{\infty}dx\ u(x)e^{-Kf(x)}, \tag{42}\] where \[K \equiv-(1,2)\rightarrow\frac{s}{2}\to 2E^{2}, \tag{43}\] \[\tau \equiv-\frac{(2,3)}{(1,2)}\rightarrow-\frac{t}{s}\rightarrow\sin ^{2}\frac{\phi}{2},\] (44) \[f(x) \equiv\ln x-\tau\ln(1-x),\] (45) \[u(x) \equiv\left[\frac{(1,2)}{M}\right]^{2m+q}(1-x)^{-N+2m+2q}( \underline{f^{\prime}})^{2m}(f^{\prime\prime})^{q}(-e^{T}\cdot k_{3})^{N-2m-2 q}. \tag{46}\] The saddle-point for the integration of moduli, \(x=x_{0}\), is defined by \[f^{\prime}(x_{0})=0, \tag{47}\] and we have \[x_{0}=\frac{1}{1-\tau}=\sec^{2}\frac{\phi}{2},\hskip 28.452756pt1-x_{0}=-\frac{ \tau}{1-\tau},\hskip 28.452756ptf^{\prime\prime}(x_{0})=(1-\tau)^{3}\tau^{-1}. \tag{48}\] Due to the factor \((f^{\prime})^{2m}\) in Eq.(46), it is easy to see that [11; 12] \[u(x_{0})=u^{\prime}(x_{0})=....=u^{(2m-1)}(x_{0})=0, \tag{49}\] \[u^{(2m)}(x_{0})=\left[\frac{(1,2)}{M}\right]^{2m+q}(1-x_{0})^{-N+2m+2q}(2m)!(f_{0} ^{\prime\prime})^{2m+q}(-e^{T}\cdot k_{3})^{N-2m-2q}. \tag{3.10}\] With these inputs, one can easily evaluate the Gaussian integral associated with the four-point amplitudes [11; 12] \[\int_{1}^{\infty}dx\ u(x)e^{-Kf(x)}\] \[=\sqrt{\frac{2\pi}{Kf_{0}^{\prime\prime}}}e^{-Kf_{0}}\left[\frac{ u_{0}^{(2m)}}{2^{m}\ m!\ (f_{0}^{\prime\prime})^{m}\ K^{m}}+O(\frac{1}{K^{m+1}})\right]\] \[=\sqrt{\frac{2\pi}{Kf_{0}^{\prime\prime}}}e^{-Kf_{0}}\left[(-1)^ {N-q}\frac{2^{N-2m-q}(2m)!}{m!\ M^{2m+q}}\ \tau^{-\frac{N}{2}}(1-\tau)^{\frac{3N}{2}}E^{N}+O(E^{N-2})\right]. \tag{3.11}\] This result shows explicitly that with one tensor and three tachyons, the energy and angle dependence for the four-point HSS amplitudes only depend on the level \(N\)[11; 12] \[\lim_{E\rightarrow\infty}\frac{\mathcal{T}^{(N,2m,q)}}{\mathcal{ T}^{(N,0,0)}} =\frac{(-1)^{q}(2m)!}{m!(2M)^{2m+q}}\] \[=(-\frac{2m-1}{M})....(-\frac{3}{M})(-\frac{1}{M})(-\frac{1}{2M} )^{m+q}, \tag{3.12}\] which is remarkably consistent with calculation of decoupling of high energy ZNS obtained in Eq.(2.11). ### The \(n\)-point HSSA with \(r=1\) To illustrate the \(n\)-point HSSA calculation, we begin with \(n\)-point HSSA with \(r=1\). We want to calculate \(n\)-point HSSA with \((n-1)\) tachyons and \(1\) high energy state in Eq.(2.2). 
With the change of variables \(z_{i}=\frac{x_{i}}{x_{i+1}}\) or \(x_{i}=z_{i}\cdots z_{n-2}\), the HSSA can be written as \[\mathcal{T}^{(\{p_{i}\},m,q)} =\int_{0}^{1}dx_{n-2}\cdots\ \int_{0}^{x_{4}}dx_{3}\int_{0}^{x_{3}}dx_{2}ue^{-Kf}\] \[=\int_{0}^{1}dz_{n-2}\cdots\ \int_{0}^{1}dz_{3}\int_{0}^{1}dz_{2} \begin{vmatrix}z_{3}\cdots z_{n-2}&z_{2}z_{4}\cdots z_{n-2}&\cdots&z_{2}\cdots z _{n-3}\\ 0&z_{4}\cdots z_{n-2}&\cdots&\\ &&\ddots&\\ 0&0&\cdots&1\end{vmatrix}ue^{-Kf}\] \[=\left(\prod_{i=3}^{n-2}\int_{0}^{1}dz_{i}\ z_{i}^{i-2-N}\right) \int_{0}^{1}dz_{2}ue^{-Kf} \tag{3.13}\] where \[f\left(x_{i}\right) =-\underset{i<j}{\sum}\frac{k_{i}\cdot k_{j}}{K}\ln\left(x_{j}-x_{ i}\right)=-\underset{i<j}{\sum}\frac{k_{i}\cdot k_{j}}{K}\ln\left(z_{j}\cdots z_{n-2}- z_{i}\cdots z_{n-2}\right)\] \[=-\underset{i<j}{\sum}\frac{k_{i}\cdot k_{j}}{K}\left[\ln(z_{j} \cdots z_{n-2})+\ln\left(1-z_{i}\cdots z_{j-1}\right)\right],\ K=-k_{1}\cdot k _{2}, \tag{3.14}\] \[u\left(x_{i}\right) =\left(k^{T}\right)^{N-2m-q}\underbrace{\left(k^{L}\right)^{2m} }\left(k^{\prime L}\right)^{q}.\left(k^{\prime L}=\frac{\partial k^{L}}{ \partial x_{2}}\right) \tag{3.15}\] In Eq.(3.15), we have defined \[k=\underset{i\neq 2,n}{\sum}\frac{k_{i}}{x_{i}-x_{2}}=\underset{i\neq 2,n}{ \sum}\frac{k_{i}}{z_{i}\cdots z_{n-2}-z_{2}\cdots z_{n-2}}, \tag{3.16}\] and \(k_{\perp}=|k_{\perp}|\sum_{i=1}^{r}e^{T_{i}}\omega_{i}=|k_{\perp}|\,e^{T}\). The saddle points \((\tilde{z}_{2},\cdots,\tilde{z}_{n-2})\) are the solution of \[\frac{\partial f}{\partial z_{2}}=0,\;\cdots,\,\frac{\partial f}{\partial z_{n- 2}}=0. \tag{3.17}\] Note that Eq.(3.17) implies \[\tilde{k}^{L}=\frac{\tilde{k}\cdot k_{2}}{M}=\frac{k_{12}}{M}\left.\frac{ \partial f}{\partial x_{2}}\right|_{z_{i}=\tilde{z}_{i}}=\frac{k_{12}}{M}\left. \frac{\partial z_{j}}{\partial x_{2}}\frac{\partial f}{\partial z_{j}}\right|_ {z_{i}=\tilde{z}_{i}}=0\;,\;\left|\tilde{k}\right|=\left|\tilde{k}_{\perp} \right|. \tag{3.18}\] We also define \[f_{2}\equiv\frac{\partial f}{\partial z_{2}},\,f_{22}\equiv\frac{\partial^{2} f}{\partial z_{2}^{2}},\,\tilde{f}=f\left(\tilde{z}_{2},\cdots,\tilde{z}_{n-2} \right),\,\tilde{f}_{22}=\left.\frac{\partial^{2}f}{\partial z_{2}^{2}} \right|_{(\tilde{z}_{2},\cdots,\tilde{z}_{n-2})}. \tag{3.19}\] In view of the factor \(\left(k^{L}\right)^{2m}\) in Eq.(3.15) and Eq.(3.18), all up to \((2m)\)-order differentiations of \(u\) function in Eq.(3.15) at the saddle point vanish except [13] \[\frac{\partial^{2m}u}{\partial z_{2}^{2m}}\bigg{|}_{(\tilde{z}_{2 },\cdots,\tilde{z}_{n-2})} =\left(\frac{k_{12}}{M}\right)^{2m+q}\left(-\sum_{i\neq 2,n} \frac{k_{i}^{T}}{\tilde{x}_{i}-\tilde{x}_{2}}\right)^{N-2m-2q}\left(2m\right)! \left(\tilde{f}_{22}\right)^{q+2m}\] \[=\left(\frac{k_{12}}{M}\right)^{2m+q}\left(\tilde{k}^{T}\right)^{ N-2m-2q}\left(2m\right)!\left(\tilde{f}_{22}\right)^{q+2m}. 
\tag{3.20}\] Finally, with the saddle point, we can calculate the HSSA to be [13] \[\mathcal{T}^{(N,2m,2q)} =\left(\prod_{i=3}^{n-2}\int_{0}^{1}dz_{i}\;z_{i}^{i-2-N}\right) \int_{0}^{1}dz_{2}\left(\frac{\partial^{2m}\tilde{u}}{\partial z_{2}^{2m}} \frac{\left(z_{2}-\tilde{z}_{2}\right)^{2m}}{\left(2m\right)!}\right)e^{-Kf} \tag{3.21}\] \[\simeq\frac{1}{\left(2m\right)!}\frac{\partial^{2m}\tilde{u}}{ \partial z_{2}^{2m}}\left(\prod_{i=3}^{n-2}\tilde{z}_{i}^{i-2-N}\right)\int_{0 }^{1}dz_{2}\left(z_{2}-\tilde{z}_{2}\right)^{2m}e^{-Kf(z_{2})}\] (3.22) \[=\frac{2\sqrt{\pi}}{m!}\left(\prod_{i=3}^{n-2}\tilde{z}_{i}^{i-2- N}\right)\frac{e^{-K\tilde{f}}}{\left|\tilde{k}\right|^{2m+1}}\left.\frac{ \partial^{2m}u}{\partial z_{2}^{2m}}\right|_{z_{i}=\tilde{z}_{i}}\] (3.23) \[=2\sqrt{\pi}e^{-K\tilde{f}}\left|\tilde{k}\right|^{N-1}\left(\prod _{i=3}^{n-2}\tilde{z}_{i}^{i-2-N}\right)\frac{\left(2m\right)!}{m!}\left(\frac {-1}{2M}\right)^{2m+q}\left(\frac{2K\tilde{f}_{22}}{\left(\sum_{i\neq 2,n}\frac{k_{i}^{T}}{ \tilde{x}_{i}-\tilde{x}_{2}}\right)^{2}}\right)^{m+q}\] (3.24) \[=2\sqrt{\pi}e^{-K\tilde{f}}\left|\tilde{k}\right|^{N-1}\left(\prod _{i=3}^{n-2}\tilde{z}_{i}^{i-2-N}\right)\frac{\left(2m\right)!}{m!}\left(\frac {-1}{2M}\right)^{2m+q}\left(\frac{2K\tilde{f}_{22}}{\left(\sum_{i\neq 2,n}\frac{k_{i}^{T}}{ \tilde{x}_{i}-\tilde{x}_{2}}\right)^{2}}\right)^{m+q} \tag{3.25}\] where \(f\left(z_{2}\right)=f\left(z_{2},\tilde{z}_{3},\cdots,\tilde{z}_{n-2}\right)\). The ratios of \(n\)-point \(HSSA\) with \(r=1\) is \[\frac{\mathcal{T}^{(N,m,q)}}{\mathcal{T}^{(N,0,0)}} =\frac{\left(2m\right)!}{m!}\left(\frac{-1}{2M}\right)^{2m+q} \left(\frac{2K\tilde{f}_{22}}{\left(\sum_{i\neq 2,n}\frac{k_{i}^{T}}{\tilde{x}_{i}- \tilde{x}_{2}}\right)^{2}}\right)^{m+q} \tag{3.26}\] \[=\frac{\left(2m\right)!}{m!}\left(\frac{-1}{2M}\right)^{2m+q} \tag{3.27}\] where the second equality followed from the calculation of decoupling of ZNS in Eq.(2.11). This suggests the identity \[\frac{2K\tilde{f}_{22}}{\left(\sum_{i\neq 2,n}\frac{k_{i}^{T}}{\tilde{x}_{i}- \tilde{x}_{2}}\right)^{2}}=1. \tag{3.28}\] For the case of \(n=4\), one can easily solve the saddle point \(\tilde{z}_{2}=\sec^{2}\frac{\phi}{2}\) to verify the identity. We have also proved the identity for \(n=5\) by using maple numerically. Similar proof can be done by maple for the case of \(n=6\). ### The \(n\)-point HSSA with \(r=2\) Now we calculate the case of \(n\)-point HSSA with \(r=2\). We want to calculate \(n\)-point HSSA with \((n-1)\) tachyons and \(1\) high energy state \[\left(\alpha_{-1}^{T_{1}}\right)^{N+p_{1}}\left(\alpha_{-1}^{T_{2}}\right)^{p_{ 2}}\left(\alpha_{-1}^{L}\right)^{2m}\left(\alpha_{-2}^{L}\right)^{q}\left|0;k \right\rangle,\;\;p_{1}+p_{2}=-2(m+q). \tag{3.29}\] The ratios of \(n\)-point HSSA with \(r=2\) can be similarly calculated to be \[\frac{\mathcal{T}^{(p_{1},p_{2},m,q)}}{\mathcal{T}^{(N,0,0,0)}} =\frac{(2m)!}{m!}\left(\frac{-1}{2M}\right)^{2m+q}\frac{\left(2K \tilde{f}_{22}\right)^{m+q}}{\left(\sum_{i\neq 2,n}\frac{k_{i}^{T_{1}}}{\tilde{x}_{i}- \tilde{x}_{2}}\right)^{2m+2q+p_{2}}\left(\sum_{i\neq 2,n}\frac{k_{i}^{T_{2}}}{ \tilde{x}_{i}-\tilde{x}_{2}}\right)^{-p_{2}}}\] \[=\frac{(2m)!}{m!}\left(\frac{-1}{2M}\right)^{2m+q}\frac{\left( \frac{\sum_{i\neq 2,n}\frac{k_{i}^{T_{2}}}{\tilde{x}_{i}-\tilde{x}_{2}}}{ \sum_{i\neq 2,n}\frac{k_{i}^{T_{1}}}{\tilde{x}_{i}-\tilde{x}_{2}}}\right)^{p_{2}}}{ \left(\frac{\sum_{i\neq 2,n}\frac{k_{i}^{T_{1}}}{\tilde{x}_{i}-\tilde{x}_{2}}}{ \sqrt{2K\tilde{f}_{22}}}\right)^{2m+2q}}. 
\tag{3.30}\] On the other hand, the decoupling of ZNS calculated in Eq.(2.31) gives \[\frac{\mathcal{T}^{(p_{1},p_{2},m,q)}}{\mathcal{T}^{(N,0,0,0)}}=\frac{(2m)!}{m!}\left(\frac{-1}{2M}\right)^{2m+q}\omega_{1}^{p_{1}}\omega_{2}^{p_{2}}=\frac{ (2m)!}{m!}\left(\frac{-1}{2M}\right)^{2m+q}\frac{(\tan\theta_{1})^{p_{2}}}{( \cos\theta_{1})^{2m+2q}}. \tag{3.31}\] Eq.(3.30) and Eq.(3.31) can be identified for any \(p_{2}\), \(m\) and \(q\) if \[\left(\sum_{i\neq 2,n}\frac{k_{i}^{T_{1}}}{\tilde{x}_{i}-\tilde{x}_{2}}\right) =\sqrt{2K\tilde{f}_{22}}\cos\theta_{1},\;\left(\sum_{i\neq 2,n}\frac{k_{i}^{T _{2}}}{\tilde{x}_{i}-\tilde{x}_{2}}\right)=\sqrt{2K\tilde{f}_{22}}\sin\theta_{1}, \tag{3.32}\] which implies the identity \[\left(\sum_{i\neq 2,n}\frac{k_{i}^{T_{1}}}{\tilde{x}_{i}-\tilde{x}_{2}}\right)^{2 }+\left(\sum_{i\neq 2,n}\frac{k_{i}^{T_{2}}}{\tilde{x}_{i}-\tilde{x}_{2}} \right)^{2}=2K\tilde{f}_{22}. \tag{3.33}\] It is not surprising that Eq.(3.33) is a generalization of Eq.(3.28) to two transverse directions \(T_{1}\) and \(T_{2}\). ### The \(n\)-point HSSA with \(r\leq 24\) It is now easy to generalize Eq.(3.33) to any \(r\) (number of \(T_{i}\)) with \(r\leq 24\) \[\left(\sum_{i\neq 2,n}\frac{k_{i}^{T_{1}}}{\tilde{x}_{i}-\tilde{x}_{2}}\right)^{2 }+\left(\sum_{i\neq 2,n}\frac{k_{i}^{T_{2}}}{\tilde{x}_{i}-\tilde{x}_{2}} \right)^{2}+\cdots+\left(\sum_{i\neq 2,n}\frac{k_{i}^{T_{r}}}{\tilde{x}_{i}- \tilde{x}_{2}}\right)^{2}=2K\tilde{f}_{22}. \tag{3.34}\] By using Eq.(3.16) and Eq.(3.18), we see that the key identity Eq.(3.34) can be written as [13] \[\tilde{k}^{2}+2M\tilde{k}^{\prime L}=0. \tag{3.35}\] The ratios in Eq.(2.31) are thus proved by the saddle point method. ## IV Stringy scaling of Regge string scattering amplitudes Another important high-energy regime of 4-point SSA is the fixed momentum transfer regime which contains complementary information of the theory. That is in the kinematic regime \[s\rightarrow\infty,\qquad\sqrt{-t}=\text{fixed},\;\;\;(\text{but }\sqrt{-t}\neq\infty). \tag{4.1}\] In this regime, the number of high-energy SSA is much more numerous than that of the fixed angle regime. One of the reason is that in contrast to the identification \(e^{P}\simeq e^{L}\) in the hard scattering limit, \(e^{P}\)_does not_ approach to \(e^{L}\) in the Regge scattering limit. For example, at mass level \(M^{2}=4\) of open bosonic string, there are only 4 HSSA while there are 22 RSSA [28; 29]. On the other hand, in the Regge regime both the saddle-point method and the method of decoupling of zero-norm states adopted in the calculation of fixed angle regime do not apply. The complete leading order high-energy open string states in the Regge regime at each fixed mass level \(N=\sum_{n,m,l>0}np_{n}+mq_{m}+lr_{l}\) are \[|v_{n},q_{m},r_{l}\rangle=\prod_{n>0}(\alpha_{-n}^{T})^{v_{n}}\prod_{m>0}( \alpha_{-m}^{P})^{q_{m}}\prod_{l>0}(\alpha_{-l}^{L})^{r_{l}}|0,k\rangle. \tag{10}\] It turned out that the 4-pont RSSA of three tachyons and states in Eq.(10) are NOT proportional to each other, and the ratios are \(t\)-dependent functions. However, it was shown that for the RSSA \(A^{(N,2m,q)}\) with \(v_{1}=N-m-q\), \(r_{1}=2m\) and \(r_{2}=q\) and all others 0 in Eq.(10), one can extract the ratios of hard string scatterings in Eq.(11) from \(A^{(N,2m,q)}\)[30; 31; 32]. It is thus reasonable to expect that for the \(n\)-point (\(n\geq 5\)) RSSA with \(n-1\) tachyons and some subset of the high-energy states in Eq.(28), the RSSA show similar stringy scaling behavior as in Eq.(31) of HSSA. 
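The count quoted above for mass level \(M^{2}=4\) is easy to reproduce: the leading Regge states are arbitrary products of the three oscillator families \(\alpha_{-n}^{T}\), \(\alpha_{-m}^{P}\), \(\alpha_{-l}^{L}\) at a fixed level, so their number is the coefficient of \(x^{N}\) in \(\prod_{n\geq 1}(1-x^{n})^{-3}\). The short Python sketch below is ours and not part of the derivation; its only extra assumption is the open bosonic string relation \(M^{2}=2(N-1)\), so that \(M^{2}=4\) corresponds to level \(N=3\).

```python
def count_regge_states(level, families=3):
    """Coefficient of x**level in prod_{n>=1} (1 - x**n)**(-families):
    the number of states built from the oscillator families
    alpha_{-n}^T, alpha_{-m}^P, alpha_{-l}^L at the given level."""
    counts = [1] + [0] * level
    for n in range(1, level + 1):          # mode number n
        for _ in range(families):          # one factor 1/(1 - x**n) per family
            for k in range(n, level + 1):
                counts[k] += counts[k - n]
    return counts[level]

# assuming M^2 = 2(N - 1), the mass level M^2 = 4 is level N = 3
print(count_regge_states(3))   # -> 22, the number of RSSA quoted above at M^2 = 4
```

The rapid growth of this count with the level is one way to see why the Regge regime contains far more independent amplitudes than the fixed-angle regime.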
In this paper, we will consider a class of \(n\)-point (\(n\geq 5\)) RSSA with \(n-1\) tachyons and one high-energy state at mass level \(N\) \[\left|\left\{p_{i}\right\},0,0\right\rangle=\left(\alpha_{-1}^{T_{1}}\right)^ {N+p_{1}}\left(\alpha_{-1}^{T_{2}}\right)^{p_{2}}\cdots\left(\alpha_{-1}^{T_{ -}}\right)^{p_{r}}\left|0;k\right\rangle, \tag{11}\] which is obtained by setting \(m=q=0\) in Eq.(28). We will show that these RSSA show stringy scaling behavior for arbitrary \(n\) similar to that we obtained for the HSSA in Eq.(31). There are many different Regge regimes for the \(n\)-point (\(n\geq 5\)) RSSA. To specify the Regge regime, we first discuss the system of kinematics variables we will use. The standard kinematics variables commonly adopted for the \(n\)-point scatterings can be defined as following. One first defines the \((n-3)\)\(s\) variables \[s_{12}=-\left(k_{1}+k_{2}\right)^{2},s_{123}=-\left(k_{1}+k_{2}+k_{3}\right)^ {2},\cdots,s_{1,\cdots,n-2}=-\left(k_{1}+\cdots+k_{n-2}\right)^{2}, \tag{12}\] and then defines the \(\frac{(n-2)(n-3)}{2}\)\(t\) variables \[t_{23} =-\left(k_{2}+k_{3}\right)^{2},t_{24}=-\left(k_{2}+k_{4}\right)^{ 2},\cdots,t_{2,n-1}=-\left(k_{2}+k_{n-1}\right)^{2},\] \[t_{34} =-\left(k_{3}+k_{4}\right)^{2},\cdots,t_{3,n-1}=-\left(k_{3}+k_{ n-1}\right)^{2},\] \[\vdots\] \[t_{n-2,n-1} =-\left(k_{n-2}+k_{n-1}\right)^{2}, \tag{13}\] which amount to \(\frac{n(n-3)}{2}\) independent kinematics variables. For our purpose in the calculation of this paper, we will adopt another system of independent kinematics variables. We use the notation \(k_{ij}\equiv k_{i}\cdot k_{j}\) to define the following \(\frac{n(n-3)}{2}\) independent kinematics variables \[k_{12},k_{13},k_{14},\cdots,k_{1,n-2},\] \[k_{23},k_{24},k_{25},\cdots,k_{2,n-1},\] \[k_{34},k_{35},\cdots,k_{3,n-1},\] \[\vdots\] \[k_{n-3,n-2},k_{n-3,n-1},\] \[k_{n-2,n-1}. \tag{14}\] For later use, we also define \[k_{1,\cdots,i-1,i}=k_{1,\cdots,i-1}+\sum_{j=1}^{i-1}k_{ji}, \tag{15}\] which means, for example, \[k_{123}=k_{12}+k_{13}+k_{23},k_{1234}=k_{123}+k_{14}+k_{24}+k_{34},k_{12345}=k _{1234}+k_{15}+k_{25}+k_{35}+k_{45}. \tag{16}\] ### The \(5\)-point and \(6\)-point Regge stringy scaling Let's begin with the calculation of \(5\)-point RSSA with \(r=2\) in Eq.(4.3). The kinematics are \[k_{1} =\left(\sqrt{p^{2}+M_{1}^{2}},-p,0,0\right),\] \[k_{2} =\left(\sqrt{p^{2}+M_{2}^{2}},p,0,0\right),\] \[k_{3} =\left(-\sqrt{q_{3}^{2}+M_{3}^{2}},-q_{3}\cos\phi_{1}^{3},-q_{3} \sin\phi_{1}^{3},0\right),\] \[k_{4} =\left(-\sqrt{q_{4}^{2}+M_{4}^{2}},-q_{4}\cos\phi_{1}^{4},-q_{4} \sin\phi_{1}^{4}\cos\phi_{2}^{4},-q_{4}\sin\phi_{1}^{4}\sin\phi_{2}^{4}\right),\] \[k_{5} =\left(-\sqrt{q_{5}^{2}+M_{5}^{2}},-q_{5}\cos\phi_{1}^{5},-q_{5} \sin\phi_{1}^{5}\cos\phi_{2}^{5},-q_{5}\sin\phi_{1}^{5}\sin\phi_{2}^{5}\right). \tag{4.9}\] During the calculation, we will keep record of the notations used for each step so that eventually we can generalize the calculation to the case of \(n\)-point RSSA. 
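The bookkeeping in Eqs.(4.4)-(4.8) is mechanical, and a small sketch makes the counting explicit. The snippet below (ours; the symbol names are placeholders) enumerates the \(n(n-3)/2\) independent products \(k_{ij}\) of Eq.(4.6) and builds the cumulative combinations \(k_{1\cdots i}\) of Eq.(4.7), reproducing for instance \(k_{1234}=k_{123}+k_{14}+k_{24}+k_{34}\) of Eq.(4.8).

```python
import sympy as sp

def independent_kij(n):
    """Index pairs of the independent variables k_ij of Eq.(4.6):
    row i = 1 runs over j = 2..n-2, rows i = 2..n-2 over j = i+1..n-1."""
    pairs = [(1, j) for j in range(2, n - 1)]
    pairs += [(i, j) for i in range(2, n - 1) for j in range(i + 1, n)]
    return pairs

def cumulative_k(n):
    """k_{1...i} of Eq.(4.7), built recursively as sympy expressions."""
    k = {(i, j): sp.Symbol(f'k_{i}{j}') for i in range(1, n) for j in range(i + 1, n)}
    cum = {2: k[(1, 2)]}
    for i in range(3, n - 1):
        cum[i] = cum[i - 1] + sum(k[(j, i)] for j in range(1, i))
    return cum

for n in (5, 6, 7):
    assert len(independent_kij(n)) == n * (n - 3) // 2   # 5, 9 and 14 variables
print(cumulative_k(6)[4])   # k_12 + k_13 + k_14 + k_23 + k_24 + k_34, i.e. k_1234
```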
The amplitude of state \[\left(\alpha_{-1}^{T_{1}}\right)^{N+p_{1}}\left(\alpha_{-1}^{T_{2}}\right)^{p _{2}}\left|0,k\right\rangle,p_{1}+p_{2}=0 \tag{4.10}\] and \(4\) tachyon states can be written as \[A^{\{p_{1},p_{2}\},0,0} =\int_{0}^{1}dx_{3}\int_{0}^{x_{3}}dx_{2}\times x_{2}^{k_{12}}x_{ 3}^{k_{13}}\left(x_{3}-x_{2}\right)^{k_{23}}\left(1-x_{2}\right)^{k_{24}}\left( 1-x_{3}\right)^{k_{34}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{x_{3}-x_{2}}+\frac{k_{4}^{T_{1} }}{1-x_{2}}\right]^{N+p_{1}}\left[\underbrace{\frac{k_{3}^{T_{2}}}{x_{3}-x_{2} }}_{0}+\frac{k_{4}^{T_{2}}}{1-x_{2}}\right]^{p_{2}}. \tag{4.11}\] One can easily find that \(k_{3}^{T_{2}}=0\). After doing the change of variables \[x_{2}=z_{2}z_{3},x_{3}=z_{3}, \tag{4.12}\] we can rewrite the above \(5\)-point amplitude as following \[A^{\{p_{1},p_{2}\},0,0} =\int_{0}^{1}dz_{3}\int_{0}^{1}dz_{2}z_{2}^{k_{12}}z_{3}^{k_{123}+ 1}\left(1-z_{2}\right)^{k_{23}}\left(1-z_{2}z_{3}\right)^{k_{24}}\left(1-z_{3} \right)^{k_{34}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{z_{3}-z_{2}z_{3}}+\frac{k_{4}^{T_ {1}}}{1-z_{2}z_{3}}\right]^{N+p_{1}}\left[\frac{k_{4}^{T_{2}}}{1-z_{2}z_{3}} \right]^{p_{2}} \tag{4.13}\] where we have defined \(k_{123}=k_{12}+k_{23}+k_{13}\). Next, let's perform the binomial expansion on the bracket to obtain \[A^{\{p_{1},p_{2}\},0,0}=\sum_{J_{1}^{1}+J_{2}^{1}=N+p_{1}}\frac {(N+p_{1})!}{J_{1}^{11}J_{2}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}}\left( k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{4}^{T_{2}}\right)^{p_{2}}\] \[\times\int_{0}^{1}dz_{3}\int_{0}^{1}dz_{2}z_{2}^{k_{12}}z_{3}^{k_{ 123}+1-J_{1}^{1}}\left(1-z_{2}\right)^{k_{23}-J_{1}^{1}}\left(1-z_{2}z_{3} \right)^{k_{24}-J_{2}^{1}-p_{2}}\left(1-z_{3}\right)^{k_{34}}. \tag{4.14}\] For the next step, we expand the crossing term \(\left(1-z_{2}z_{3}\right)^{k_{24}-J_{2}^{1}-p_{2}}\) to obtain \[A^{\{p_{1},p_{2}\},0,0}=\sum_{J_{1}^{1}+J_{2}^{1}=N+p_{1}}\frac{( N+p_{1})!}{J_{1}^{11}J_{2}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}}\left(k_{4}^{T_ {1}}\right)^{J_{2}^{1}}\left(k_{4}^{T_{2}}\right)^{p_{2}}\] \[\times\sum_{m_{23}}\frac{\left(-k_{24}+p_{2}+J_{2}^{1}\right)_{m_{ 23}}}{m_{23}!}\int_{0}^{1}dz_{2}z_{2}^{k_{12}+m_{23}}\left(1-z_{2}\right)^{k_{ 23}-J_{1}^{1}}\int_{0}^{1}dz_{3}z_{3}^{k_{123}+1-J_{1}^{1}+m_{23}}\left(1-z_{3} \right)^{k_{34}} \tag{4.15}\] where the subscripts of \(m_{23}\) keep record of the subscripts \(z_{2}z_{3}\) in \((1-z_{2}z_{3})^{k_{24}-J_{2}^{1}-p_{2}}\). After the integration, the amplitude can be written as \[A^{\{p_{1},p_{2}\},0,0} =\sum_{J_{1}^{1}+J_{2}^{1}=N+p_{1}}\frac{\left(N+p_{1}\right)!}{J _{1}^{1}!J_{2}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}}\left(k_{4}^{T_{1}} \right)^{J_{2}^{1}}\left(k_{4}^{T_{2}}\right)^{p_{2}}\] \[\times\sum_{m_{23}}\frac{\left(-k_{24}+p_{2}+J_{2}^{1}\right)_{m_{ 23}}}{m_{23}!}\frac{\Gamma\left(k_{12}+1+m_{23}\right)\Gamma\left(k_{23}+1-J_{ 1}^{1}\right)}{\Gamma\left(k_{12}+k_{23}+2+m_{23}-J_{1}^{1}\right)}\] \[\times\frac{\Gamma\left(k_{123}+2+m_{23}-J_{1}^{1}\right)\Gamma \left(k_{34}+1\right)}{\Gamma\left(k_{123}+k_{34}+3+m_{23}-J_{1}^{1}\right)}. \tag{4.16}\] Now we choose to work on the Regge regime defined by \[k_{123}\sim s,k_{34}\sim s,k_{123}+k_{34}\sim t \tag{4.17}\] where \(s\rightarrow\infty\) and \(t=\) fixed. (we will use these notations to define a Regge regime for the rest of the paper) In this Regge regime, the amplitude can be approximated as \[A^{\{p_{1},p_{2}\},0,0} \sim\sum_{J_{1}^{1}+J_{2}^{1}=N+p_{1}}\frac{\left(N+p_{1}\right)! 
}{J_{1}^{1}!J_{2}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}}\left(k_{4}^{T_{1 }}\right)^{J_{2}^{1}}\left(k_{4}^{T_{2}}\right)^{p_{2}}\] \[\times\sum_{m_{23}}\frac{\left(-k_{24}+p_{2}+J_{2}^{1}\right)_{m_{ 23}}}{m_{23}!}\frac{\Gamma\left(k_{12}+1+m_{23}\right)\Gamma\left(k_{23}+1-J_{ 1}^{1}\right)}{\Gamma\left(k_{12}+k_{23}+2+m_{23}-J_{1}^{1}\right)}\] \[\times\frac{\left(k_{123}\right)^{m_{23}-J_{1}^{1}}\Gamma\left(k_ {123}+2\right)\Gamma\left(k_{34}+1\right)}{\left(k_{123}+k_{34}+3\right)_{m_{ 23}-J_{1}^{1}}\Gamma\left(k_{123}+k_{34}+3\right)}. \tag{4.18}\] The leading power of \(k_{123}\) occurs when \(J_{1}^{1}=0\) which means \(J_{2}^{1}=N+p_{1}\). Since \(p_{1}+p_{2}=0\), the leading term of the RSSA is \[A^{\{p_{1},p_{2}\},0,0} \sim\left(k_{4}^{T_{1}}\right)^{N+p_{1}}\left(k_{4}^{T_{2}}\right) ^{p_{2}}\sum_{m_{23}}\frac{\left(-k_{24}+N\right)_{m_{23}}}{m_{23}!}\frac{ \Gamma\left(k_{12}+1+m_{23}\right)\Gamma\left(k_{23}+1\right)}{\Gamma\left(k_ {12}+k_{23}+2+m_{23}\right)}\] \[\times\frac{\left(k_{123}\right)^{m_{23}}\Gamma\left(k_{123}+2 \right)\Gamma\left(k_{34}+1\right)}{\left(k_{123}+k_{34}+3\right)_{m_{23}} \Gamma\left(k_{123}+k_{34}+3\right)}. \tag{4.19}\] The ratio of \(A^{\{p_{1},p_{2}\},0,0}\) and \(A^{\{0,0\},0,0}\) can be easily calculated to be \[\frac{A^{\{p_{1},p_{2}\},0,0}}{A^{\{0,0\},0,0}} =\frac{\left(k_{4}^{T_{4}}\right)^{N+p_{1}}\left(k_{4}^{T_{2}} \right)^{p_{2}}}{\left(k_{4}^{T_{1}}\right)^{N}}=\left(k_{4}^{T_{1}}\right)^{p_ {1}}\left(k_{4}^{T_{2}}\right)^{p_{2}}=\left(-q_{4}\sin\phi_{1}^{4}\cos\phi_{2 }^{4}\right)^{p_{1}}\left(-q_{4}\sin\phi_{1}^{4}\sin\phi_{2}^{4}\right)^{p_{2}}\] \[=\left(\cos\theta_{1}\right)^{p_{1}}\left(\sin\theta_{1}\right)^{p _{2}}=\left(\omega_{1}\right)^{p_{1}}\left(\omega_{2}\right)^{p_{2}}, \tag{4.20}\] which is the same as Eq.(2.31) with \(m=q=0\) and \(r=2\). Let's now calculate the 6-point RSSA with \(r=3\) in Eq.(4.3). The kinematics are \[k_{1} =\left(\sqrt{p^{2}+M_{1}^{2}},-p,0,0,0\right),\] \[k_{2} =\left(\sqrt{p^{2}+M_{2}^{2}},p,0,0,0\right),\] \[k_{3} =\left(-\sqrt{q_{3}^{2}+M_{3}^{2}},-q_{3}\cos\phi_{1}^{3},-q_{3} \sin\phi_{1}^{3},0,0\right),\] \[k_{4} =\left(-\sqrt{q_{4}^{2}+M_{4}^{2}},-q_{4}\cos\phi_{1}^{4},-q_{4} \sin\phi_{1}^{4}\cos\phi_{2}^{4},-q_{4}\sin\phi_{1}^{4}\sin\phi_{2}^{4},0 \right),\] \[k_{5} =\left(-\sqrt{q_{5}^{2}+M_{5}^{2}},-q_{5}\cos\phi_{1}^{5},-q_{5} \sin\phi_{1}^{5}\cos\phi_{2}^{5},-q_{5}\sin\phi_{1}^{5}\sin\phi_{2}^{5}\cos \phi_{3}^{5},-q_{5}\sin\phi_{1}^{5}\sin\phi_{2}^{5}\sin\phi_{3}^{5}\right),\] \[k_{6} =\left(-\sqrt{q_{5}^{2}+M_{5}^{2}},-q_{6}\cos\phi_{1}^{6},-q_{6} \sin\phi_{1}^{6}\cos\phi_{2}^{6},-q_{6}\sin\phi_{1}^{6}\sin\phi_{2}^{6}\cos \phi_{3}^{6},-q_{6}\sin\phi_{1}^{6}\sin\phi_{2}^{6}\sin\phi_{3}^{6}\right). 
\tag{4.21}\] The amplitude of state \[\left(\alpha_{-1}^{T_{1}}\right)^{N+p_{1}}\left(\alpha_{-1}^{T_{2}}\right)^{p_{2} }\left(\alpha_{-1}^{T_{3}}\right)^{p_{3}}\left|0,k\right\rangle,p_{1}+p_{2}+p_{ 3}=0 \tag{4.22}\] and 5 tachyon states is \[A^{\{p_{1},p_{2},p_{3}\},0,0}\] \[=\int_{0}^{1}dx_{4}\int_{0}^{x_{4}}dx_{3}\int_{0}^{x_{3}}dx_{2}\ x_{2}^{k_{12}}x_{3}^{k_{13}}x_{4}^{k_{14}} \left(x_{3}-x_{2}\right)^{k_{23}}\left(x_{4}-x_{2}\right)^{k_{24}}\left(1-x_{2 }\right)^{k_{25}}\left(x_{4}-x_{3}\right)^{k_{34}}\left(1-x_{3}\right)^{k_{35}} \left(1-x_{4}\right)^{k_{45}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{x_{3}-x_{2}}+\frac{k_{4}^{T_{1} }}{x_{4}-x_{2}}+\underbrace{\frac{k_{5}^{T_{1}}}{1}-x_{2}}_{x_{5}}\right]^{N +p_{1}}\left[\frac{k_{3}^{T_{2}}}{\frac{x_{3}-x_{2}}{-0}}+\frac{k_{4}^{T_{2}} }{x_{4}-x_{2}}+\frac{k_{5}^{T_{2}}}{1-x_{2}}_{x}\right]^{p_{2}}\left[\frac{k_ {3}^{T_{3}}}{\underbrace{x_{3}-x_{2}}_{-0}}+\underbrace{\frac{k_{4}^{T_{3}}}{ x_{4}-x_{2}}}_{-0}+\frac{k_{5}^{T_{3}}}{1-x_{2}}\right]^{p_{3}}. \tag{4.23}\] Since \(k_{3}^{T_{2}}=k_{3}^{T_{3}}=k_{4}^{T_{3}}=0\), we can rewrite the amplitude as \[A^{\{p_{1},p_{2},p_{3}\},0,0} =\int_{0}^{1}dx_{4}\int_{0}^{x_{4}}dx_{3}\int_{0}^{x_{3}}dx_{2}\ x_ {2}^{k_{12}}x_{3}^{k_{13}}x_{4}^{k_{14}}\left(x_{3}-x_{2}\right)^{k_{23}} \left(x_{4}-x_{2}\right)^{k_{24}}\left(1-x_{2}\right)^{k_{25}}\left(x_{4}-x_{3 }\right)^{k_{34}}\left(1-x_{3}\right)^{k_{35}}\left(1-x_{4}\right)^{k_{45}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{x_{3}-x_{2}}+\frac{k_{4}^{T_{1} }}{x_{4}-x_{2}}+\frac{k_{5}^{T_{1}}}{1-x_{2}}_{x}\right]^{N+p_{1}}\left[\frac {k_{4}^{T_{2}}}{x_{4}-x_{2}}+\frac{k_{5}^{T_{2}}}{1-x_{2}}_{x}\right]^{p_{2}} \left[\frac{k_{5}^{T_{3}}}{1-x_{2}}_{x}\right]^{p_{3}}. \tag{4.24}\] We can do the following change of variables \[x_{i}=z_{i}\cdots z_{n-2}, \tag{4.25}\] or \[x_{2}=z_{2}z_{3}z_{4},x_{3}=z_{3}z_{4},x_{4}=z_{4} \tag{4.26}\] to obtain \[A^{\{p_{1},p_{2},p_{3}\},0,0} =\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_{0}^{1}dz_{2}\times z_ {2}^{k_{12}}z_{3}^{k_{123}+1}z_{4}^{k_{1234}+2}\left(1-z_{2}\right)^{k_{23}} \left(1-z_{3}\right)^{k_{34}}\left(1-z_{4}\right)^{k_{45}}\] \[\times\left(1-z_{2}z_{3}\right)^{k_{24}}\left(1-z_{2}z_{3}z_{4} \right)^{k_{25}}\left(1-z_{3}z_{4}\right)^{k_{35}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{z_{3}z_{4}-z_{2}z_{3}z_{4}}+ \frac{k_{4}^{T_{1}}}{z_{4}-z_{2}z_{3}z_{4}}+\frac{k_{5}^{T_{1}}}{1-z_{2}z_{3} z_{4}}\right]^{N+p_{1}}\left[\frac{k_{4}^{T_{2}}}{z_{4}-z_{2}z_{3}z_{4}}+ \frac{k_{5}^{T_{2}}}{1-z_{2}z_{3}z_{4}}\right]^{p_{2}}\left[\frac{k_{5}^{T_{3} }}{1-z_{2}z_{3}z_{4}}\right]^{p_{3}} \tag{4.27}\] where we have defined \[k_{123}=k_{12}+k_{13}+k_{23},k_{1234}=k_{12}+k_{13}+k_{14}+k_{23}+k_{24}+k_{34}. 
\tag{4.28}\] Next, let's perform the binomial expansion on the brackets to obtain \[A^{\{p_{1},p_{2},p_{3}\},0,0} =\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_{0}^{1}dz_{2}\times z_ {2}^{k_{12}}z_{3}^{k_{123}+1}z_{4}^{k_{1234}+2}\left(1-z_{2}\right)^{k_{23}} \left(1-z_{3}\right)^{k_{34}}\left(1-z_{4}\right)^{k_{45}}\] \[\times\left(1-z_{2}z_{3}\right)^{k_{24}}\left(1-z_{2}z_{3}z_{4} \right)^{k_{25}}\left(1-z_{3}z_{4}\right)^{k_{35}}\] \[\times\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{2}=N+p_{1}}^{N+p_{1}}\frac{ \left(N+p_{1}\right)!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!}\left(\frac{k_{3}^{T_{1}}}{ z_{3}z_{4}-z_{2}z_{3}z_{4}}\right)^{J_{1}^{1}}\left(\frac{k_{4}^{T_{1}}}{z_{4}-z_{2}z_{3}z_{4}} \right)^{J_{2}^{1}}\left(\frac{k_{5}^{T_{1}}}{1-z_{2}z_{3}z_{4}}\right)^{J_{3}^{ 1}}\] \[\times\sum_{J_{1}^{2}+J_{2}^{2}=p_{2}}^{p_{2}}\frac{p_{2}!}{J_{1}^ {2}!J_{2}^{2}!}\left(\frac{k_{4}^{T_{2}}}{z_{4}-z_{2}z_{3}z_{4}}\right)^{J_{1}^ {2}}\left(\frac{k_{5}^{T_{2}}}{1-z_{2}z_{3}z_{4}}\right)^{J_{2}^{2}}\left[ \frac{k_{5}^{T_{3}}}{1-z_{2}z_{3}z_{4}}\right]^{p_{3}} \tag{4.29}\] where \(J_{1}^{1}\),\(J_{2}^{1}\),\(J_{3}^{1}\),\(J_{1}^{2}\),\(J_{2}^{2}\) are non-negative integers with \(J_{1}^{1}+J_{2}^{1}=N+p_{1}\) and \(J_{1}^{2}+J_{2}^{2}=p_{2}\). We then rearrange the above equation \[A^{\{p_{1},p_{2},p_{3}\},0,0} =\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}=N+p_{1}}^{N+p_{1}}\frac{(N+p_ {1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}} \left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}}\right)^{J_{3}^{1}} \sum_{J_{1}^{2}+J_{2}^{2}=p_{2}}^{p_{2}!}\frac{p_{2}!}{J_{1}^{2}!J_{2}^{2}!} \left(k_{4}^{T_{2}}\right)^{J_{1}^{2}}\left(k_{5}^{T_{2}}\right)^{J_{2}^{2}} \left(k_{5}^{T_{3}}\right)^{p_{3}}\] \[\times\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_{0}^{1}dz_{2}\times z _{2}^{k_{12}}z_{3}^{k_{123}+1-J_{1}^{1}}z_{4}^{k_{1234}+2-J_{1}^{1}-\left(J_{2 }^{1}+J_{2}^{1}\right)}\left(1-z_{2}\right)^{k_{23}-J_{1}^{1}}\left(1-z_{3} \right)^{k_{34}}\left(1-z_{4}\right)^{k_{45}}\] \[\times\left(1-z_{2}z_{3}\right)^{k_{24}-\left(J_{2}^{1}+J_{1}^{2} \right)}\left(1-z_{2}z_{3}z_{4}\right)^{k_{25}-\left(J_{3}^{1}+J_{2}^{2}+p_{3} \right)}\left(1-z_{3}z_{4}\right)^{k_{35}}, \tag{4.30}\] and expand the crossing terms to obain \[A^{\{p_{1},p_{2},p_{3}\},0,0} =\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}=N+p_{1}}^{N+p_{1}}\frac{(N+p _{1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}} \left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}}\right)^{J_{3}^{1}} \sum_{J_{2}^{1}+J_{2}^{2}=p_{2}}^{p_{2}!}\frac{p_{2}!}{J_{1}^{2}!J_{2}^{2}!} \left(k_{4}^{T_{2}}\right)^{J_{1}^{2}}\left(k_{5}^{T_{2}}\right)^{J_{2}^{2}} \left(k_{5}^{T_{3}}\right)^{P_{3}}\] \[\times\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_{0}^{1}dz_{2}\times z _{2}^{k_{12}}z_{2}^{k_{123}+1-J_{1}^{1}}z_{4}^{k_{1234}+2-J_{1}^{1}-\left(J_{2 }^{1}+J_{2}^{1}\right)}\left(1-z_{2}\right)^{k_{23}-J_{1}^{1}}\left(1-z_{3} \right)^{k_{34}}\left(1-z_{4}\right)^{k_{45}}\] \[\times\sum_{m_{23}=0}\frac{\left[-k_{24}+\left(J_{2}^{1}+J_{1}^{2} \right)\right]_{m_{23}}}{m_{23}!}\left(z_{2}z_{3}\right)^{m_{23}}\sum_{m_{24}=0 }\frac{\left[-k_{25}+\left(J_{3}^{1}+J_{2}^{2}+p_{3}\right)\right]_{m_{24}}}{m _{24}!}\left(z_{2}z_{3}z_{4}\right)^{m_{24}}\] \[\times\sum_{m_{34}=0}\frac{\left[-k_{35}\right]_{m_{34}}}{m_{34}!} \left(z_{3}z_{4}\right)^{m_{34}} \tag{4.31}\] where, for example, the subscripts of \(m_{24}\) keep record of the first and the last subscripts of \((z_{2}z_{3}z_{4})\) etc.. 
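The shifts \(+1\) and \(+2\) in the exponents of \(z_{3}\) and \(z_{4}\) above are not put in by hand: they come from the Jacobian of the change of variables Eq.(4.26). A short symbolic sketch (ours, using sympy) recovers this measure factor, and the same function applied to the 7-point substitution of Eq.(4.42) gives the factor \(z_{3}z_{4}^{2}z_{5}^{3}\) appearing in Eq.(4.43).

```python
import sympy as sp

def measure_factor(n):
    """Jacobian determinant of x_i = z_i z_{i+1} ... z_{n-2} (Eq.(4.25)),
    i.e. the measure factor picked up when mapping the ordered x-integration
    region onto the unit cube in the z variables."""
    zs = sp.symbols(f'z2:{n-1}', positive=True)       # z_2, ..., z_{n-2}
    xs = sp.Matrix([sp.prod(zs[i:]) for i in range(len(zs))])
    return sp.factor(xs.jacobian(sp.Matrix(list(zs))).det())

print(measure_factor(6))   # z3*z4**2       -> the shifts k_123 + 1 and k_1234 + 2
print(measure_factor(7))   # z3*z4**2*z5**3 -> cf. the 7-point integrand of Eq.(4.43)
```

For general \(n\) the determinant is \(\prod_{i=1}^{n-4}(z_{i+2})^{i}\), which is the origin of the pattern \(z_{3}^{k_{123}+1}z_{4}^{k_{1234}+2}\cdots\) in the \(n\)-point reduction of Section V.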
We rearrange the above equation again \[A^{\{p_{1},p_{2},p_{3}\},0,0} =\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}=N+p_{1}}^{N+p_{1}}\frac{(N+p_ {1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}} \left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}}\right)^{J_{3}^{1}} \sum_{J_{2}^{1}+J_{2}^{2}=p_{2}}^{N+p_{1}}\left(k_{4}^{T_{2}}\right)^{J_{1}^{2 }}\left(k_{5}^{T_{2}}\right)^{J_{2}^{2}}\left(k_{5}^{T_{3}}\right)^{P_{3}}\] \[\times\sum_{m_{23}=0}\frac{\left[-k_{24}+\left(J_{2}^{1}+J_{1}^{2} \right)\right]_{m_{23}}}{m_{23}!}\sum_{m_{24}=0}\frac{\left[-k_{25}+\left(J_{3} ^{1}+J_{2}^{2}+p_{3}\right)\right]_{m_{24}}}{m_{24}!}\sum_{m_{34}=0}\frac{ \left[-k_{35}\right]_{m_{34}}}{m_{34}!}\] \[\times\int_{0}^{1}dz_{2}z_{2}^{k_{12}+m_{23}+m_{24}}\left(1-z_{2} \right)^{k_{23}-J_{1}^{1}}\] \[\times\int_{0}^{1}dz_{3}z_{3}^{k_{123}+1-J_{1}^{1}+m_{23}+m_{24}+m _{34}}\left(1-z_{3}\right)^{k_{34}}\] \[\times\int_{0}^{1}dz_{4}z_{4}^{k_{1234}+2-J_{1}^{1}-\left(J_{2}^{1 }+J_{2}^{2}\right)+m_{24}+m_{34}}\left(1-z_{4}\right)^{k_{45}}, \tag{4.32}\] and perform the integration to obtain \[A^{\{p_{1},p_{2},p_{3}\},0,0} =\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}=N+p_{1}}^{N+p_{1}}\frac{(N+p_ {1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}} \left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}}\right)^{J_{3}^{1}} \sum_{J_{2}^{1}+J_{2}^{2}=p_{2}}^{p_{2}!}\frac{p_{2}!}{J_{1}^{2}!J_{2}^{2}!} \left(k_{4}^{T_{2}}\right)^{J_{1}^{2}}\left(k_{5}^{T_{2}}\right)^{J_{2}^{2}} \left(k_{5}^{T_{3}}\right)^{P_{3}}\] \[\times\sum_{m_{23}=0}\frac{\left[-k_{24}+\left(J_{2}^{1}+J_{1}^{2} \right)\right]_{m_{23}}}{m_{23}!}\sum_{m_{24}=0}\frac{\left[-k_{25}+\left(J_{3} ^{1}+J_{2}^{2}+p_{3}\right)\right]_{m_{24}}}{m_{24}!}\sum_{m_{34}=0}\frac{\left[-k _{35}\right]_{m_{34}}}{m_{34}!}\] \[\times\frac{\Gamma\left(k_{12}+1+m_{23}+m_{24}\right)\Gamma \left(k_{23}+1-J_{1}^{1}\right)}{\Gamma\left(k_{12}+k_{23}+2-J_{1}^{1}+m_{23} +m_{24}\right)}\] \[\times\frac{\Gamma\left(k_{123}+2-J_{1}^{1}+m_{23}+m_{24}+m_ Now we choose to work on the Regge regime defined by \[k_{1234}\sim s,k_{1234}+k_{23}\sim t. \tag{4.34}\] In this Regge regime, the amplitude can be approximated as \[A^{\{p_{1},p_{2},p_{3}\},0,0} \sim\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}=N+p_{1}}^{N+p_{1}}\frac{(N +p_{1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}} \left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}}\right)^{J_{3}^{1}} \sum_{J_{1}^{2}+J_{2}^{2}=p_{2}}^{p_{2}!}\frac{p_{2}!}{J_{1}^{2}!J_{2}^{2}!} \left(k_{4}^{T_{2}}\right)^{J_{1}^{2}}\left(k_{5}^{T_{2}}\right)^{J_{2}^{2}} \left(k_{5}^{T_{3}}\right)^{p_{3}}\] \[\times\sum_{m_{23}=0}\frac{\left[-k_{24}+\left(J_{2}^{1}+J_{1}^{2 }\right)\right]_{m_{23}}}{m_{23}!}\sum_{m_{24}=0}\frac{\left[-k_{25}+\left(J_ {3}^{1}+J_{2}^{2}+p_{3}\right)\right]_{m_{24}}}{m_{24}!}\sum_{m_{34}=0}\frac{ \left[-k_{35}\right]_{m_{34}}}{m_{34}!}\] \[\times\frac{\Gamma\left(k_{12}+1+m_{23}+m_{24}\right)\Gamma \left(k_{23}+1-J_{1}^{1}\right)}{\Gamma\left(k_{12}+k_{23}+2-J_{1}^{1}+m_{23}+ m_{24}\right)}\] \[\times\frac{\Gamma\left(k_{123}+2-J_{1}^{1}+m_{23}+m_{24}+m_{34} \right)\Gamma\left(k_{34}+1\right)}{\Gamma\left(k_{123}+k_{34}+3-J_{1}^{1}+m_{ 23}+m_{24}+m_{34}\right)}\] \[\times\frac{\left(k_{1234}\right)^{-\left(J_{1}^{1}+J_{2}^{2}+J_{ 1}^{2}\right)+m_{24}+m_{34}}}{\left(k_{1234}+k_{23}+4\right)_{-\left(J_{1}^{1} +J_{2}^{1}+J_{1}^{2}\right)+m_{24}+m_{34}}}\frac{\Gamma\left(k_{1234}+3 \right)\Gamma\left(k_{45}+1\right)}{\Gamma\left(k_{1234}+k_{23}+4\right)}. 
\tag{4.35}\] We can now take \(J_{1}^{1}=J_{2}^{1}=J_{1}^{2}=0\) to extract the leading order term in \(k_{1234}\). This implies \(J_{3}^{1}=N+p_{1}\) and \(J_{2}^{2}=p_{2}\) which give \[A^{\{p_{1},p_{2},p_{3}\},0,0} \sim\left(k_{5}^{T_{1}}\right)^{N+p_{1}}\left(k_{5}^{T_{2}}\right) ^{P_{2}}\left(k_{5}^{T_{3}}\right)^{p_{3}}\] \[\times\sum_{m_{23}=0}\frac{\left[-k_{24}\right]_{m_{23}}}{m_{23}! }\sum_{m_{24}=0}\frac{\left[-k_{25}+N\right]_{m_{24}}}{m_{24}!}\sum_{m_{34}=0} \frac{\left[-k_{35}\right]_{m_{34}}}{m_{34}!}\] \[\times\frac{\Gamma\left(k_{12}+1+m_{23}+m_{24}\right)\Gamma \left(k_{23}+1\right)}{\Gamma\left(k_{12}+k_{23}+2+m_{23}+m_{24}\right)}\] \[\times\frac{\Gamma\left(k_{123}+2+m_{23}+m_{24}+m_{34}\right) \Gamma\left(k_{34}+1\right)}{\Gamma\left(k_{123}+k_{34}+3+m_{23}+m_{24}+m_{34} \right)}\] \[\times\frac{\left(k_{1234}\right)^{m_{24}+m_{34}}}{\left(k_{1234}+ k_{23}+4\right)_{m_{24}+m_{34}}}\frac{\Gamma\left(k_{1234}+3\right)\Gamma \left(k_{45}+1\right)}{\Gamma\left(k_{1234}+k_{23}+4\right)}. \tag{4.36}\] Finally, the ratios of the 7-point RSSA can be easily calculated to be \[\frac{A^{\{p_{1},p_{2},p_{3}\},0,0}}{A^{\{0,0,0\},0,0,0}} =\left(k_{5}^{T_{1}}\right)^{p_{1}}\left(k_{5}^{T_{3}}\right)^{p_ {2}}\left(k_{5}^{T_{3}}\right)^{p_{3}}\] \[=\left(\cos\phi_{2}^{5}\right)^{p_{1}}\left(\sin\phi_{2}^{5}\cos \phi_{3}^{5}\right)^{p_{2}}\left(\sin\phi_{2}^{5}\sin\phi_{3}^{5}\right)^{p_{3}}\] \[=\left(\cos\theta_{1}\right)^{p_{1}}\left(\sin\theta_{1}\cos\theta _{2}\right)^{p_{2}}\left(\sin\theta_{1}\sin\theta_{2}\right)^{p_{3}}\] \[=\left(\omega_{1}\right)^{p_{1}}\left(\omega_{2}\right)^{p_{2}} \left(\omega_{3}\right)^{p_{3}}, \tag{4.37}\] which is the same as Eq.(2.31) with \(m=q=0\) and \(r=3\). ### The \(7\)-point Regge stringy scaling In this section we calculate the \(7\)-point RSSA with \(r=4\) in Eq.(4.3). The kinematics are \[k_{1} =\left(\sqrt{p^{2}+M_{1}^{2}},-p,0,0,0,0\right),\] \[k_{2} =\left(\sqrt{p^{2}+M_{2}^{2}},p,0,0,0,0\right),\] \[k_{3} =\left(-\sqrt{q_{3}^{2}+M_{3}^{2}},-q_{3}\cos\phi_{1}^{3},-q_{3} \sin\phi_{1}^{3},0,0,0\right),\] \[k_{4} =\left(-\sqrt{q_{4}^{2}+M_{4}^{2}},-q_{4}\cos\phi_{1}^{4},-q_{4} \sin\phi_{1}^{4}\cos\phi_{2}^{4},-q_{4}\sin\phi_{1}^{4}\sin\phi_{2}^{4},0,0 \right),\] \[k_{5} =\left(-\sqrt{q_{5}^{2}+M_{5}^{2}},-q_{5}\cos\phi_{1}^{5},-q_{5} \sin\phi_{1}^{5}\cos\phi_{2}^{5},-q_{5}\sin\phi_{1}^{5}\sin\phi_{2}^{5}\cos \phi_{3}^{5},-q_{5}\sin\phi_{1}^{5}\sin\phi_{2}^{5}\sin\phi_{3}^{5},0\right),\] \[k_{6} =\left(\begin{array}{c}-\sqrt{q_{5}^{2}+M_{5}^{2}},-q_{6}\cos \phi_{1}^{6},-q_{6}\sin\phi_{1}^{6}\cos\phi_{2}^{6},-q_{6}\sin\phi_{1}^{6}\sin \phi_{2}^{6}\cos\phi_{3}^{6},\\ -q_{6}\sin\phi_{1}^{6}\sin\phi_{2}^{6}\sin\phi_{3}^{6}\cos\phi_{4}^{6},-q_{6} \sin\phi_{1}^{6}\sin\phi_{2}^{6}\sin\phi_{3}^{6}\sin\phi_{4}^{6}\\ \end{array}\right),\] \[k_{7} =\left(\begin{array}{c}-\sqrt{q_{5}^{2}+M_{5}^{2}},-q_{7}\cos \phi_{1}^{7},-q_{7}\sin\phi_{1}^{7}\cos\phi_{2}^{7},-q_{7}\sin\phi_{1}^{7}\sin \phi_{2}^{7}\cos\phi_{3}^{7},\\ -q_{7}\sin\phi_{1}^{7}\sin\phi_{2}^{7}\sin\phi_{3}^{7}\cos\phi_{4}^{7},-q_{7} \sin\phi_{1}^{7}\sin\phi_{2}^{7}\sin\phi_{3}^{7}\sin\phi_{4}^{7}\end{array} \right). \tag{4.38}\] The tensor state we are going to consider is \[\left(\alpha_{-1}^{T_{1}}\right)^{N+p_{1}}\left(\alpha_{-1}^{T_{2}}\right)^{p _{2}}\left(\alpha_{-1}^{T_{3}}\right)^{p_{3}}\left(\alpha_{-1}^{T_{4}}\right)^ {p_{4}}\left|0,k\right\rangle,p_{1}+p_{2}+p_{3}+p_{4}=0. 
\tag{4.39}\] We will use the notation defined in Eq.(4.6), so we have the following \(\frac{7(7-3)}{2}=14\) independent kinematics variables \[k_{12},k_{13},k_{14},k_{15},k_{23},k_{24},k_{25},k_{26},k_{34},k_{35},k_{36},k _{45},k_{46},k_{56}. \tag{4.40}\] The RSSA of one tensor state and \(6\) tachyon states is \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0} =\int_{0}^{1}dx_{5}\int_{0}^{x_{5}}dx_{4}\int_{0}^{x_{4}}dx_{3} \int_{0}^{x_{3}}dx_{2}\cdot x_{2}^{k_{12}}x_{3}^{k_{13}}x_{4}^{k_{14}}x_{5}^{k_ {15}}\left(x_{3}-x_{2}\right)^{k_{23}}\left(x_{4}-x_{2}\right)^{k_{24}}\left(x_ {5}-x_{2}\right)^{k_{25}}\left(1-x_{2}\right)^{k_{26}}\] \[\times\left(x_{4}-x_{3}\right)^{k_{34}}\left(x_{5}-x_{3}\right)^{k _{35}}\left(1-x_{3}\right)^{k_{36}}\left(x_{5}-x_{4}\right)^{k_{45}}\left(1-x_ {4}\right)^{k_{46}}\left(1-x_{5}\right)^{k_{56}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{x_{3}-x_{2}}+\frac{k_{4}^{T_{1}} }{x_{4}-x_{2}}+\frac{k_{5}^{T_{1}}}{x_{5}-x_{2}}+\frac{k_{6}^{T_{1}}}{1-x_{2}} \right]^{N+p_{1}}\] \[\times\left[\frac{k_{4}^{T_{2}}}{x_{4}-x_{2}}+\frac{k_{5}^{T_{2}} }{x_{5}-x_{2}}+\frac{k_{6}^{T_{2}}}{1-x_{2}}\right]^{p_{2}}\] \[\times\left[\frac{k_{5}^{T_{3}}}{x_{5}-x_{2}}+\frac{k_{6}^{T_{3}} }{1-x_{2}}\right]^{p_{3}}\left[\frac{k_{6}^{T_{4}}}{1-x_{2}}\right]^{p_{4}}. \tag{4.41}\] Note that \(k_{3}^{T_{2}},k_{3}^{T_{3}},k_{3}^{T_{4}},k_{4}^{T_{3}},k_{4}^{T_{4}},k_{5}^{T_ {4}}\) are all zeros. Let us make the following change of variables \[x_{2}=z_{2}z_{3}z_{4}z_{5},x_{3}=z_{3}z_{4}z_{5},x_{4}=z_{4}z_{5},x_{5}=z_{5}, x_{6}=z_{6}=1 \tag{4.42}\] to obtain \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0} =\int_{0}^{1}dz_{5}\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_{0}^{1} dz_{2}\cdot\left(z_{3}z_{4}^{2}z_{5}^{3}\right)\left(z_{2}z_{3}z_{4}z_{5}\right)^{k_{12 }}\left(z_{3}z_{4}z_{5}\right)^{k_{13}}\left(z_{4}z_{5}\right)^{k_{14}}z_{5}^{ k_{15}}\] \[\times\left(z_{3}z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}\right)^{k_{23}} \left(z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}\right)^{k_{24}}\left(z_{5}-z_{2}z_{3}z_{4 }z_{5}\right)^{k_{25}}\left(1-z_{2}z_{3}z_{4}z_{5}\right)^{k_{26}}\] \[\times\left(z_{4}z_{5}-z_{3}z_{4}z_{5}\right)^{k_{34}}\left(z_{5}- z_{3}z_{4}z_{5}\right)^{k_{35}}\left(1-z_{3}z_{4}z_{5}\right)^{k_{36}}\left(z_{5}- z_{4}z_{5}\right)^{k_{45}}\left(1-z_{4}z_{5}\right)^{k_{46}}\left(1-z_{5} \right)^{k_{56}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{z_{3}z_{4}z_{5}-z_{2}z_{3}z_{4}z _{5}}+\frac{k_{4}^{T_{1}}}{z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}}+\frac{k_{5}^{T_{1 }}}{z_{5}-z_{2}z_{3}z_{4}z_{5}}+\frac{k_{6}^{T_{1}}}{1-z_{2}z_{3}z_{4}z_{5}} \right]^{N+p_{1}}\] \[\times\left[\frac{k_{4}^{T_{2}}}{z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}} +\frac{k_{5}^{T_{2}}}{z_{5}-z_{2}z_{3}z_{4}z_{5}}+\frac{k_{6}^{T_{2}}}{1-z_{2 }z_{3}z_{4}z_{5}}\right]^{p_{2}}\] \[\times\left[\frac{k_{5}^{T_{3}}}{z_{5}-z_{2}z_{3}z_{4}z_{5}}+ \frac{k_{6}^{T_{3}}}{1-z_{2}z_{3}z_{4}z_{5}}\right]^{p_{3}}\left[\frac{k_{6}^{ T_{4}}}{1-z_{2}z_{3}z_{4}z_{5}}\right]^{p_{4}}. \tag{4.43}\] We use the definition in Eq.(4.7) to obtain \[k_{123}=k_{12}+k_{13}+k_{23},k_{1234}=k_{123}+k_{14}+k_{24}+k_{34},k_{12345}=k_ {1234}+k_{15}+k_{25}+k_{35}+k_{45}. 
\tag{4.44}\] After some calculation, we get \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0} =\int_{0}^{1}dz_{5}\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_{0}^{1 }dz_{2}\cdot z_{2}^{k_{12}}z_{3}^{k_{123}+1}z_{4}^{k_{1234}+2}z_{5}^{k_{12345}+3}\] \[\times\left(1-z_{2}\right)^{k_{23}}\left(1-z_{2}z_{3}\right)^{k_{ 24}}\left(1-z_{2}z_{3}z_{4}\right)^{k_{25}}\left(1-z_{2}z_{3}z_{4}z_{5}\right) ^{k_{26}}\] \[\times\left(1-z_{3}\right)^{k_{34}}\left(1-z_{3}z_{4}\right)^{k_{ 35}}\left(1-z_{3}z_{4}z_{5}\right)^{k_{36}}\] \[\times\left(1-z_{4}\right)^{k_{45}}\left(1-z_{4}z_{5}\right)^{k_{ 46}}\] \[\times\left(1-z_{5}\right)^{k_{56}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{z_{3}z_{4}z_{5}-z_{2}z_{3}z_{4}z_{ 5}}+\frac{k_{4}^{T_{1}}}{z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}}+\frac{k_{5}^{T_{1}}}{ z_{5}-z_{2}z_{3}z_{4}z_{5}}+\frac{k_{6}^{T_{1}}}{1-z_{2}z_{3}z_{4}z_{5}} \right]^{N+p_{1}}\] \[\times\left[\frac{k_{4}^{T_{2}}}{z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}} +\frac{k_{5}^{T_{2}}}{z_{5}-z_{2}z_{3}z_{4}z_{5}}+\frac{k_{6}^{T_{2}}}{1-z_{2} z_{3}z_{4}z_{5}}\right]^{p_{2}}\] \[\times\left[\frac{k_{5}^{T_{3}}}{z_{5}-z_{2}z_{3}z_{4}z_{5}}+ \frac{k_{6}^{T_{3}}}{1-z_{2}z_{3}z_{4}z_{5}}\right]^{p_{3}}\left[\frac{k_{6}^{ T_{4}}}{1-z_{2}z_{3}z_{4}z_{5}}\right]^{p_{4}}. \tag{4.45}\] The next step is to expand the brackets to get \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0}\] \[=\int_{0}^{1}dz_{5}\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_{0}^{1} dz_{2}\cdot z_{2}^{k_{12}}z_{3}^{k_{123}+1}z_{4}^{k_{1234}+2}z_{5}^{k_{12345}+3}\] \[\times\left(1-z_{2}\right)^{k_{23}}\left(1-z_{2}z_{3}\right)^{k_{2 4}}\left(1-z_{2}z_{3}z_{4}\right)^{k_{25}}\left(1-z_{2}z_{3}z_{4}z_{5}\right)^ {k_{26}}\] \[\times\left(1-z_{3}\right)^{k_{34}}\left(1-z_{3}z_{4}\right)^{k_{3 5}}\left(1-z_{3}z_{4}z_{5}\right)^{k_{36}}\] \[\times\left(1-z_{4}\right)^{k_{45}}\left(1-z_{4}z_{5}\right)^{k_{46}}\] \[\times\left(1-z_{5}\right)^{k_{86}}\] \[\times\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}+J_{4}^{1}=N+p_{1}} \frac{(N+p_{1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!J_{4}^{1}!}\left(\frac{k_{3}^ {T_{1}}}{z_{3}z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}}\right)^{J_{1}^{1}}\left(\frac{k _{4}^{T_{1}}}{z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}}\right)^{J_{2}^{1}}\left(\frac{k _{5}^{T_{1}}}{z_{5}-z_{2}z_{3}z_{4}z_{5}}\right)^{J_{3}^{1}}\left(\frac{k_{6}^ {T_{1}}}{1-z_{2}z_{3}z_{4}z_{5}}\right)^{J_{4}^{1}}\] \[\times\sum_{J_{1}^{2}+J_{2}^{2}+J_{3}^{2}=p_{2}}^{p_{2}}\frac{P_{ 2}!}{J_{1}^{2}!J_{2}^{2}!J_{3}^{2}!}\left(\frac{k_{4}^{T_{2}}}{z_{4}z_{5}-z_{2 }z_{3}z_{4}z_{5}}\right)^{J_{1}^{2}}\left(\frac{k_{5}^{T_{2}}}{z_{5}-z_{2}z_{3} z_{4}z_{5}}\right)^{J_{2}^{2}}\left(\frac{k_{6}^{T_{2}}}{1-z_{2}z_{3}z_{4}z_{5}} \right)^{J_{3}^{2}}\] \[\times\sum_{J_{1}^{3}+J_{2}^{2}=p_{3}}^{p_{2}}\frac{P_{3}!}{J_{1} ^{3}!J_{1}^{3}!}\left(\frac{k_{5}^{T_{3}}}{z_{5}-z_{2}z_{3}z_{4}z_{5}}\right)^ {J_{1}^{3}}\left(\frac{k_{6}^{T_{3}}}{1-z_{2}z_{3}z_{4}z_{5}}\right)^{J_{2}^{3}}\] \[\times\left(\frac{k_{6}^{T_{4}}}{1-z_{2}z_{3}z_{4}z_{5}}\right)^{ p_{4}} \tag{4.46}\] where \(J_{1}^{1},J_{2}^{1},J_{3}^{1},J_{4}^{1},J_{2}^{1},J_{2}^{2},J_{3}^{2},J_{3}^{3},J_{1}^{3},J_{2}^{3}\) are non-negative integers with \(J_{1}^{1}+J_{2}^{1}+J_{3}^{1}+J_{4}^{1}=N+p_{1}\), \(J_{1}^{2}+J_{2}^{2}+J_{3}^{2}=p_{2}\) and \(J_{1}^{3}+J_{2}^{3}=p_{3}\). 
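All of the manipulations that follow (rearranging, expanding the crossing terms, and integrating) are exact; the Regge regime only enters at Eqs.(4.52)-(4.53), through the same replacement already used in Eqs.(4.18) and (4.35): every Pochhammer factor whose argument grows like \(s\) is approximated as \(\Gamma(X+a+j)/\Gamma(X+a)=(X+a)_{j}\simeq X^{j}\) for \(X\to\infty\) at fixed \(j\). A minimal numerical sketch (ours; the values of \(a\) and \(j\) are arbitrary) shows the \(1/X\) falloff of the dropped corrections.

```python
import math

def relative_error(X, a, j):
    """(X + a)_j / X**j - 1, where (X + a)_j = Gamma(X + a + j) / Gamma(X + a)
    is the factor that the Regge limit replaces by X**j."""
    log_poch = math.lgamma(X + a + j) - math.lgamma(X + a)
    return math.expm1(log_poch - j * math.log(X))

for X in (1e2, 1e4, 1e6, 1e8):
    print(f"X = {X:.0e}:  (X+a)_j / X^j - 1 = {relative_error(X, a=4.0, j=3):.2e}")
```

The deviation scales like \(1/X\sim 1/s\), so discarding it is exactly what isolates the leading Regge behaviour at fixed \(t\).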
Let us rearrange the above equation as \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0} =\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}+J_{4}^{1}=N+p_{1}}^{N+p_{1}} \frac{(N+p_{1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!J_{4}!}\left(k_{3}^{T_{1}} \right)^{J_{1}^{1}}\left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}} \right)^{J_{3}^{1}}\left(k_{6}^{T_{1}}\right)^{J_{4}^{1}}\] \[\times\sum_{J_{1}^{2}+J_{2}^{2}+J_{3}^{2}=p_{2}}^{p_{2}}\frac{P_{ 2}!}{J_{1}^{2}!J_{2}^{2}!J_{3}^{2}!}\left(k_{4}^{T_{2}}\right)^{J_{1}^{2}} \left(k_{5}^{T_{2}}\right)^{J_{2}^{2}}\left(k_{6}^{T_{2}}\right)^{J_{3}^{2}}\] \[\times\sum_{J_{1}^{3}+J_{2}^{2}=p_{3}}^{p_{2}}\frac{P_{3}!}{J_{1} ^{3}!J_{2}^{1}!}\left(k_{5}^{T_{3}}\right)^{J_{1}^{3}}\left(k_{6}^{T_{3}} \right)^{J_{2}^{3}}\left(k_{6}^{T_{4}}\right)^{p_{4}}.\] \[\times\int_{0}^{1}dz_{5}\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_ {0}^{1}dz_{2}\cdot z_{2}^{k_{12}}z_{3}^{k_{123}+1}z_{4}^{k_{1234}+2}z_{5}^{k_{12 345}+3}\] \[\times\left(1-z_{2}\right)^{k_{23}}\left(1-z_{2}z_{3}\right)^{k_{2 4}}\left(1-z_{2}z_{3}z_{4}\right)^{k_{25}}\left(1-z_{2}z_{3}z_{4}z_{5}\right)^{ k_{26}}\] \[\times\left(1-z_{3}\right)^{k_{34}}\left(1-z_{3}z_{4}\right)^{k_{3 5}}\left(1-z_{3}z_{4}z_{5}\right)^{k_{36}}\] \[\times\left(1-z_{4}\right)^{k_{45}}\left(1-z_{4}z_{5}\right)^{k_{ 46}}\] \[\times\left(1-z_{5}\right)^{k_{56}}\] \[\times\left(z_{3}z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}\right)^{-\left(J _{1}^{1}\right)}\left(z_{4}z_{5}-z_{2}z_{3}z_{4}z_{5}\right)^{-\left(J_{2}^{1}+J _{2}^{2}\right)}\] \[\times\left(z_{5}-z_{2}z_{3}z_{4}z_{5}\right)^{-\left(J_{4}^{1}+J _{2}^{2}+J_{3}^{1}\right)}\left(1-z_{2}z_{3}z_{4}z_{5}\right)^{-\left(J_{4}^{1}+J _{3}^{2}+J_{2}^{3}+p_{4}\right)}, \tag{4.47}\] which means \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0} =\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}+J_{4}^{1}=N+p_{1}}^{N+p_{1}} \frac{(N+p_{1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!J_{4}^{1}!}\left(k_{3}^{T_{1}} \right)^{J_{1}^{1}}\left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}} \right)^{J_{3}^{1}}\left(k_{6}^{T_{1}}\right)^{J_{4}^{1}}\] \[\times\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{2}=p_{2}}^{p_{2}}\frac{P_{2 }!}{J_{1}^{2}!J_{2}^{2}!J_{3}^{2}!}\left(k_{4}^{T_{2}}\right)^{J_{1}^{2}}\left( k_{5}^{T_{2}}\right)^{J_{2}^{2}}\left(k_{6}^{T_{2}}\right)^{J_{3}^{2}}\] \[\times\sum_{J_{1}^{1}+J_{2}^{1}=p_{3}}^{p_{2}}\frac{P_{3}!}{J_{1} ^{3}!J_{2}^{3}!}\left(k_{5}^{T_{3}}\right)^{J_{1}^{3}}\left(k_{6}^{T_{3}} \right)^{J_{2}^{3}}\left(k_{6}^{T_{4}}\right)^{p_{4}}.\] \[\times\int_{0}^{1}dz_{5}\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_ {0}^{1}dz_{2}\cdot z_{2}^{k_{12}}z_{3}^{k_{123}+1-\left(J_{1}^{1}\right)}z_{4} ^{k_{1234}+2-\left(J_{1}^{1}+J_{2}^{1}+J_{1}^{2}\right)}z_{5}^{k_{12345}+3- \left(J_{1}^{1}+J_{2}^{1}+J_{1}^{2}+J_{3}^{1}+J_{2}^{2}+J_{1}^{3}\right)}\] \[\times\left(1-z_{2}\right)^{k_{23}-J_{1}^{1}}\left(1-z_{2}z_{3} \right)^{k_{24}-\left(J_{2}^{1}+J_{1}^{2}\right)}\left(1-z_{2}z_{3}z_{4} \right)^{k_{25}-\left(J_{3}^{1}+J_{2}^{2}+J_{1}^{3}\right)}\left(1-z_{2}z_{3} z_{4}z_{5}\right)^{k_{26}-\left(J_{4}^{1}+J_{3}^{2}+J_{2}^{3}+p_{4}\right)}\] \[\times\left(1-z_{3}\right)^{k_{34}}\left(1-z_{3}z_{4}\right)^{k_{3 5}}\left(1-z_{3}z_{4}z_{5}\right)^{k_{36}}\] \[\times\left(1-z_{4}\right)^{k_{45}}\left(1-z_{4}z_{5}\right)^{k_{ 46}}\] \[\times\left(1-z_{5}\right)^{k_{56}}. 
\tag{4.48}\] Then we expand the crossing terms to get \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0}\] \[=\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{2}+J_{4}^{1}=N+p_{1}}^{N+p_{1}} \frac{(N+p_{1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!J_{4}^{1}!}\left(k_{3}^{T_{1}} \right)^{J_{1}^{1}}\left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}} \right)^{J_{3}^{1}}\left(k_{6}^{T_{1}}\right)^{J_{4}^{1}}\] \[\times\sum_{J_{1}^{2}+J_{2}^{2}+J_{3}^{2}=p_{2}}^{p_{2}}\frac{P_{2 }!}{J_{1}^{2}!J_{2}^{2}!J_{3}^{2}!}\left(k_{4}^{T_{2}}\right)^{J_{1}^{2}}\left( k_{5}^{T_{2}}\right)^{J_{2}^{2}}\left(k_{6}^{T_{2}}\right)^{J_{3}^{2}}\] \[\times\sum_{J_{1}^{3}+J_{2}^{3}=p_{3}}^{p_{2}}\frac{P_{3}!}{J_{1} ^{3}!J_{2}^{3}!}\left(k_{5}^{T_{3}}\right)^{J_{1}^{3}}\left(k_{6}^{T_{3}} \right)^{J_{2}^{3}}\left(k_{6}^{T_{4}}\right)^{p_{4}}.\] \[\times\int_{0}^{1}dz_{5}\int_{0}^{1}dz_{4}\int_{0}^{1}dz_{3}\int_ {0}^{1}dz_{2}\cdot z_{2}^{k_{12}}z_{3}^{k_{123}+1-\left(J_{1}^{1}\right)}z_{4} ^{k_{1234}+2-\left(J_{1}^{1}+J_{2}^{1}+J_{1}^{2}\right)}z_{5}^{k_{12345}+3- \left(J_{1}^{1}+J_{2}^{1}+J_{1}^{2}+J_{3}^{1}+J_{2}^{3}+J_{1}^{3}\right)}\] \[\times(1-z_{2})^{k_{23}-J_{1}^{1}}\left(1-z_{3}\right)^{k_{34}} \left(1-z_{4}\right)^{k_{45}}\left(1-z_{5}\right)^{k_{56}}\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}+\left(J_{2}^{1}+J_{1}^{2} \right)\right]_{m_{23}}}{m_{23}!}\left(z_{2}z_{3}\right)^{m_{23}}\sum_{m_{24}} \frac{\left[-k_{25}+\left(J_{3}^{1}+J_{2}^{2}+J_{1}^{3}\right)\right]_{m_{24}}} {m_{24}!}\left(z_{2}z_{3}z_{4}\right)^{m_{24}}\] \[\times\sum_{m_{25}}\frac{\left[-k_{26}+\left(J_{4}^{1}+J_{3}^{2}+J_ {2}^{3}+p_{4}\right)\right]_{m_{25}}}{m_{25}!}\left(z_{2}z_{3}z_{4}z_{5}\right)^ {m_{25}}\] \[\times\sum_{m_{34}}\frac{\left[-k_{35}\right]_{m_{34}}}{m_{34}!} \left(z_{3}z_{4}\right)^{m_{34}}\sum_{m_{25}}\frac{\left[-k_{36}\right]_{m_{34}}} {m_{35}!}\left(z_{3}z_{4}z_{5}\right)^{m_{35}}\sum_{m_{45}}\frac{\left[-k_{46} \right]_{m_{34}}}{m_{45}!}\left(z_{4}z_{5}\right)^{m_{45}}, \tag{4.49}\] which gives \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0}\] \[=\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}+J_{4}^{1}=N+p_{1}}^{N+p_{1}} \frac{(N+p_{1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!J_{4}^{1}!}\left(k_{3}^{T_{1}} \right)^{J_{1}^{1}}\left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}} \right)^{J_{3}^{1}}\left(k_{6}^{T_{1}}\right)^{J_{4}^{1}}\] \[\times\sum_{J_{1}^{2}+J_{2}^{2}+J_{3}^{2}=p_{2}}^{p_{2}}\frac{P_{2 }!}{J_{1}^{2}!J_{2}^{2}!J_{3}^{2}!}\left(k_{4}^{T_{2}}\right)^{J_{1}^{2}}\left( k_{5}^{T_{2}}\right)^{J_{2}^{2}}\left(k_{6}^{T_{2}}\right)^{J_{5}^{2}}\] \[\times\sum_{J_{3}^{1}+J_{2}^{2}=p_{3}}^{p_{2}}\frac{P_{3}!}{J_{3} ^{3}!J_{3}^{3}!}\left(k_{5}^{T_{3}}\right)^{J_{1}^{3}}\left(k_{6}^{T_{3}} \right)^{J_{2}^{3}}\left(k_{6}^{T_{4}}\right)^{p_{4}}\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}+\left(J_{1}^{1}+J_{1}^{2} \right)\right]_{m_{23}}}{m_{23}!}\sum_{m_{24}}\frac{\left[-k_{25}+\left(J_{3} ^{1}+J_{2}^{2}+J_{1}^{3}\right)\right]_{m_{24}}}{m_{24}!}\sum_{m_{25}}\frac{ \left[-k_{26}+\left(J_{4}^{1}+J_{3}^{2}+J_{2}^{3}+p_{4}\right)\right]_{m_{25}} }{m_{25}!}\] \[\times\sum_{m_{34}}\frac{\left[-k_{35}\right]_{m_{34}}}{m_{34}!} \sum_{m_{35}}\frac{\left[-k_{36}\right]_{m_{35}}}{m_{35}!}\sum_{m_{45}}\frac{ \left[-k_{46}\right]_{m_{45}}}{m_{45}!}\] \[\times\int_{0}^{1}dz_{2}\cdot z_{2}^{k_{12}+m_{23}+m_{24}+m_{25}} \left(1-z_{2}\right)^{k_{23}-J_{1}^{1}}\] \[\times\int_{0}^{1}dz_{3}\cdot z_{3}^{k_{123}+1-\left(J_{1}^{1} \right)+m_{23}+m_{24}+m_{25}+m_{34}+m_{35}}\left(1-z_{3}\right)^{k_{34}}\] \[\times\int_{0}^{1}dz_{4}\cdot z_{4}^{k_{1234}+2-\left(J_{1}^{1}+J_ 
{2}^{1}+J_{1}^{2}\right)+m_{24}+m_{25}+m_{34}+m_{35}+m_{45}}\left(1-z_{4}\right) ^{k_{45}}\] \[\times\int_{0}^{1}dz_{5}z_{5}^{k_{12345}+3-\left(J_{1}^{1}+J_{2} ^{1}+J_{1}^{1}+J_{3}^{1}+J_{2}^{2}+J_{1}^{3}\right)+m_{25}+m_{35}+m_{45}}\left( 1-z_{5}\right)^{k_{56}}. \tag{4.50}\] After integration, we obtain \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0}\] \[=\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}+J_{4}^{1}=N+p_{1}}^{N+p_{1}} \frac{(N+p_{1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!J_{4}^{1}!}\left(k_{3}^{T_{1}} \right)^{J_{1}^{1}}\left(k_{4}^{T_{1}}\right)^{J_{2}^{1}}\left(k_{5}^{T_{1}} \right)^{J_{3}^{1}}\left(k_{6}^{T_{1}}\right)^{J_{4}^{1}}\] \[\times\sum_{J_{1}^{2}+J_{2}^{2}+J_{3}^{2}=p_{2}}^{p_{2}}\frac{P_{2 }!}{J_{1}^{2}!J_{2}^{2}!J_{3}^{2}!}\left(k_{4}^{T_{2}}\right)^{J_{1}^{2}}\left( k_{5}^{T_{2}}\right)^{J_{2}^{2}}\left(k_{6}^{T_{2}}\right)^{J_{3}^{2}}\] \[\times\sum_{J_{3}^{1}+J_{2}^{2}=p_{3}}^{p_{2}}\frac{P_{3}!}{J_{1} ^{3}!J_{2}^{3}!}\left(k_{5}^{T_{3}}\right)^{J_{1}^{3}}\left(k_{6}^{T_{3}} \right)^{J_{2}^{3}}\left(k_{6}^{T_{1}}\right)^{p_{4}}\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}+\left(J_{2}^{1}+J_{1}^{2} \right)\right]_{m_{23}}}{m_{23}!}\sum_{m_{24}}\frac{\left[-k_{25}+\left(J_{3} ^{1}+J_{2}^{2}+J_{1}^{3}\right)\right]_{m_{24}}}{m_{24}!}\sum_{m_{25}}\frac{ \left[-k_{26}+\left(J_{4}^{1}+J_{3}^{2}+J_{2}^{3}+p_{4}\right)\right]_{m_{25}}}{ m_{25}!}\] \[\times\sum_{m_{34}}\frac{\left[-k_{35}\right]_{m_{34}}}{m_{34}!} \sum_{m_{35}}\frac{\left[-k_{36}\right]_{m_{35}}}{m_{35}!}\sum_{m_{45}}\frac{ \left[-k_{46}\right]_{m_{45}}}{m_{45}!}\] \[\times\frac{\Gamma\left(k_{12}+1+m_{23}+m_{24}+m_{25}\right)\Gamma \left(k_{23}+1-J_{1}^{1}\right)}{\Gamma\left(k_{12}+k_{23}+2+m_{23}+m_{24}+m_{ 25}\right)}\] \[\times\frac{\Gamma\left(k_{123}+2-J_{1}^{1}+m_{23}+m_{24}+m_{25}+m_ {34}+m_{35}\right)\Gamma\left(k_{34}+1\right)}{\Gamma\left(k_{123}+k_{34}+3-J_{1}^ {1}+m_{23}+m_{24}+m_{25}+m_{34}+m_{35}\right)}\] \[\times\frac{\Gamma\left(k_{1234}+3-\left(J_{1}^{1}+J_{2}^{1}+J_{1} ^{2}\right)+m_{24}+m_{25}+m_{34}+m_{35}+m_{45}\right)\Gamma\left(k_{45}+1 \right)}{\Gamma\left(k_{1234}+k_{45}+4-\left(J_{1}^{1}+J_{2}^{1}+J_{1}^{2}\right) +m_{24}+m_{25}+m_{34}+m_{35}+m_{45}\right)}\] \[\times\frac{\Gamma\left(k_{12345}+4-\left(J_{1}^{1}+J_{2}^{1}+J_{1} ^{3}+J_{2}^{2}+J_{1}^{3}\right)+m_{25}+m_{35}+m_{45}\right)\Gamma\left(k_{56}+1 \right)}{\Gamma\left(k_{12345}+k_{56}+5-\left(J_{1}^{1}+J_{2}^{1}+J_{2}^{1}+J_{1} ^{3}+J_{2}^{2}+J_{1}^{3}\right)+m_{25}+m_{35}+m_{45}\right)}. \tag{4.51}\] Now we choose to work on the Regge regime defined by \[k_{12345}\sim s,k_{12345}+k_{56}\sim t. 
\tag{4.52}\] In this regime, the RSSA can be approximated as \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0}\] \[\sim\sum_{J_{1}^{1}+J_{2}^{1}+J_{3}^{1}+J_{4}^{1}=N+p_{1}}^{N+p_{ 1}}\frac{(N+p_{1})!}{J_{1}^{1}!J_{2}^{1}!J_{3}^{1}!J_{4}^{1}!}\left(k_{3}^{T_{1 }}\right)^{J_{1}^{1}}\left(k_{4}^{T_{1}}\right)^{J_{2}^{2}}\left(k_{5}^{T_{1}} \right)^{J_{3}^{1}}\left(k_{6}^{T_{1}}\right)^{J_{4}^{1}}\] \[\times\sum_{J_{1}^{2}+J_{2}^{2}+J_{3}^{2}=p_{2}}^{p_{2}}\frac{P_{2 }!}{J_{1}^{2}!J_{2}^{2}!J_{3}^{2}}\left(k_{4}^{T_{2}}\right)^{J_{2}^{2}}\left( k_{5}^{T_{2}}\right)^{J_{3}^{2}}\left(k_{6}^{T_{2}}\right)^{J_{3}^{2}}\] \[\times\sum_{J_{1}^{3}+J_{2}^{2}=p_{3}}^{p_{3}}\frac{P_{3}!}{J_{1} ^{3}!J_{2}^{3}!}\left(k_{5}^{T_{3}}\right)^{J_{1}^{3}}\left(k_{6}^{T_{3}} \right)^{J_{2}^{2}}\left(k_{6}^{T_{4}}\right)^{p_{4}}\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}+\left(J_{2}^{1}+J_{1}^{2} \right)\right]_{m_{23}}}{m_{23}!}\sum_{m_{24}}\frac{\left[-k_{25}+\left(J_{3} ^{1}+J_{2}^{2}+J_{1}^{3}\right)\right]_{m_{24}}}{m_{24}!}\sum_{m_{25}}\frac{ \left[-k_{26}+\left(J_{4}^{1}+J_{3}^{2}+J_{2}^{3}+p_{4}\right)\right]_{m_{25}} }{m_{25}!}\] \[\times\sum_{m_{34}}\frac{\left[-k_{35}\right]_{m_{34}}}{m_{34}!} \sum_{m_{35}}\frac{\left[-k_{36}\right]_{m_{35}}}{m_{35}!}\sum_{m_{45}}\frac{ \left[-k_{46}\right]_{m_{45}}}{m_{45}!}\] \[\times\frac{\Gamma\left(k_{12}+1+m_{23}+m_{24}+m_{25}\right) \Gamma\left(k_{23}+1-J_{1}^{1}\right)}{\Gamma\left(k_{12}+k_{23}+2-J_{1}^{1}+ m_{23}+m_{24}+m_{25}\right)}\] \[\times\frac{\Gamma\left(k_{123}+2-J_{1}^{1}+m_{23}+m_{24}+m_{25}+ m_{34}+m_{35}\right)\Gamma\left(k_{34}+1\right)}{\Gamma\left(k_{123}+k_{34}+3-J_{1}^{1 }+m_{23}+m_{24}+m_{25}+m_{34}+m_{35}\right)}\] \[\times\frac{\Gamma\left(k_{1234}+3-\left(J_{1}^{1}+J_{2}^{1}+J_{ 2}^{2}\right)+m_{24}+m_{25}+m_{34}+m_{35}+m_{45}\right)\Gamma\left(k_{45}+1 \right)}{\Gamma\left(k_{1234}+k_{45}+4-\left(J_{1}^{1}+J_{2}^{1}+J_{1}^{2}+J_{ 1}^{2}\right)+m_{24}+m_{25}+m_{34}+m_{35}+m_{45}\right)}\] \[\times\frac{\left(k_{12345}\right)^{-\left(J_{1}^{1}+J_{2}^{1}+J_{ 2}^{2}+J_{1}^{1}+J_{2}^{2}+J_{1}^{3}\right)+m_{25}+m_{35}+m_{45}}}{(k_{12345}+ k_{56}+5)-\left(J_{1}^{1}+J_{2}^{1}+J_{1}^{2}+J_{3}^{1}+J_{2}^{2}+J_{1}^{3} \right)+m_{25}+m_{35}+m_{45}}\frac{\Gamma\left(k_{12345}+4\right)\Gamma\left(k_ {56}+1\right)}{\Gamma\left(k_{12345}+k_{56}+5\right)}. \tag{4.53}\] To get the leading order in \(k_{12345}\), we take \[J_{1}^{1}=J_{2}^{1}=J_{3}^{1}=J_{1}^{2}=J_{2}^{2}=J_{1}^{3}=0,\] which implies \[J_{4}^{1}=N+p_{1},J_{3}^{2}=p_{2},J_{2}^{3}=p_{3}. 
\tag{4.54}\] With \(p_{1}+p_{2}+p_{3}+p_{4}=0\), the leading term is \[A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0}\] \[\sim\left(k_{6}^{T_{1}}\right)^{N+p_{1}}\left(k_{6}^{T_{2}} \right)^{p_{2}}\left(k_{6}^{T_{3}}\right)^{p_{3}}\left(k_{6}^{T_{4}}\right)^{p_{4}}\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}\right]_{m_{23}}}{m_{23}!} \sum_{m_{24}}\frac{\left[-k_{25}\right]_{m_{24}}}{m_{24}!}\sum_{m_{25}}\frac{ \left[-k_{26}+N\right]_{m_{25}}}{m_{25}!}\sum_{m_{34}}\frac{\left[-k_{35}\right]_ {m_{34}}}{m_{34}!}\sum_{m_{35}}\frac{\left[-k_{36}\right]_{m_{25}}}{m_{35}!} \sum_{m_{45}}\frac{\left[-k_{46}\right]_{m_{45}}}{m_{45}!}\] \[\times\frac{\Gamma\left(k_{12}+1+m_{23}+m_{24}+m_{25}\right) \Gamma\left(k_{23}+1\right)}{\Gamma\left(k_{12}+k_{23}+2+m_{23}+m_{24}+m_{25} \right)}\] \[\times\frac{\Gamma\left(k_{123}+2+m_{23}+m_{24}+m_{25}+m_{34}+m_{35} \right)\Gamma\left(k_{34}+1\right)}{\Gamma\left(k_{123}+k_{34}+3+m_{23}+m_{24}+m_{25 }+m_{34}+m_{35}\right)}\] \[\times\frac{\Gamma\left(k_{1234}+3+m_{24}+m_{25}+m_{34}+m_{35}+m_{45 }\right)\Gamma\left(k_{45}+1\right)}{\Gamma\left(k_{1234}+k_{45}+4+m_{24}+m_{25 }+m_{34}+m_{35}+m_{45}\right)}\] \[\times\frac{\left(k_{12345}\right)^{m_{25}+m_{35}+m_{45}}}{\left(k_{1 2345}+k_{56}+5\right)_{m_{25}+m_{35}+m_{45}}}\frac{\Gamma\left(k_{12345}+4 \right)\Gamma\left(k_{56}+1\right)}{\Gamma\left(k_{12345}+k_{56}+5\right)}. \tag{4.55}\] So the ratios of the 7-point RSSA is \[\frac{A^{\{p_{1},p_{2},p_{3},p_{4}\},0,0}}{A^{\{0,0,0,0\},0,0,0}} =\left(k_{6}^{T_{1}}\right)^{p_{1}}\left(k_{6}^{T_{2}}\right)^{p_{ 2}}\left(k_{6}^{T_{3}}\right)^{p_{3}}\left(k_{6}^{T_{4}}\right)^{p_{4}}\] \[=\left(\cos\phi_{2}^{5}\right)^{p_{1}}\left(\sin\phi_{2}^{6}\cos \phi_{3}^{6}\right)^{p_{2}}\left(\sin\phi_{2}^{6}\sin\phi_{3}^{6}\cos\phi_{4}^ {6}\right)^{p_{3}}\left(\sin\phi_{2}^{6}\sin\phi_{3}^{6}\sin\phi_{4}^{6}\right) ^{p_{4}}\] \[=\left(\cos\theta_{1}\right)^{p_{1}}\left(\sin\theta_{1}\cos \theta_{2}\right)^{p_{2}}\left(\sin\theta_{1}\sin\theta_{2}\cos\theta_{3} \right)^{p_{3}}\left(\sin\theta_{1}\sin\theta_{2}\sin\theta_{3}\right)^{p_{4}}\] \[=\left(\omega_{1}\right)^{p_{1}}\left(\omega_{2}\right)^{p_{2}} \left(\omega_{3}\right)^{p_{3}}\left(\omega_{4}\right)^{p_{4}}, \tag{4.56}\] which is the same as Eq.(2.31) with \(m=q=0\) and \(r=4\). ## V The \(n\)-point Regge stringy scaling In this section, we generalize the previous calculations to the case of \(n\)-point RSSA. We first define the 26-dimensional momenta in the CM frame to be \[k_{1} =\left(\sqrt{p^{2}+M_{1}^{2}},-p,0^{r}\right),\] \[k_{2} =\left(\sqrt{p^{2}+M_{2}^{2}},p,0^{r}\right),\] \[\vdots\] \[k_{j} =\left(-\sqrt{q_{j}^{2}+M_{j}^{2}},-q_{j}\Omega_{1}^{j},-q_{j} \Omega_{2}^{j},\cdots,-q_{j}\Omega_{r}^{j},-q_{j}\Omega_{r+1}^{j}\right) \tag{5.1}\] where \(j=3,4,\cdots,n\), and \[\Omega_{i}^{j}=\cos\phi_{i}^{j}\prod_{\sigma=1}^{i-1}\sin\phi_{ \sigma}^{j}\text{ with }\phi_{j-1}^{j}=0,\text{ }\phi_{i>r}^{j}=0\text{ and }r\leq\min\left\{n-3,24\right\} \tag{5.2}\] are the solid angles in the \((j-2)\)-dimensional spherical space with \(\sum_{i=1}^{j-2}\left(\Omega_{i}^{j}\right)^{2}=1\). In Eq.(5.1), \(0^{r}\) denotes the \(r\)-dimensional null vector. 
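The angular factors \(\Omega_{i}^{j}\) of Eq.(5.2) are ordinary spherical coordinates, and the convention \(\phi_{j-1}^{j}=0\) makes the last cosine collapse to one, so that the spatial components of each \(k_{j}\) in Eq.(5.1) square-sum to \(q_{j}^{2}\) and every leg is on shell. A small sympy sketch (ours; it checks the untruncated case with all components present, before imposing \(\phi_{i>r}^{j}=0\)):

```python
import sympy as sp

def omegas(j):
    """Omega_i^j of Eq.(5.2) for leg k_j: cos(phi_i) times the product of the
    earlier sines, with the convention phi_{j-1}^j = 0."""
    phis = list(sp.symbols(f'phi1:{j-1}')) + [sp.Integer(0)]   # phi_1,...,phi_{j-2}, and phi_{j-1} = 0
    out, sines = [], sp.Integer(1)
    for i in range(j - 1):
        out.append(sp.cos(phis[i]) * sines)
        sines *= sp.sin(phis[i])
    return out

for j in (3, 4, 5, 6):
    print(j, sp.simplify(sum(om**2 for om in omegas(j))))   # -> 1 for every j
```

This reproduces the explicit parametrizations used for the 5-, 6- and 7-point kinematics in Eqs.(4.9), (4.21) and (4.38).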
The amplitude of one tensor state \[\left(\alpha_{-1}^{T_{1}}\right)^{N+p_{1}}\left(\alpha_{-1}^{T_{ 2}}\right)^{p_{2}}\cdots\left(\alpha_{-1}^{T_{r}}\right)^{p_{r}}\left|0,k \right\rangle,p_{1}+p_{2}+\cdots+p_{r}=0 \tag{5.3}\] and \(n-1\) tachyon states is \[A^{\{p_{1},p_{2},\cdots,p_{r}\},0,0}\] \[=\int_{0}^{1}dx_{n-2}\int_{0}^{x_{n-2}}dx_{n-3}\cdots\int_{0}^{x_ {4}}dx_{3}\int_{0}^{x_{3}}dx_{2}\times\prod_{0\leq i<j\leq n-1}\left(x_{j}-x_{ i}\right)^{k_{ij}}\prod_{\sigma=1}^{r}\left[\sum_{j=\sigma+2}^{n-1}\left(\frac{k_{j} ^{T_{\sigma}}}{x_{j}-x_{2}}\right)\right]^{\mathcal{P}_{\sigma}} \tag{5.4}\] where we have defined \[\mathcal{P}_{1}=N+p_{1},\mathcal{P}_{\sigma\neq 1}=p_{\sigma}. \tag{5.5}\] Now, let's explicitly write down the second product part of Eq.(5.4) as \[A^{\{p_{1},p_{2},\cdots,p_{r}\},0,0}\] \[=\int_{0}^{1}dx_{n-2}\int_{0}^{x_{n-2}}dx_{n-3}\cdots\int_{0}^{x_{4 }}dx_{3}\int_{0}^{x_{3}}dx_{2}\times\prod_{0\leq i<j\leq n-1}\left(x_{j}-x_{i} \right)^{k_{ij}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{x_{3}-x_{2}}+\frac{k_{4}^{T_{1} }}{x_{4}-x_{2}}+\frac{k_{5}^{T_{1}}}{x_{5}-x_{2}}\cdots+\frac{k_{n-1}^{T_{1}}} {1-x_{2}}\right]^{\mathcal{P}_{1}}\] \[\times\left[\frac{k_{4}^{T_{2}}}{x_{4}-x_{2}}+\frac{k_{5}^{T_{2} }}{x_{5}-x_{2}}+\cdots+\frac{k_{n-1}^{T_{2}}}{1-x_{2}}\right]^{\mathcal{P}_{2}}\] \[\vdots\] \[\times\left[\frac{k_{r+2}^{T_{r}}}{x_{r+2}-x_{2}}+\cdots+\frac{k_ {n-1}^{T_{r}}}{1-x_{2}}\right]^{\mathcal{P}_{r}}. \tag{5.6}\] For convience, from now on we add trivial terms with \(\mathcal{P}_{\sigma}=0\)\((r+1\leq\sigma\leq n-3)\) to the amplitude and obtain \[A^{\{p_{1},p_{2},\cdots,p_{r}\},0,0}\] \[=\int_{0}^{1}dx_{n-2}\int_{0}^{x_{n-2}}dx_{n-3}\cdots\int_{0}^{x_ {4}}dx_{3}\int_{0}^{x_{3}}dx_{2}\times\prod_{0\leq i<j\leq n-1}\left(x_{j}-x_{ i}\right)^{k_{ij}}\] \[\times\left[\frac{k_{3}^{T_{1}}}{x_{3}-x_{2}}+\frac{k_{4}^{T_{1} }}{x_{4}-x_{2}}+\frac{k_{5}^{T_{1}}}{x_{5}-x_{2}}+\frac{k_{5}^{T_{1}}}{x_{6}- x_{2}}+\cdots+\frac{k_{n-1}^{T_{1}}}{1-x_{2}}\right]^{\mathcal{P}_{1}}\] \[\times\left[\frac{k_{4}^{T_{2}}}{x_{4}-x_{2}}+\frac{k_{5}^{T_{2} }}{x_{5}-x_{2}}+\frac{k_{6}^{T_{2}}}{x_{6}-x_{2}}+\cdots+\frac{k_{n-1}^{T_{2}}} {1-x_{2}}\right]^{\mathcal{P}_{2}}\] \[\vdots\] \[\times\left[\frac{k_{r+2}^{T_{r}}}{x_{r+2}-x_{2}}+\frac{k_{r+3}^{ T_{r}}}{x_{r+3}-x_{2}}+\cdots+\frac{k_{n-1}^{T_{r}}}{1-x_{2}}\right]^{\mathcal{P}_{r}}\] \[\times\left[\frac{k_{r+3}^{T_{r+1}}}{x_{r+3}-x_{2}}+\cdots+\frac{ k_{n-1}^{T_{r+1}}}{1-x_{2}}\right]^{\mathcal{P}_{r+1}}\] \[\vdots\] \[\times\left[\frac{k_{n-1}^{T_{n-3}}}{1-x_{2}}\right]^{\mathcal{P} _{n-3}}. 
\tag{5.7}\] Now we can expand the brackets \[A^{\{p_{1},p_{2},\cdots,p_{r}\},0,0}\] \[=\int_{0}^{1}dx_{n-2}\int_{0}^{x_{n-2}}dx_{n-3}\cdots\int_{0}^{x_{4 }}dx_{3}\int_{0}^{x_{3}}dx_{2}\times\prod_{0\leq i<j\leq n-1}\left(x_{j}-x_{i} \right)^{k_{ij}}\] \[\times\sum_{J_{1}^{1}+J_{2}^{1}+\cdots+J_{n-4}^{1}+J_{n-3}^{1} \equiv\mathcal{P}_{1}}\frac{\mathcal{P}_{1}!}{J_{1}^{1}!J_{2}^{1}!\cdots J_{n- 4}^{1}!J_{n-3}^{1}!}\left(\frac{k_{3}^{T_{1}}}{x_{3}-x_{2}}\right)^{J_{1}^{1}} \left(\frac{k_{4}^{T_{1}}}{x_{4}-x_{2}}\right)^{J_{2}^{1}}\cdots\left(\frac{k_ {n-1}^{T_{1}}}{1-x_{2}}\right)^{J_{n-3}^{1}}\] \[\times\sum_{J_{1}^{2}+J_{2}^{2}+\cdots+J_{n-5}^{2}+J_{n-4}^{2} =\mathcal{P}_{2}}\frac{\mathcal{P}_{2}!}{J_{1}^{2}!J_{2}^{2}!\cdots J_{n-5}^{ 2}!J_{n-4}^{2}!}\left(\frac{k_{4}^{T_{2}}}{x_{4}-x_{2}}\right)^{J_{1}^{2}} \cdots\left(\frac{k_{n-1}^{T_{2}}}{1-x_{2}}\right)^{J_{n-4}^{2}}\] \[\vdots\] \[\times\sum_{J_{1}^{n-3}=\mathcal{P}_{n-3}=0}\frac{\mathcal{P}_{n -3}!}{J_{1}^{n-3}!}\left(\frac{k_{n-1}^{T_{n-3}}}{1-x_{2}}\right)^{J_{1}^{n-3}} \tag{5.8}\] where all \(J\) are non-negative integers. (Note that all \(J_{j}^{\sigma\geq r+1}=0\) due to \(\mathcal{P}_{\sigma\geq r+1}=0\).) After some rearrangements, we can derive \[A^{\{p_{1},p_{2},\cdots,p_{n-3}\},0,0}\] \[=\sum_{J_{1}^{1}+J_{2}^{1}+\cdots+J_{n-4}^{1}+J_{n-3}^{1}\equiv \mathcal{P}_{1}}\frac{\mathcal{P}_{1}!}{J_{1}^{1}!J_{2}^{1}!\cdots J_{n-4}^{1}!J_{n-3}^{1}!}\left(k_{3}^{T_{1}}\right)^{J_{1}^{1}}\left(k_{4}^{T_{1}}\right) ^{J_{2}^{1}}\cdots\left(k_{n-1}^{T_{1}}\right)^{J_{n-3}^{1}}\] \[\times\sum_{J_{1}^{2}+J_{2}^{2}+\cdots+J_{n-5}^{2}+J_{n-4}^{2} =\mathcal{P}_{2}}\frac{\mathcal{P}_{2}!}{J_{1}^{2}!J_{2}^{2}!\cdots J_{n-5}^{2 }!J_{n-4}^{2}!}\left(k_{4}^{T_{2}}\right)^{J_{1}^{2}}\left(k_{5}^{T_{2}}\right) ^{J_{2}^{2}}\cdots\left(k_{n-1}^{T_{2}}\right)^{J_{n-4}^{2}}\] \[\vdots\] \[\times\sum_{J_{1}^{n-3}=\mathcal{P}_{n-3}=0}\frac{\mathcal{P}_{n- 3}!}{J_{1}^{n-3}!}\left(k_{n-1}^{T_{n-3}}\right)^{J_{1}^{n-3}}\] \[\times\int_{0}^{1}dx_{n-2}\int_{0}^{x_{n-2}}dx_{n-3}\cdots\int_{0} ^{x_{4}}dx_{3}\int_{0}^{x_{3}}dx_{2}\times\prod_{0\leq i<j\leq n-1}\left(x_{j}- x_{i}\right)^{k_{ij}}\] \[\times\left(\frac{1}{x_{3}-x_{2}}\right)^{J_{1}^{1}}\left(\frac{1} {x_{4}-x_{2}}\right)^{J_{2}^{1}}\cdots\left(\frac{1}{1-x_{2}}\right)^{J_{n-3}^{ 1}}\] \[\times\left(\frac{1}{x_{4}-x_{2}}\right)^{J_{1}^{2}}\cdots\left( \frac{1}{1-x_{2}}\right)^{J_{n-4}^{2}}\] \[\vdots\] \[\times\left(\frac{1}{1-x_{2}}\right)^{J_{1}^{n-3}}. 
\tag{5.9}\]
We can collect terms with the same sum of subscripts and superscripts to get
\[A^{\{p_{1},p_{2},\cdots,p_{n-3}\},0,0} =\prod_{\sigma=1}^{n-3}\left[\sum_{\sum_{j=1}^{n-2-\sigma}J_{j}^{\sigma}=\mathcal{P}_{\sigma}}\left(\mathcal{P}_{\sigma}!\prod_{j=1}^{n-2-\sigma}\frac{\left(k_{j+\sigma+1}^{T_{\sigma}}\right)^{J_{j}^{\sigma}}}{J_{j}^{\sigma}!}\right)\right] \tag{5.10}\]
\[\times\int_{0}^{1}dx_{n-2}\int_{0}^{x_{n-2}}dx_{n-3}\cdots\int_{0}^{x_{4}}dx_{3}\int_{0}^{x_{3}}dx_{2}\times\prod_{0\leq i<j\leq n-1}\left(x_{j}-x_{i}\right)^{k_{ij}-\delta_{i2}\left(J_{j-2}^{1}+\cdots+J_{1}^{j-2}\right)}. \tag{5.11}\]
To perform the integral of the last line in Eq.(5.11), let's do the following change of variables
\[x_{i}=\prod_{k=i}^{n-2}z_{k},\text{ or }x_{2}=z_{2}z_{3}\cdots z_{n-2},\ x_{3}=z_{3}z_{4}\cdots z_{n-2},\ \cdots,\ x_{n-2}=z_{n-2},\ x_{n-1}=z_{n-1}=1 \tag{5.12}\]
to make all the integral intervals from \(0\) to \(1\). The integral becomes
\[\int_{0}^{1}dx_{n-2}\int_{0}^{x_{n-2}}dx_{n-3}\cdots\int_{0}^{x_{4}}dx_{3}\int_{0}^{x_{3}}dx_{2}\times\prod_{0\leq i<j\leq n-1}\left(x_{j}-x_{i}\right)^{k_{ij}-\delta_{i2}\left(J_{j-2}^{1}+\cdots+J_{1}^{j-2}\right)}\]
\[=\int_{0}^{1}dz_{n-2}\cdots\int_{0}^{1}dz_{3}\int_{0}^{1}dz_{2}\prod_{i=1}^{n-4}\left(z_{i+2}\right)^{i}\prod_{0\leq i<j\leq n-1}\left(\prod_{k=j}^{n-2}z_{k}-\prod_{k=i}^{n-2}z_{k}\right)^{k_{ij}-\delta_{i2}\left(J_{j-2}^{1}+\cdots+J_{1}^{j-2}\right)}\]
\[=\int_{0}^{1}dz_{n-2}\cdots\int_{0}^{1}dz_{2}\times\prod_{i=1}^{n-4}\left(z_{i+2}\right)^{i}\prod_{0\leq i<j\leq n-1}\left[\prod_{k=j}^{n-2}z_{k}\left(1-\prod_{k=i}^{j-1}z_{k}\right)\right]^{k_{ij}-\delta_{i2}\left(J_{j-2}^{1}+\cdots+J_{1}^{j-2}\right)}.
\tag{5.13}\] Now, the amplitude can be explicitly written as \[A^{\{p_{1},p_{2},\cdots,p_{n-3}\},0,0}\] \[=\prod_{\sigma=1}^{n-3}\left[\sum_{j=1}^{n-2-\sigma}J_{j}^{\sigma }=\mathcal{P}_{\sigma}\left(\mathcal{P}_{\sigma}!\prod_{j=1}^{n-2-\sigma}\frac {\left(k_{j+\sigma+1}^{T_{\sigma}}\right)^{J_{j}^{\sigma}}}{J_{j}^{\sigma}!} \right)\right]\] \[\times\int_{0}^{1}dz_{n-2}\int_{0}^{1}dz_{n-3}\cdots\int_{0}^{1} dz_{3}\int_{0}^{1}dz_{2}\] \[\times z_{2}^{k_{12}}z_{3}^{k_{123}+1-J_{1}^{1}}z_{4}^{k_{123}+2- J_{1}^{1}-\left(J_{1}^{2}+J_{2}^{1}\right)}\cdots z_{n-2}^{k_{1,\cdots,n-2}+(n-4)-J_{1}^{1 }-\left(J_{1}^{2}+J_{2}^{1}\right)-\cdots-\left(J_{1}^{n-4}+\cdots+J_{1-4}^{1}\right)}\] \[\times\left(1-z_{2}\right)^{k_{23}-J_{1}^{1}}\left(1-z_{2}z_{3} \right)^{k_{24}-\left(J_{1}^{2}+J_{2}^{1}\right)}\cdots\left(1-z_{2}z_{3}z_{4} \cdots z_{n-2}\right)^{k_{2,n-1}-\left(J_{1}^{n-3}+\cdots+J_{n-3}^{1}\right)}\] \[\times\left(1-z_{3}\right)^{k_{34}}\left(1-z_{3}z_{4}\right)^{k_{ 35}}\cdots\left(1-z_{3}z_{4}\cdots z_{n-2}\right)^{k_{3,n-1}}\] \[\vdots\] \[\times\left(1-z_{n-3}\right)^{k_{n-3,n-2}}\left(1-z_{n-3}z_{n-2} \right)^{k_{n-3,n-1}}\] \[\times\left(1-z_{n-2}\right)^{k_{n-2,n-1}}. \tag{5.14}\] Let's rearrange the above equation to get a more symmetric form in the following \[A^{\{p_{1},p_{2},\cdots,p_{n-3}\},0,0}\] \[=\prod_{\sigma=1}^{n-3}\left[\sum_{j=1}^{n-2-\sigma}J_{j}^{\sigma}= \mathcal{P}_{\sigma}\left(\mathcal{P}_{\sigma}!\prod_{j=1}^{n-2-\sigma}\frac{ \left(k_{j+\sigma+1}^{T_{\sigma}}\right)^{J_{j}^{\sigma}}}{J_{j}^{\sigma}!} \right)\right]\] \[\times\int_{0}^{1}dz_{n-2}\int_{0}^{1}dz_{n-3}\cdots\int_{0}^{1} dz_{3}\int_{0}^{1}dz_{2}.\] \[\times z_{2}^{k_{12}}z_{3}^{k_{123}+1-J_{1}^{1}}z_{4}^{k_{123}+ 2-J_{1}^{1}-\left(J_{1}^{1}+J_{2}^{1}\right)}\ldots z_{n-2}^{k_{1,\cdots,n-2}+ \left(n-4\right)-J_{1}^{1}-\left(J_{1}^{2}+J_{2}^{1}\right)-\cdots-\left(J_{1 }^{n-4}+\cdots+J_{n-4}^{1}\right)}\] \[\times\left(1-z_{2}\right)^{k_{23}-J_{1}^{1}}\left(1-z_{3}\right) ^{k_{34}}\left(1-z_{4}\right)^{k_{45}}\cdots\left(1-z_{n-2}\right)^{k_{n-2,n- 1}}\] \[\times\left(1-z_{2}z_{3}\right)^{k_{24}-\left(J_{1}^{2}+J_{2}^{1} \right)}\left(1-z_{2}z_{3}z_{4}\right)^{k_{25}-\left(J_{1}^{3}+J_{2}^{2}+J_{3} ^{1}\right)}\cdots\left(1-z_{2}z_{3}z_{4}\cdots z_{n-2}\right)^{k_{2,n-1}-\left( J_{1}^{n-3}+\cdots+J_{n-3}^{1}\right)}\] \[\times\left(1-z_{3}z_{4}\right)^{k_{35}}\cdots\left(1-z_{3}z_{4} \cdots z_{n-2}\right)^{k_{3,n-1}}\] \[\vdots\] \[\times\left(1-z_{n-3}z_{n-2}\right)^{k_{n-3,n-1}}. 
\tag{5.15}\] Then we expand the crossing terms to get \[A^{\{p_{1},p_{2},\cdots,p_{n-3}\},0,0}\] \[=\prod_{\sigma=1}^{n-3}\left[\sum_{\sum_{j=1}^{n-2-\sigma}J_{j}^{ \sigma}=\mathcal{P}_{\sigma}}\left(\mathcal{P}_{\sigma}!\prod_{j=1}^{n-2- \sigma}\frac{\left(k_{j+\sigma+1}^{T_{\sigma}}\right)^{J_{j}^{\sigma}}}{J_{j}^ {\sigma}!}\right)\right]\] \[\times\int_{0}^{1}dz_{n-2}\int_{0}^{1}dz_{n-3}\cdots\int_{0}^{1} dz_{3}\int_{0}^{1}dz_{2}\] \[\times z_{2}^{k_{12}}z_{3}^{k_{123}+1-J_{1}^{1}}z_{4}^{k_{123}+2- J_{1}^{1}-\left(J_{1}^{2}+J_{2}^{1}\right)}\ldots z_{n-2}^{k_{1,\cdots,n-2}+\left(n-4 \right)-J_{1}^{1}-\left(J_{1}^{2}+J_{2}^{1}\right)-\cdots-\left(J_{1}^{n-4}+ \cdots+J_{n-4}^{1}\right)}\] \[\times\left(1-z_{2}\right)^{k_{32}-J_{1}^{1}}\left(1-z_{3}\right) ^{k_{34}}\left(1-z_{4}\right)^{k_{45}}\cdots\left(1-z_{n-2}\right)^{k_{n-2,n- 1}}\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}+\left(J_{1}^{2}+J_{2}^{1} \right)\right]_{m_{23}}}{m_{23}!}\left(z_{2}z_{3}\right)^{m_{23}}\cdots\sum_{m _{24}}\frac{\left[-k_{2,n-1}+\left(J_{1}^{n-3}+\cdots+J_{n-3}^{1}\right) \right]_{m_{2,n-2}}}{m_{2,n-2}!}\left(z_{2}z_{3}z_{4}\cdots z_{n-2}\right)^{m _{2,n-2}}\] \[\times\sum_{m_{34}}\frac{\left(-k_{35}\right)_{m_{34}}}{m_{34}!} \left(z_{3}z_{4}\right)^{m_{34}}\cdots\sum_{m_{3,n-2}}\frac{\left(-k_{3,n-1} \right)_{m_{3,n-2}}}{m_{3,n-2}!}\left(z_{3}z_{4}\cdots z_{n-2}\right)^{m_{3,n -2}}\] \[\vdots\] \[\times\sum_{m_{n-3,n-2}}\frac{\left(-k_{n-3,n-1}\right)_{m_{3,n-2} }}{m_{n-3,n-2}!}\left(z_{n-3}z_{n-2}\right)^{m_{n-3,n-2}} \tag{5.16}\] where the subscripts of \(m_{ij}\) keep record of the first and the last subscripts of \(\left(z_{i}z_{i+1}\cdots z_{j-1}z_{j}\right)\) etc.. The amplitude becomes \[A^{\{p_{1},p_{2},\cdots,p_{n-3}\},0,0}\] \[=\prod_{\sigma=1}^{n-3}\left[\sum_{\sum_{j=1}^{n-2-\sigma}J_{j}^{ \sigma}=\mathcal{P}_{\sigma}}\left(\mathcal{P}_{\sigma}!\prod_{j=1}^{n-2-\sigma }\frac{\left(k_{j+\sigma+1}^{T_{\sigma}}\right)^{J_{j}^{\sigma}}}{J_{j}^{\sigma 1}} \right)\right]\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}+\left(J_{1}^{2}+J_{2}^{1} \right)\right]_{m_{23}}}{m_{23}!}\ldots\sum_{m_{24}}\frac{\left[-k_{2,n-1}+ \left(J_{1}^{n-3}+\cdots+J_{n-3}^{1}\right)\right]_{m_{2,n-2}}}{m_{2,n-2}!}\] \[\times\sum_{m_{34}}\frac{\left(-k_{35}\right)_{m_{34}}}{m_{34}!} \ldots\sum_{m_{3,n-2}}\frac{\left(-k_{3,n-1}\right)_{m_{3,n-2}}}{m_{3,n-2}!}\] \[\vdots\] \[\times\sum_{m_{n-3,n-2}}\frac{\left(-k_{n-3,n-1}\right)_{m_{3,n- 2}}}{m_{n-3,n-2}!}\] \[\times\int_{0}^{1}dz_{n-2}\int_{0}^{1}dz_{n-3}\cdots\int_{0}^{1} dz_{3}\int_{0}^{1}dz_{2}.\] \[\times z_{2}^{k_{12}+\sum_{i\leq 2\leq j}m_{ij}}z_{3}^{k_{123}+ 1-\sum_{i+j\leq 2}J_{j}^{i}+\sum_{i\leq 3\leq j}m_{ij}}\ldots z_{n-2}^{k_{1, \cdots,n-2}+\left(n-4\right)-\sum_{i+j\leq n-3}J_{j}^{i}+\sum_{i\leq n-2\leq j }m_{ij}}\] \[\times(1-z_{2})^{k_{23}-J_{1}^{1}}\left(1-z_{3}\right)^{k_{34}} \cdots(1-z_{n-2})^{k_{n-2,n-1}}\,. 
\tag{5.17}\] After integration, we can write it as \[A^{\{p_{1},p_{2},\cdots,p_{n-3}\},0,0}\] \[=\prod_{\sigma=1}^{n-3}\left[\sum_{\sum_{j=1}^{n-2-\sigma}J_{j}^{ \sigma}=\mathcal{P}_{\sigma}}\left(\mathcal{P}_{\sigma}!\prod_{j=1}^{n-2- \sigma}\frac{\left(k_{j+\sigma+1}^{T_{\sigma}}\right)^{J_{j}^{\sigma}}}{J_{j}^ {\sigma 1}}\right)\right]\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}+\left(J_{1}^{2}+J_{2}^{1} \right)\right]_{m_{23}}}{m_{23}!}\sum_{m_{24}}\frac{\left[-k_{25}+\left(J_{1} ^{3}+J_{2}^{2}+J_{3}^{1}\right)\right]_{m_{24}}}{m_{24}!}\ldots\sum_{m_{24}} \frac{\left[-k_{2,n-1}+\left(J_{1}^{n-3}+\cdots+J_{n-3}^{1}\right)\right]_{m_{ 2,n-2}}}{m_{2,n-2}!}\] \[\times\sum_{m_{34}}\frac{\left(-k_{35}\right)_{m_{34}}}{m_{34}!} \ldots\sum_{m_{3,n-2}}\frac{\left(-k_{3,n-1}\right)_{m_{3,n-2}}}{m_{3,n-2}!}\] \[\vdots\] \[\times\frac{\Gamma\left(k_{12}+1+\sum_{i\leq 2\leq j}m_{ij}\right) \Gamma\left(k_{23}+1-J_{1}^{1}\right)}{\Gamma\left(k_{12}+k_{23}+2-J_{1}^{1}+ \sum_{i\leq 2\leq j}m_{ij}\right)}\] \[\times\frac{\Gamma\left(k_{123}+2-\sum_{i+j\leq 2}J_{j}^{i}+ \sum_{i\leq 3\leq j}m_{ij}\right)\Gamma\left(k_{34}+1\right)}{\Gamma\left(k_{123}+k_{ 34}+3-\sum_{i+j\leq 2}J_{j}^{i}+\sum_{i\leq 3\leq j}m_{ij}\right)}\] \[\vdots\] \[\times\frac{\Gamma\left(k_{1,\cdots,n-2}+\left(n-3\right)-\sum_{i +j\leq n-3}J_{j}^{i}+\sum_{i\leq n-2\leq j}m_{ij}\right)\Gamma\left(k_{n-2,n-1 }+1\right)}{\Gamma\left(k_{1,\cdots,n-2}+k_{n-2,n-1}+\left(n-2\right)-\sum_{i +j\leq n-3}J_{j}^{i}+\sum_{i\leq n-2\leq j}m_{ij}\right)}. \tag{5.18}\] Now we choose to work on the Regge regime defined by \[k_{1,\cdots,n-2}\sim s,k_{1,\cdots,n-2}+k_{n-2,n-1}\sim t. \tag{5.19}\] In this regime, the RSSA can be approximated as \[A^{\{p_{1},p_{2},\cdots,p_{n-3}\},0,0}\] \[\sim\prod_{\sigma=1}^{n-3}\left[\sum_{\sum_{j=1}^{n-2-\sigma}J_{j}^ {s}=\mathcal{P}_{\sigma}}\left(\mathcal{P}_{\sigma}!\prod_{j=1}^{n-2-\sigma} \frac{\left(k_{j+\sigma+1}^{T_{\sigma}}\right)_{j}^{J_{j}^{\sigma}}}{J_{j}^{ \sigma}!}\right)\right]\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}+\left(J_{1}^{2}+J_{2}^{2} \right)\right]_{m_{23}}}{m_{23}!}\sum_{m_{24}}\frac{\left[-k_{25}+\left(J_{1}^ {3}+J_{2}^{2}+J_{3}^{1}\right)\right]_{m_{24}}}{m_{24}!}\cdots\sum_{m_{2,n-2}} \frac{\left[-k_{2,n-1}+\left(J_{1}^{n-3}+\cdots+J_{n-3}^{1}\right)\right]_{m_{ 2,n-2}}}{m_{2,n-2}!}\] \[\times\sum_{m_{34}}\frac{\left(-k_{35}\right)_{m_{34}}}{m_{34}!} \cdots\sum_{m_{3,n-2}}\frac{\left(-k_{3,n-1}\right)_{m_{3,n-2}}}{m_{3,n-2}!}\] \[\vdots\] \[\times\sum_{m_{n-3,n-2}}\frac{\left(-k_{n-3,n-1}\right)_{m_{3,n-2 }}}{m_{n-3,n-2}!}\] \[\times\frac{\Gamma\left(k_{12}+1+\sum_{i\leq 2\leq j}m_{ij} \right)\Gamma\left(k_{23}+1-J_{1}^{1}\right)}{\Gamma\left(k_{12}+k_{23}+2-J_{1 }^{1}+\sum_{i\leq 2\leq j}m_{ij}\right)}\] \[\times\frac{\Gamma\left(k_{123}+2-\sum_{i+j\leq 2}J_{j}^{i}+\sum_{i \leq 3\leq j}m_{ij}\right)\Gamma\left(k_{34}+1\right)}{\Gamma\left(k_{123}+k_{34}+ 3-\sum_{i+j\leq 2}J_{j}^{i}+\sum_{i\leq 3\leq j}m_{ij}\right)}\] \[\vdots\] \[\times\frac{\left(k_{1,\cdots,n-2}\right)^{-\sum_{i+j\leq n-3}J_{j }^{i}+\sum_{i\leq n-2\leq j}m_{ij}}\Gamma\left(k_{1,\cdots,n-2}+\left(n-3 \right)\right)\Gamma\left(k_{n-2,n-1}+1\right)}{\left(k_{1,\cdots,n-2}+k_{n-2, n-1}+\left(n-2\right)\right)_{-\sum_{i+j\leq n-3}J_{j}^{i}+\sum_{i\leq n-2\leq j}m_{ij} }\Gamma\left(k_{1,\cdots,n-2}+k_{n-2,n-1}+\left(n-2\right)\right)}. 
\tag{5.20}\] To get the leading order in \(k_{1,\cdots,n-2}\)- \(s\), we take \[J_{j}^{i}=0,\ (\text{for all }i+j\leq n-3) \tag{5.21}\] or \[J_{1}^{1}=J_{2}^{1}=\cdots=J_{n-4}^{1}=0,\] \[J_{1}^{2}=\cdots=J_{n-5}^{2}=0,\] \[J_{1}^{r}=\cdots=J_{n-r-3}^{r}=0 \tag{5.22}\] which imply \[J_{n-3}^{1}=N+p_{1},J_{n-4}^{2}=p_{2},\cdots,J_{n-r-2}^{r}=p_{r}. \tag{5.23}\] Finally, the leading order term of the amplitude is \[A^{\{p_{1},p_{2},\cdots,p_{r}\},0,0}\] \[\sim\prod_{\sigma=1}^{r}\left[\left(k_{n-1}^{T_{\sigma}}\right)^{ \mathcal{P}_{\sigma}}\right]\] \[\times\sum_{m_{23}}\frac{\left[-k_{24}\right]_{m_{23}}}{m_{23}!} \sum_{m_{24}}\frac{\left[-k_{25}\right]_{m_{24}}}{m_{24}!}\cdots\sum_{m_{24}} \frac{\left[-k_{2,n-1}\right]_{m_{2,n-2}}}{m_{2,n-2}!}\] \[\times\sum_{m_{34}}\frac{\left(-k_{35}\right)_{m_{34}}}{m_{34}!} \cdots\sum_{m_{3,n-2}}\frac{\left(-k_{3,n-1}\right)_{m_{3,n-2}}}{m_{3,n-2}!}\] \[\times\sum_{m_{n-3,n-2}}\frac{\left(-k_{n-3,n-1}\right)_{m_{3,n-2 }}}{m_{n-3,n-2}!}\] \[\times\frac{\Gamma\left(k_{12}+1+\sum_{i\leq 2\leq j}m_{ij} \right)\Gamma\left(k_{23}+1\right)}{\Gamma\left(k_{12}+k_{23}+2+\sum_{i\leq 2 \leq j}m_{ij}\right)}\] \[\times\frac{\Gamma\left(k_{123}+2+\sum_{i\leq 3\leq j}m_{ij} \right)\Gamma\left(k_{34}+1\right)}{\Gamma\left(k_{123}+k_{34}+3-\sum_{i+j \leq 2}J_{j}^{i}+\sum_{i\leq 3\leq j}m_{ij}\right)}\] \[\vdots\] \[\times\frac{\left(k_{1,\cdots,n-2}\right)^{\sum_{i\leq n-2\leq j} m_{ij}}\Gamma\left(k_{1,\cdots,n-2}+\left(n-3\right)\right)\Gamma\left(k_{n-2,n-1}+1 \right)}{\left(k_{1,\cdots,n-2}+k_{n-2,n-1}+(n-2)\right)_{\sum_{i\leq n-2\leq j }m_{ij}}\Gamma\left(k_{1,\cdots,n-2}+k_{n-2,n-1}+(n-2)\right)}\] \[=\prod_{\sigma=1}^{r}\left[\left(k_{n-1}^{T_{\sigma}}\right)^{ \mathcal{P}_{\sigma}}\right]\times\text{(factors independent of $J_{q}^{r}$s )}. \tag{5.24}\] The ratios of the amplitudes are \[\frac{A^{\{p_{1},p_{2},\cdots,p_{r}\},0,0}}{A^{\{0,0,\cdots,0\}, 0,0}} =\left(k_{n-1}^{T_{1}}\right)^{p_{1}}\left(k_{n-1}^{T_{2}}\right)^{ p_{2}}\cdots\left(k_{n-1}^{T_{r}}\right)^{p_{r}},\] \[=\left(\Omega_{2}^{n-1}\right)^{p_{1}}\left(\Omega_{3}^{n-1} \right)^{p_{2}}\cdots\left(\Omega_{r+1}^{n-1}\right)^{p_{r}},\] \[=\left(\omega_{1}\right)^{p_{1}}\left(\omega_{2}\right)^{p_{2}} \cdots\left(\omega_{r}\right)^{p_{r}}, \tag{5.25}\] which is the same as Eq.(2.31) with \(m=q=0\). ## VI Conclusion In this paper, we first give a review with detailed calculations of ratios among HSSA at each fixed mass level to demonstrate the stringy scaling behavior in the hard scattering limit. We then extend the calculations and discover a similar stringy scaling behavior for a class of \(n\)-point RSSA. The number of independent kinematics variables of these RSSA is found to be reduced by \(\mathrm{dim}\mathcal{M}\), similar to those of the HSSA. These stringy scaling behaviors are reminiscent of deep inelastic scattering of electron and proton where the two structure functions \(W_{1}(Q^{2},\nu)\) and \(W_{2}(Q^{2},\nu)\) scale, and become not functions of 2 kinematics variables \(Q^{2}\) and \(\nu\) independently but only of their ratio \(Q^{2}/\nu\). Thus the number of independent kinematics variables reduces from 2 to 1. Indeed, it is now well-known that the structure functions scale as [33] \[MW_{1}(Q^{2},\nu)\to F_{1}(x),\quad\nu W_{2}(Q^{2},\nu)\to F_{2}(x) \tag{6.1}\] where \(x\) is the Bjorken variable and \(M\) is the proton mass. Moreover, due to the spin-\(\frac{1}{2}\) assumption of quark, Callan and Gross derived the following relation [34] \[2xF_{1}(x)=F_{2}(x). 
\tag{6.2}\] Both of these scaling behaviors, the reduction of the number of kinematics variables in Eq.(6.1) and the number of structure functions in Eq.(6.2) in the hard scattering limit of quark-parton model in QCD seem to persist in some way in the HSSA and some RSSA of string theory. We believe that, comparing to hard QCD scaling, high energy stringy scaling in general has not been well studied yet in the literature [35]. More new phenomena of stringy scaling remain to be uncovered. ###### Acknowledgements. This work is supported in part by the National Science and Technology Council (NSTC) and S.T. Yau center of National Yang Ming Chiao Tung University (NYCU), Taiwan. We thank H. Kawai and Y. Okawa for givng many valuable comments on stringy scaling before the publication.
2309.14283
Using the Gerchberg-Saxton algorithm to reconstruct non-modulated pyramid wavefront sensor measurements
Adaptive optics (AO) is a technique to improve the resolution of ground-based telescopes by correcting, in real-time, optical aberrations due to atmospheric turbulence and the telescope itself. With the rise of Giant Segmented Mirror Telescopes (GSMT), AO is needed more than ever to reach the full potential of these future observatories. One of the main performance drivers of an AO system is the wavefront sensing operation, consisting of measuring the shape of the above mentioned optical aberrations. Aims. The non-modulated pyramid wavefront sensor (nPWFS) is a wavefront sensor with high sensitivity, allowing the limits of AO systems to be pushed. The high sensitivity comes at the expense of its dynamic range, which makes it a highly non-linear sensor. We propose here a novel way to invert nPWFS signals by using the principle of reciprocity of light propagation and the Gerchberg-Saxton (GS) algorithm. We test the performance of this reconstructor in two steps: the technique is first implemented in simulations, where some of its basic properties are studied. Then, the GS reconstructor is tested on the Santa Cruz Extreme Adaptive optics Laboratory (SEAL) testbed located at the University of California Santa Cruz. This new way to invert the nPWFS measurements allows us to drastically increase the dynamic range of the reconstruction for the nPWFS, pushing the dynamics close to a modulated PWFS. The reconstructor is an iterative algorithm requiring heavy computational burden, which could be an issue for real-time purposes in its current implementation. However, this new reconstructor could still be helpful in the case of many wavefront control operations. This reconstruction technique has also been successfully tested on the Santa Cruz Extreme AO Laboratory (SEAL) bench where it is now used as the standard way to invert nPWFS signal.
Vincent Chambouleyron, Aditya Sengupta, Maïssa Salama, Maaike A. M van Kooten, Benjamin L. Gerard, Sebastiaan Y. Haffert, Sylvain Cetre, Daren Dillon, Renate Kupke, Rebecca Jensen-Clem, Phil Hinz, Bruce Macintosh
2023-09-25T16:48:21Z
http://arxiv.org/abs/2309.14283v1
Using the Gerchberg-Saxton algorithm to reconstruct non-modulated pyramid wavefront sensor measurements ###### Abstract Context: Adaptive optics (AO) is a technique to improve the resolution of ground-based telescopes by correcting, in real-time, optical aberrations due to atmospheric turbulence and the telescope itself. With the rise of Giant Segmented Mirror Telescopes (GSMT), AO is needed more than ever to reach the full potential of these future observatories. One of the main performance drivers of an AO system is the wavefront sensing operation, consisting of measuring the shape of the above-mentioned optical aberrations. Aims: The non-modulated pyramid wavefront sensor (nPWFS) is a wavefront sensor with high sensitivity, allowing the limits of AO systems to be pushed. The high sensitivity comes at the expense of its dynamic range, which makes it a highly non-linear sensor. We propose here a novel way to invert nPWFS signals by using the principle of reciprocity of light propagation and the Gerchberg-Saxton (GS) algorithm. Methods: We test the performance of this reconstructor in two steps: the technique is first implemented in simulations, where some of its basic properties are studied. Then, the GS reconstructor is tested on the Santa Cruz Extreme Adaptive optics Laboratory (SEAL) testbed located at the University of California Santa Cruz. Results: This new way to invert the nPWFS measurements allows us to drastically increase the dynamic range of the reconstruction for the nPWFS, pushing the dynamics close to those of a modulated PWFS. The reconstructor is an iterative algorithm requiring a heavy computational burden, which could be an issue for real-time purposes in its current implementation. However, this new reconstructor could still be helpful in the case of many wavefront control operations. This reconstruction technique has also been successfully tested on the Santa Cruz Extreme AO Laboratory (SEAL) bench, where it is now used as the standard way to invert nPWFS signals. Conclusions: ## 1 Introduction The pyramid wavefront sensor (PWFS) (Ragazzoni, 1996) falls under the category of Fourier-filtering wavefront sensors (Fauvarque, 2017), which are commonly used to measure aberrations in optical systems. Inspired by the Foucault knife test, the original PWFS consists of a 4-sided glass pyramid located at an intermediate focal plane and a detector that captures images of the four pupils created by each beam passing through the different faces of the pyramid. This configuration efficiently converts phase into intensity, but lacks the necessary dynamic range to accurately measure atmospheric turbulence aberrations, which can induce optical path differences on the order of several waves. To address this issue, the PWFS is often paired with a modulator, which causes the electromagnetic (EM) field to circulate around the pyramid tip during the camera's acquisition time. Modulation drastically improves the PWFS's linearity, but it comes with three main drawbacks. First, the modulation leads to a strong decrease in PWFS sensitivity, especially for low-order modes. Secondly, modulating the PWFS alters the nature of the signal (Verinaud, 2004; Guyon, 2005), causing the response to resemble that of a slope-sensor, hence making it difficult to detect phase discontinuities.
This is particularly problematic for the next generation of Giant Segmented Mirror Telescopes (GSMTs), where wavefront control will involve correcting not only turbulence-induced aberrations but also those induced by the telescope itself (fragmentation (Schwartz et al., 2017; Bertou-Cantou et al., 2022; Demers et al., 2022) and segmentation (Chanan et al., 1998)). Finally, adding the modulation mirror stage requires the use of more optics and leads to difficulties related to fast steering components (stability issues, temperature constraints, speed limitations, failure risks, etc.). Therefore, extending the dynamic range of the non-modulated PWFS (nPWFS) to remove the need for modulation could allow the use of the PWFS at its full potential while removing the requirement for moving parts, making it a less complex system. A lot of studies have already been done to deal with PWFS non-linearities, and several options have been envisaged. It was proposed to keep the matrix formalism and consider the PWFS as a linear parameter-varying system: the reconstruction matrix evolves according to the phase to be measured (Korkiakoski et al., 2008; Deo et al., 2019), but this technique usually requires some knowledge of the statistics of the measured phase (Chambouleyron et al., 2020). Gradient-descent methods have also been investigated (Frazin, 2018; Hutterer et al., 2023). Finally, another approach, more appealing today, is to use machine learning to reconstruct the nPWFS signal (Landman and Haffert, 2020; Nousiainen et al., 2022; Archinuk et al., 2023). The goal of this paper is to present a novel approach to invert the nPWFS signal. This approach is based on the principle of reciprocity of light propagation and the Gerchberg-Saxton (GS) algorithm. In section 2, we will present in detail the principle of the reconstruction algorithm. Section 3 will assess the basic performance of the reconstructor, mainly in terms of dynamic range but also in terms of noise propagation and broadband performance. Section 4 will highlight the results of an experimental implementation of this new reconstructor on the Santa Cruz Extreme Adaptive optics Laboratory (SEAL) testbed (Jensen-Clem et al., 2021). Finally, section 5 will describe a possible way to push even further the dynamic range and convergence speed of this reconstructor by using sensor-fusion. ## 2 A new method to invert non-modulated PWFS signal ### Reciprocity of light propagation principle The reconstructor presented in this paper relies on one of the most basic properties of light propagation: the principle of reciprocity of light propagation. The idea is to construct a high-fidelity numerical model of the nPWFS and use the measurements to send the light backwards in the numerical nPWFS. This technique could actually apply to any Fourier-filtering wavefront sensor, but we will focus only on the nPWFS throughout this study. The numerical model of the nPWFS is built by propagating the light, assuming the Fraunhofer approximation, from one pupil plane to another, while going through the pyramid mask in an intermediate focal plane. In more detail, the EM-field in the nPWFS detector plane can be written as: \[\Psi_{d}=\Psi_{p}\star\widehat{m} \tag{1}\] where \(\Psi_{p}\) is the EM-field in the entrance pupil, \(m\) is the complex shape of the pyramid mask, \(\star\) is the convolution product and \(\widehat{\cdot}\) denotes the Fourier transform operator.
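As an illustration, equation 1 can be implemented numerically with two discrete Fourier transforms: the pupil field is propagated to the focal plane, multiplied by the complex pyramid mask, and propagated again to the detector plane. The sketch below is a minimal numpy implementation under simplifying assumptions (a square array, a purely phase-like four-quadrant mask built from tip-tilt ramps, and standard FFT-shift conventions); the function and variable names are illustrative and do not come from the paper's code.

```python
import numpy as np

def pyramid_phase_mask(n, slope=np.pi / 2):
    """Four-quadrant pyramid modelled as a pure phase mask (illustrative).

    Each focal-plane quadrant applies a linear phase ramp (a tip/tilt),
    which displaces the corresponding pupil image on the detector.
    """
    y, x = np.indices((n, n)) - n // 2
    delta = slope * (np.abs(x) + np.abs(y))           # pyramid-shaped phase (radians)
    return np.exp(1j * delta)

def propagate_to_detector(psi_pupil, mask):
    """Numerical version of equation 1: pupil plane -> pyramid apex -> detector plane."""
    apex = np.fft.fftshift(np.fft.fft2(psi_pupil))    # EM field at the pyramid tip
    return np.fft.ifft2(np.fft.ifftshift(apex * mask))

# Toy usage: flat wavefront over a circular pupil
n = 256
y, x = np.indices((n, n)) - n // 2
pupil = (x**2 + y**2 <= (n // 8) ** 2).astype(float)  # entrance-pupil amplitude A_p
psi_d = propagate_to_detector(pupil, pyramid_phase_mask(n))
detector_intensity = np.abs(psi_d) ** 2               # I_d recorded by the nPWFS camera
```

In practice the measured pupil and pyramid shape of the real instrument would replace these toy quantities, as done for the SEAL testbed in section 4.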
Equation 1 allows us to simply simulate light propagation from the entrance pupil plane to the WFS detector plane. The back propagation of light from the detector plane to the entrance pupil plane is: \[\Psi_{p}=\Psi_{d}\star\widehat{\overline{m}} \tag{2}\] where \(\overline{\cdot}\) is the conjugate operator. This last equation can be simply understood in the nPWFS case: we write \(m\) as \(m=e^{i\Delta}\), where \(\Delta\) is a 2D real function describing the phase corresponding to the pyramid shape. Going through the pyramid in the opposite direction means that the light propagates through the inverse phase mask, _i.e._ \(\overline{m}=e^{-i\Delta}\). We can then write the entrance pupil and detector EM-fields in their complex form: \[\begin{cases}\Psi_{p}&=A_{p}e^{i\phi_{p}}\\ \Psi_{d}&=A_{d}e^{i\phi_{d}}\end{cases} \tag{3}\] where \(A_{p}\) and \(A_{d}\) are the amplitudes and \(\phi_{p}\) and \(\phi_{d}\) are the phases of the electromagnetic fields. The goal of wavefront sensing is to find back the phase \(\phi_{p}\) in the entrance pupil. Light back propagation cannot be easily performed from the detector measurements because we only have access to intensities in the detector plane, \(I_{d}=|\Psi_{d}|^{2}=A_{d}^{2}\). The phase \(\phi_{d}\) is therefore missing in the measurements. Hence, we propose to use an iterative algorithm called the Gerchberg-Saxton (GS) algorithm to propagate the light back and forth in the numerical model of the nPWFS. ### Gerchberg-Saxton algorithm The GS algorithm was first proposed by Gerchberg (1972) and is widely used to perform image sharpening from point spread function (PSF) images (Fienup, 1982; Ragland et al., 2016). To perform this algorithm in our case, we will assume that we have access to a measurement of the entrance pupil amplitude \(A_{p}\) (we will show later an easy practical way to obtain this quantity with the nPWFS). In the nPWFS framework, we have two complex quantities (equation 3) for which the amplitudes are known and with a relation that links them together (equation 1). The principle of the GS algorithm is to propagate the light back and forth in the numerical model of the nPWFS, injecting at each step the knowledge of the amplitudes of the complex quantities we are trying to retrieve. We propose to go through one iteration of the GS algorithm applied to the nPWFS based on the schematic given in figure 1. The amplitudes for the entrance pupil and the detector plane used for this example are true measurements from the SEAL testbed (highlighted with yellow dots). One iteration of the GS algorithm can be split into four parts: 1. We compute the detector EM-field complex amplitude \(A_{d}=\sqrt{I_{d}}\). For the first iteration, the complex EM-field in the detector plane is built by using \(\phi_{d}=\arg(A_{p}\star\widehat{m})\), which corresponds to the phase at the detector plane when a flat wavefront is propagated through the nPWFS system. We therefore have a first estimation of \(\Psi_{d}\) that can be back-propagated in the system (through equation 2). 2. A first estimation of \(\Psi_{p}\) is then obtained. 3. Since we already have access to \(A_{p}\), the amplitude found through back-propagation is discarded and replaced by the measurement of \(A_{p}\), while keeping the estimated phase \(\phi_{p}\). The entrance pupil plane EM-field can then be propagated in the system (direct propagation, through equation 1). 4. A new estimation of \(\Psi_{d}\) is obtained.
As previously done for the entrance pupil plane, we discard the amplitude and replace it by the measurement of \(A_{d}\) given by the detector, and keep the estimated phase \(\phi_{d}\). We can then go back to step 1 and iterate again. In this paper, we will call one iteration of the GS algorithm the numerical operation consisting of going through these four steps (a minimal numerical sketch of one such iteration is given below). The GS algorithm is therefore an iterative algorithm, which for one iteration performs four Fast-Fourier Transforms. It is important to notice that this reconstructor assumes coherence of the EM-field. Therefore, this algorithm does not work for the modulated PWFS. It also raises the question of the impact of measurements with larger spectral bandpasses, which will be tackled in the next section. We also emphasize that this GS principle can be applied to any FFWFS, but in this paper we focus only on the nPWFS. ### Phase unwrapping The phase \(\phi_{p}\) retrieved by the algorithm will be wrapped, modulo \(2\pi\), as optical propagation is done. Therefore, when working with phases with an amplitude larger than \(2\pi\) peak-to-valley (PtV), it is necessary to add an extra step to the reconstruction process: a phase-unwrapping algorithm applied to the phase estimated through the GS reconstruction. A detailed analysis of this step is out of the scope of this study, hence we will restrict ourselves to using a well-known "off-the-shelf" algorithm based on (Ghiglia and Romero, 1994) throughout this paper. ## 3 Performance of the GS algorithm reconstructor This section is dedicated to giving a first overview of the GS algorithm's basic performance. We will focus on how this reconstructor performs in terms of dynamic range, while also evaluating the number of iterations needed, noise propagation and broadband impact on reconstruction. This study is not meant to be exhaustive: parameters like the minimum number of pixels needed on the detector with respect to the amplitude of the phase to be measured will not be assessed, nor will a fine analysis of the impact of model errors be carried out. All the simulations simply match the SEAL testbed configuration. Here are the main simulation parameters: * Each pupil on the PWFS detector has 106 pixels across (matching the SEAL testbed configuration), which is a realistic case as the best low read-out-noise (RON) cameras can run at a few kHz with resolutions around \(250px\times 250px\). We use a Shannon sampling of 2 (4 pixels per \(\lambda/D\)) for the nPWFS model. * For a more realistic simulation, the phase screens are first simulated and propagated with a resolution 4 times bigger (4x106px across) through a high-resolution model of the nPWFS, also with a Shannon sampling of 2. The signal is then binned to produce the nPWFS image with 106px across. * We work with a modal basis of the first 500 Zernike modes (excluding piston), close to the control space that can be achieved with the deformable mirrors installed on the SEAL testbed. * The linear reconstructor is created in a standard fashion by building an interaction matrix and computing its pseudo-inverse. No modes are filtered during the inversion as the measurements are over-sampled, leading to a good conditioning number of the interaction matrix. * Simulation tools used are internally sourced, except for the turbulent phase screens, which are created using the HCIpy library (Por et al., 2018). ### Convergence speed and first comparison with linear reconstructor The first test presented aims at giving an idea of the convergence speed of the GS reconstruction.
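Before turning to these tests, the four steps of section 2.2 can be condensed into a short numerical sketch. It is a simplified, monochromatic illustration built on the same toy forward model as above (no phase unwrapping, illustrative names only), not the actual implementation used for the results that follow.

```python
import numpy as np

def _forward(psi, mask):
    """Pupil plane -> detector plane through the pyramid mask (equation 1)."""
    return np.fft.ifft2(np.fft.ifftshift(np.fft.fftshift(np.fft.fft2(psi)) * mask))

def _backward(psi, mask):
    """Detector plane -> pupil plane through the conjugated mask (equation 2)."""
    return np.fft.ifft2(np.fft.ifftshift(np.fft.fftshift(np.fft.fft2(psi)) * np.conj(mask)))

def gs_reconstruct(I_d, A_p, mask, n_iter=25):
    """Gerchberg-Saxton phase estimate from a single nPWFS frame.

    I_d  : measured detector intensities (A_d**2)
    A_p  : known entrance-pupil amplitude
    mask : complex pyramid mask of the numerical nPWFS model
    """
    A_d = np.sqrt(I_d)
    phi_p = np.zeros_like(A_p)
    # First pass: detector-plane phase obtained by propagating a flat wavefront
    phi_d = np.angle(_forward(A_p, mask))
    for _ in range(n_iter):
        psi_p = _backward(A_d * np.exp(1j * phi_d), mask)  # steps 1-2: back-propagation
        phi_p = np.angle(psi_p)                            # step 3: keep phase, impose A_p
        psi_d = _forward(A_p * np.exp(1j * phi_p), mask)   # steps 3-4: forward propagation
        phi_d = np.angle(psi_d)                            # step 4: keep phase, impose A_d
    return phi_p                                           # wrapped estimate of phi_p
```

The returned phase is wrapped; for input aberrations larger than \(2\pi\) PtV it would then be passed to the phase-unwrapping step of section 2.3.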
It will also be a way to draw a first comparison between the linear model and our reconstructor. For that, we are studying the reconstruction of four different turbulent phase screens following a Von-Karman spectrum and corresponding to four different configuration of \(D/r_{0}\) where \(D\) is the telescope diameter and \(r_{0}\) the Fried parameter. To produce a fair comparison between the linear reconstructor and the GS reconstructor which gives the phase with a much higher resolution (106px across), these screens are projected on the 500 Zernike modes basis before propagation (no aliasing). The reconstruction error is estimated at each GS iteration (estimated phase is systematically unwrapped) and is expressed as a ratio of the rms error in the entrance pupil plane (error \(rms_{rec}/rms_{input}\)). Results are given in figure 2, where the horizontal dashed lines correspond to the linear reconstructor and the x-axis is given in log-scale. One can notice that the GS reconstruction accuracy increases with the number of iterations, before stabilizing. The smaller the phase, the less iterations are needed to reach the best reconstruction. Hence we see that for \(D/r_{0}=1\) only 10 iterations are required, whereas it needs more than 1000 iterations for the case \(D/r_{0}=32\) (corresponding to a value of \(r_{0}=25\)\(cm\) for a \(D=8\)\(m\) telescope). This figure also shows that in any case and after a few iterations, the GS reconstructor performs better than the linear reconstructor. In the small phase regime (in our context, it would correspond to regimes smaller than \(D/r_{0}\approx 0.5\)), both will perform evenly as the nPWFS would Figure 1: Principle of the GS algorithm applied to non-modulated PWFS. Yellow dots are highlighting the fact that these are true data obtained on the SEAL testbed. be working in its linear range. To illustrate reconstruction products, input phase and reconstructed phases (linear and GS after 2000 iterations) for the case \(D/r_{0}=32\) are shown in figure 3. It is clear that even if the GS reconstruction still underestimates the phase (due to nPWFS saturation, this will be discussed in more details in section 4.3), the shape of the retrieved phase is much closer than the one produced by the linear reconstructor. As a side note on the shape given by the linear reconstructor: the phase is highly underestimated and the pattern seems quite different than the input phase with larger high spatial frequencies, suggesting that it would be hard to start a closed loop with such a reconstruction. As the nPWFS is sensitive to phase discontinuities, the linear reconstruction shape could be partially explained by the impact of phase wrapping on measurements. A detailed analysis of this phase wrapping effect on the linear reconstruction is out of the scope of this paper, but remains an important point to understand nPWFS behavior. To illustrate how the GS reconstructed phases evolve with iteration, reconstructed phases for iterations 1, 200, and 2000 are shown in figure 4 for the strongest turbulent case \(D/r_{0}=32\). The top row is presenting the wrapped phase given by the GS algorithm and the bottom row shows the corresponding unwrapped phases. The algorithm seems to quickly converge towards the good overall wrapped shape and then improves the estimation by scaling the phase closer from the input amplitude (by therefore increasing the phase wrapping). 
For figure 2, seeing limited phase screens were used as examples to give a first insight on GS reconstructor behavior. In a true AO system, the nPWFS would typically work in closed loop around residual phases, so typical AO residual phases could have also been used for this previous analysis. We instead decided to use a full power law turbulence phase screen because: (i) The closed loop bootstrapping is done on full turbulence anyways and (ii) using AO residual phase screens would have been required to choose a specific system configuration. In order to refine the comparison between linear and GS reconstructors, it was decided to build linearity curves for Zernike modes. Results are presented in the next section. Finally, from this first analysis, it seems that the GS reconstructor outperforms the linear reconstructor, but it requires tens of iterations to be effective. Each propagation back and forth requires four 2D Fast-Fourier-Transforms (FFTs) (figure 1). In the current implementation, _numpy.fft.fft2_ is used to compute the Fourier transforms on a problem size of [424, 424] (106 pixels across pupil, with a sampling of 2 times Shannon), averaging at \(7.1\pm 0.4\) ms for each FFT on the SEAL control computer. Although the 2D FFTs algorithm scales as \(O(N^{2}\log N)\) overall, problem sizes that are a factor of 2 or that are divisible by large powers of 2 or 3 can run significantly faster. Padding to \(448=2^{6}\times 7\) improves the _numpy.fft.fft2_ runtime to \(3.2\pm 0.07\) ms. With the downsampled problem size of [212, 212] (that could easily be reached by decreasing sampling and reducing the number of subapertures) and a more efficient FFT algorithm called _FFTW_ (Frigo 1999), the runtime goes down to \(640\pm 27\mu\)s, an order of magnitude compared with the initial runtime. This represents an efficiency of about 5500 mflops, below the stated achievable performance for similar CPUs of \(\sim 12000\) mflops. It is likely that an improved CPU would be able to achieve this performance, enabling us to run each FFT at the full problem size in less than 1 ms. Further, it is possible that a lower-level implementation of the GS algorithm would be able to make use of the repeated FFT per iteration, for example by optimizing memory access or by varying the FFT problem size per iteration. Based on prior benchmarking led for the CUDA-based _cuFFT_ implementation (Kunkel et al. 2017), GPU computation could reduce this up to an order of magnitude further. This could allow us to run few iterations of the algorithm within a millisecond, opening the path for closed-loop operation. Figure 4: **Top:** Wrapped phases reconstructed by the GS algorithm. **Bottom:** Corresponding unwrapped phases. Figure 3: **Left:** Input turbulent phase in the case \(D/r_{0}=32\), projected on the first 500 Zernike modes. **Middle:** Linear reconstruction. **Right:** GS reconstruction after 2000 iterations. Figure 2: GS algorithm convergence speed. The GS algorithm is applied for input turbulent screens for four different seeing conditions. Dashed lines represent the corresponding reconstruction error in the linear framework. ### Linearity Curves and dynamic range plots To achieve a more quantitative comparison in terms of dynamic range between the linear and the GS reconstructors, linearity curves for some Zernike modes are analyzed. 
Zernike modes within a full range of amplitudes are sent through the PWFS and then reconstructed, for four different cases: _(i)_ nPWFS with linear reconstructor _(ii)_ modulated PWFS with a modulation radius of \(3\lambda/D\) as a comparison point in terms of dynamic range _(iii)_ nPWFS with GS reconstructor without the unwrapping step _(iv)_ nPWFS with GS reconstructor with phase unwrapping. Results for four different Zernike modes ranging from low spatial frequency to high spatial frequency (\(Z^{6}\), \(Z^{19}\), \(Z^{150}\) and \(Z^{490}\)) are given in figure 5. In this case, the GS reconstructor was used with an arbitrary number of 25 iterations (impact of number of iterations will be assessed further in this section). These linearity curves are showing the well known behavior of the modulated PWFS with respect to the nPWFS: significant increase in dynamic range for low order modes located within the modulation radius, and comparable dynamic range for high-order modes composed of spatial frequencies outside the modulation radius. The GS algorithm without the unwrapping algorithm is showing extended linear range for all the modes, but the response curves are steeply dropping after 1 rad rms of aberrations. It actually corresponds to the amplitude for which the phase starts to wrap (PtV greater than \(2\pi\)), hence the reconstructed phase starts to have a different shape from the unwrapped phase and projection on Zernike modes is modified as more high-order modes are introduced. However, adding the unwrapping step to the reconstruction allows us to better improve the linearity curves after 1 rad rms of input phase. It is clearly demonstrated here that the GS reconstructor added with the unwrapping step allows us to significantly increase the nPWFS dynamic range for all modes. For the plots in figure 5, it was arbitrarily chosen to run 25 iterations for the GS algorithm. The impact of the number of iterations on the linearity curve of the Zernike mode \(Z^{19}\) is assessed in figure 6 where the linearity curve for this mode is shown using 1, 25 and 200 iterations in the case of GS algorithm combined with the unwrapping step. One can notice that for 1 iteration, the linearity curve doesn't follow the \(y=x\) slope around a null phase. That shows that even in a small phase regime, only one iteration of the algorithm is not enough to accurately find back the phase. We noticed that actually only 2 iterations are enough in the small phase regime (in the ideal case of these simulations where there is no model error). Then, as expected the linearity curves improve with the number of iterations, especially for the strong aberrations regime. Linearity curves provided in Figure 5 and 6 exhibit partial demonstrations of the gain in dynamic range, as they do not evaluate the potential non-linear cross talk between modes during reconstruction. Another approach to show the improved dynamics offered by the GS reconstructor is to generate dynamic range plots as proposed in (Lin et al. 2022). In this method, we introduce randomly sampled aberrations based on Von-Karman atmospheric Power-Spectral Density (PSD) into the PWFS system, varying the total wavefront error from 0 to 3 rad rms. For each wavefront error value, we inject and reconstruct 100 phase screens. This process is repeated for the nPWFS with the linear reconstructor, the GS algorithm with unwrapping (25 and 200 iterations), and the modulated PWFS (\(r_{mod}=3\ \lambda/D\)). 
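For reference, a minimal sketch of this sampling procedure is given below. The screen generator is a simple spectral-method stand-in for the HCIpy screens used in the paper, and the commented-out `reconstruct_rms` call is a hypothetical placeholder for one of the reconstructors above; the sketch only illustrates the structure of the test, not the exact code.

```python
import numpy as np

def von_karman_screen(n, r0_pix, L0_pix=1e3, rng=None):
    """Random phase screen with a Von-Karman spectrum (simple spectral method)."""
    rng = np.random.default_rng() if rng is None else rng
    fx = np.fft.fftfreq(n)
    f2 = fx[:, None] ** 2 + fx[None, :] ** 2 + 1.0 / L0_pix**2
    psd = 0.023 * r0_pix ** (-5.0 / 3.0) * f2 ** (-11.0 / 6.0)
    psd[0, 0] = 0.0                                     # remove the piston term
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.real(np.fft.ifft2(np.sqrt(psd) * noise)) * n

# Scan of the total input wavefront error, 100 random screens per amplitude
rng = np.random.default_rng(0)
for target_rms in np.linspace(0.1, 3.0, 10):            # rad rms
    for _ in range(100):
        phi = von_karman_screen(128, r0_pix=32, rng=rng)
        phi *= target_rms / phi.std()                    # rescale to the target rms
        # error = reconstruct_rms(phi)                   # hypothetical reconstruction call
```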
The mean values of the 100 reconstructions for each input amplitude across all configurations are plotted in Figure 7, with the filled area representing the reconstructed error variance for each amplitude sample. It reveals that the GS algorithm outperforms the linear reconstructor for the nPWFS. When considering a satisfying reconstruction threshold as the point where reconstruction error reaches 10% of the input rms, then the GS algorithm is extending the linearity range by a factor of approximately 3. Moreover, it achieves dynamic performance comparable to the \(3\ \lambda/D\) modulated PWFS for input wavefront errors of up to 1.5 rad rms in the case of 25 GS iterations and up to 2.3 rad rms in the case of 200 GS iterations. For this test, the modulated PWFS shows best linearity as it is expected when analyzing linearity curves presented figure 5: modulated PWFS has the best dynamic range for low-order modes (which are affected by the modulation) and input turbulent phases used to produce curves Figure 7 have precisely higher amplitude for low order modes. Figure 5: Linearity curves for different Zernike modes, ranging from low order to high order. GS algorithm is run with 25 iterations in this case. Figure 6: Linearity curves for the Zernike mode 19 for different numbers of iterations while performing the GS algorithm. Number of iterations are written on the top-right part of the curves. We have demonstrated that the GS algorithm combined with an unwrapping step can drastically improve the nPWFS dynamic range. Hence, it is possible to use this technique to reconstruct more accurately larger amplitude phases. Still, it is important to analyze how noise propagates through this reconstructor with respect to the linear framework, as one of the motivations to use the nPWFS is the better sensitivity with respect to noise. ### Noise propagation through reconstruction To investigate the way noise spreads across different reconstructors, the four configurations outlined in the preceding section will be used : a nPWFS with a linear reconstructor, a modulated PWFS, a nPWFS with a GS reconstructor without phase unwrapping, and a nPWFS with a GS reconstructor combined with phase unwrapping. For this study, only photon noise propagation will be analyzed, as it is a fundamental noise which cannot be mitigated by technological improvement. The effect of noise propagation will simply be evaluated by introducing noise into the measurements and reconstructing the signals considering the different reconstructors. Once again, the idea is not to run an extensive simulation study on noise propagation through the GS algorithm in order to assess a large parameter space, but rather to highlight the basic sensitivity performance of this reconstructor with respect to the nPWFS and the modulated PWFS. To keep the analysis relatively simple, we study two configurations: noise impact on reconstruction of a flat wavefront (often referred to as reference intensities) and in the case of a turbulent phase screen with an amplitude of 0.75 radians rms (outside nPWFS linearity range but low enough to ensure that phase is not wrapped). To assess noise propagation relative to these configurations, the following procedure was held in simulation: for a given number of photons available for the measurement, the mean variance of each mode after reconstruction is estimated by averaging the variance of reconstructed modes over 200 noise realizations. 
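As an illustration of this Monte-Carlo procedure, photon noise can be injected directly on a noiseless nPWFS frame and the variance of the reconstructed modal coefficients accumulated over the realizations. In the sketch below, `reconstruct` and `project_on_modes` are placeholders for one of the reconstructors above and for the projection on the 500 Zernike modes; they are not functions from the paper.

```python
import numpy as np

def modal_noise_variance(I_d, n_photons, reconstruct, project_on_modes,
                         n_realizations=200, seed=0):
    """Mean modal variance induced by photon noise on a single nPWFS frame.

    I_d       : noiseless detector intensities
    n_photons : total number of photons available for the measurement
    """
    rng = np.random.default_rng(seed)
    frame = I_d / I_d.sum()                      # photon probability per pixel
    coefficients = []
    for _ in range(n_realizations):
        noisy = rng.poisson(frame * n_photons)   # one photon-noise realization
        coefficients.append(project_on_modes(reconstruct(noisy)))
    coefficients = np.array(coefficients)
    return np.var(coefficients, axis=0).mean()   # variance per mode, averaged over modes
```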
This operation is repeated for different number of photons, setting the number of iteration in the GS algorithm to 25. This number of iteration was once again arbitrarily chosen as it was found that the number of iterations doesn't change the impact of noise propagation. The phase reconstruction errors due to photon noise propagation in the case of a flat wavefront and a 0.75 radians rms turbulent screen with respect to incident flux are given figure 8. The incident flux is expressed in total number of photons available for measurements assuming a prefect transmission and quantum efficiency of 1 for the detector (as a reminder: reconstruction is done over 500 Zernike modes). For the cases of the linear reconstructor and a flat wavefront, the figure 8 (top) gives back a well-known behavior: the phase estimation error due to noise propagation through the reconstructor is lower for the nPWFS than for the modulated one. In the case of the GS algorithm, noise propagation behaves differently whether the unwrap step is done or not. Still in the context of noisy measurements for a flat wavefront: when not employing unwrapping, the GS algorithm propagates more error than the nPWFS and modulated PWFS for the high flux regime, and seems to perform slightly better than the modulated PWFS for the low flux regime. For even lower flux regimes (not shown here) GS would even appear to perform better than linear nPWFS reconstructor, but this behavior is misleading, in the sense that the GS algorithm will always provide wrapped measurements, preventing the amplitude of the reconstructed phase from diverging for extremely noisy measurements. Another way to explain this fact is to say that the GS comes with an intrinsic regularization that biases the results in this analysis. In the case where the wrapping step is added, the GS reconstructor builds more error in presence of noise compared to all the other studied configurations. Figure 8 provides the same analysis of noise propagation for the case of a turbulent phase screen (0.75 nm rms). We see the same trend, except that there is an offset for the linear nPWFS reconstructor which correspond to the reconstruction due to non-linearity error (not present for GS cases or the modulated PWFS). To better understand noise propagation behavior, it is useful to check, for one flux regime, how the error propagates along the modes. Such an example is given figure 9, for the case of 400 photons available per mode. It is clear that the unwrapping step Figure 8: Reconstruction errors due to photon noise. **Top** Case of a input flat wavefront. **Bottom** Case of an input turbulent wavefront of 0.75 radians rms. Figure 7: Dynamic range plots produced by sending for each input amplitude 100 randomly sampled phase screen following a Von-Kaman PSD and reconstructing them. drives the noise to drastically propagate on the low-order spatial frequencies. To conclude this brief study on noise propagation, the GS reconstructor combined with an unwrapping algorithm performs badly in presence of noise as it propagates more noise than the modulated PWFS itself. Therefore, the sensitivity advantage of the nPWFS is lost while using this reconstructor. However, this implementation is for now in its most basic form. Noise propagation could be mitigated in the reconstructor through 2 aspects: on one hand by using a more noise-robust GS algorithm (Levin & Bendory, 2020), and on the other hand by using an unwrapping algorithm with better behavior with respect to noise (Estrada et al., 2011). 
As mentioned in the introduction, using a nPWFS brings more advantages than only the gain in sensitivity with respect to noise, hence this current implementation of the reconstructor is still interesting for uses at high signal-to-noise ratio (SNR) (application examples are given in the conclusion). It is also important to notice that such a drawback for the GS algorithm was not raised in previous study proposing its implementation for reconstructing the curvature wavefront sensor signals (Guyon, 2010). ### Broadband impact on performance It is crucial to take into account the potential influence of larger spectral bandpass measurements on the GS algorithm when reconstructing the EM field, as this technique assumes the coherence of the field. The nPWFS is achromatic in phase (Fauvarque, 2017), meaning that it measures the phase of the incoming wavefront independently of the wavelength of the light (dispersion effects are neglected here). However, the amplitude of the wavefront does depend on the wavelength as aberrations present in the system and the turbulence usually introduce a fixed optical path difference (OPD) across all wavelengths. Hence, the measured signal will scale proportionally with the wavelength. This leads to 2 advantageous properties: for a flat wavefront, all the wavelengths will give the same measurements. In the right conditions, the closed loop can therefore converge towards a null-phase. Secondly, by choosing the central wavelength for the reconstruction, it allows the scaling of the phase to compensate between larger and smaller wavelength, giving back the monochromatic measurement in the case of the linear range (providing a flat spectrum). Overall, it is known that bandwidth has a limited impact on nPWFS measurements in general, and therefore it seems reasonable to expect the same for the GS reconstructor despite the fact that it assumes monochromatic measurements. A thorough investigation of the broadband impact on reconstruction would require an extensive study assessing various factors such as the amplitude of the measured phase, sampling used for the measurements, and the bandwidth. Due to the scope limitations of the paper, a brief example will be given to illustrate the impact of broadband measurement on the GS algorithm using the same configuration as in the previous section, but providing the algorithm measurements recorded by a polychromatic light having a flat spectrum ranging from 550 \(nm\) to 800 \(nm\) (bandwidth around 35%, sampled with 50 points in our simulation). To run the GS algorithm, the nPWFS model is simulated as working at a monochromatic wavelength, set as the broadband central wavelength (675 \(nm\)). Linearity curves for Zernike mode 19 in the case of different numbers of GS iterations are again displayed, but adding the one corresponding to the reconstruction of a broadband measurement (figure 10). This figure demonstrates the limited impact of the broadband measurement on the reconstruction, despite the large bandwidth chosen. To confirm the GS algorithm robustness to broadband measurement, the reconstruction error as a function of iteration number for a turbulent screen in the case \(D/r_{0}=4\) is plotted figure 11. We see that the reconstruction error stabilizes to an reconstruction error slightly larger than the monochromatic case. Overall, this short study points out that broadband measurements should not jeopardize the GS reconstruction scheme. 
We also mention that a polychromatic implementation of the GS algorithm could be imagined Fienup (1999), but it will comes at the expense of the computational time. As the GS reconstruction scheme proposed in this paper is highly model-dependent, it is important to show that our technique is robust enough to model errors so it can be implemented on a real experimental setup. This will be achieved with the next Figure 11: Impact of broadband measurement on phase reconstruction. Input phase: turbulent screen for a case \(D/r_{0}=4\) (colorbar in radians). Figure 10: Impact of broadband measurement on linearity curves for Zernike mode 19. Figure 9: Modal photon noise propagation for a case of 8000 photons available for the measurements. section, in which we provide a experimental demonstration of our reconstructor. ## 4 Experimental demonstration The goal of this section is to present a laboratory demonstration of the GS algorithm achieved on the SEAL optical testbed at the University of California Santa Cruz. ### Experimental setup SEAL is an extreme adaptive optics testbed composed of several deformable mirrors (DM), wavefront sensors and coronagraphic branches (Jensen-Clem et al., 2021). A schematic layout of the SEAL testbed for the nPWFS subsystem only is presented in figure 12. The SEAL components relevant for the experiment described here are the following ones: * Source at \(\lambda\) = 635 \(nm\). * Spatial Light Modulator (SLM): 1100 pixels across pupil diameter (van Kooten et al., 2022). * IRISAO Segmented deformable mirror (DM): 6 segments across pupil diameter. * Low-order ALPAO DM: 9 actuators across pupil diameter. * High-order BMC DM: 24 actuators across pupil diameter. * Focal plane camera at 2.3 Shannon sampling. * Double rooftop nPWFS (Lozi et al., 2019) with 106 pixels across pupil diameter (same sampling as the simulations shown in this paper). To build a reliable model of the SEAL nPWFS and run the GS algorithm, two quantities are required: an image of the pupil and the shape of the pyramid mask. To obtain an image of the pupil, large tip-tilt offsets are applied on the ALPAO DM in order to move the PSF away from the pyramid tip (\(\sim 20\lambda/D\)) and place it in each nPWFS quadrant successively. By doing so, four images of the pupil are obtained through each side of the pyramid mask. The SEAL pupil is then computed by re-centering and averaging these four images, with an estimated accuracy below one pixel. An image of the SEAL pupil measured through this method is given on the top-left of figure 1, the segments gaps coming from the IRISAO DM are clearly visible. In order to compute the pyramid shape, the four pupil images are simply registered in order to produce the corresponding tip-tilt for each face of the pyramid mask. The phase to DMs (ALPAO and BMC) registration was done the following way: the central actuator of the DM is pushed and reconstructed and then pulled and reconstructed again. Difference of the images corresponds to the phase of this actuator. Then a waffle pattern is sent on the DM and reconstructed in order to register actuators positions. Finally the phase of the central actuator is fitted with a Gaussian function and duplicated at the other actuators positions. This calibration process requires only 4 images (2 for the central actuator and 2 for the waffle). In both cases of the ALPAO and BMC DM, the phase reconstructed is largely over-sampled (106 pixels across versus 23 actuators for the BMC). 
The nPWFS and associated GS algorithm is routinely used on the SEAL testbed with both DMs as the standard way to flatten the wavefront in order to correct for quasi-static aberrations, achieving a wavefront error of about 17 nm rms (as measured by the nPWFS). The nPWFS image after closed loop is given in Figure 13 (left). A high frequency pattern coming from the BMC DM is clearly visible (a well known effect, called the print-through or quilting effect). The corresponding residual phase is reconstructed from the reference image (best flat after closed loop on static aberrations) and propagated in the model. The obtained image is displayed Figure 13 (right), showing that the model exhibits a high fidelity with measurements. Among the small differences that can be spotted: print-through effect seems slightly underestimated in the simulated image (most likely because field-of view of simulated nPWFS is smaller than the real one, acting like a spatial low-pass filter) and a faint ring pattern can be seen between the two bottom pupil images of the true image (most likely coming from a dust particle on the glass pyramid). It is worth mentioning that all the results presented in the following subsections are obtained for measurements done at high SNR. ### Linearity curves As a first demonstration of the GS algorithm performance on the SEAL testbed, some of the linearity curves obtained in simulation in the previous section are reproduced. This study is done the following way: Zernike modes are sent with the SLM, in order to have a good knowledge value of the input phase. Following the same procedure as before, the linear reconstructor is calibrated with the Zernike modal basis, hence the reconstruction directly gives the value of each of the modes. For the GS algorithm, the phase is reconstructed for each pixel and then projected on the Zernike basis. Linearity curves obtained for the Zernike mode 19 and 150 are given in Figure 14, in the case of 200 iterations used for the GS algorithm. The conclusions from the simulations are confirmed in this experiment: the GS demonstrates a higher dynamic range than the linear reconstructor. However, for higher order modes, the GS algorithm without phase unwrapping seems to Figure 12: A simplified layout of the SEAL testbed for the nPWFS subsystem. Figure 13: **Left: nPWFS image on SEAL after closed loop. nPWFS estimated wavefront error is 17 nm rms. Right Corresponding simulated image. The same scaling is used for both images.** under-perform compared to simulation (Figure 5). This could be explained by model errors which are slightly affecting performance. ### Example of phase reconstruction and close loop As a demonstration of GS capabilities to reconstruct strong phase aberrations on the bench, a \(D/r_{0}=32\) phase screen is generated on the SLM (bottom-left of figure 15). The corresponding nPWFS image recorded is given in the top-left of the same figure, and the reconstructed unwrapped phase after 200 iterations is displayed on the bottom-right corner. Figure 15 also shows a simulated image of the nPWFS signal, produced by simply propagating the reconstructed phase in the nPWFS model. Despite the fact the simulated image and the real image are almost identical, the phase is largely underestimated (more than a factor \(2\), this effect was also observed in the simulations presented in the previous section). 
This is explained by the fact that the nPWFS saturates: for an increasing input phase, the signal reaches a point where it almost does not change anymore. A simple example of such saturation is the case of a tip-tilt aberration: once the PSF is moved several tens of \(\lambda/D\) in one quadrant, only one pupil image of the nPWFS is illuminated. Displacing the PSF even further away will almost have no impact on the measurements, as the illuminated pupil image is already concentrating all the flux. The saturation is an important effect that will limit the reconstruction range for any kind of linear but also non-linear reconstructor, as it implies that two different phases can lead to the same measurements. Hence, the saturation seems to be an intrinsic limitation in the nPWFS measurement, and inverting more accurately large amplitude aberrations would require extra knowledge on the phase to be measured. A potential solution to even further improve the dynamic range of GS reconstruction will be sketched in the next section. Reconstructed phases at different iteration steps are displayed in figure 16. Once again the wrapped phases for 1, 200, and 2000 iterations are plotted on top row, and corresponding unwrapped phases are shown on the bottom row. As shown before, the phase estimation improves with iterations, but it seems that the estimation stops improving only after a few hundreds of iterations, instead of few thousands in simulations. Hence, the reconstructed phase for 200 and 2000 iterations are highly similar. Once again, this could be explained by the fact of differences between the nPWFS on the SEAL testbed and its model used for reconstruction. It is also possible to compare the GS reconstruction with the linear reconstruction. To do so, a push-pull interaction matrix was measured on the bench by sending the first 500 Zernike Figure 16: **Top:** Wrapped phases reconstructed by the GS algorithm on SEAL. **Bottom:** Corresponding unwrapped phases. Figure 14: **Top:** Linearity curve for \(Z^{19}\) obtained on SEAL. **Bottom:** Linearity curve for \(Z^{150}\) obtained on SEAL. GS algorithm is used with 200 iterations. Yellow dots are highlighting the fact these curves were obtained on the SEAL testbed. Figure 15: **Top-left:** nPWFS image on SEAL testbed.**Top-right:** nPWFS simulated image (propagating reconstructed phase through nPWFS model). **Bottom-left:** Input phase on SLM. **Bottom-right:** Reconstructed unwrapped phase with 200 iterations for the GS algorithm. modes on the SLM. The command matrix is then computed by taking the pseudo inverse of the interaction matrix and used to reconstruct the signal. The comparison between the linearly reconstructed phase and the GS reconstructed unwrapped phase projected on the first 500 Zernike modes are shown Figure 17. It is clearly demonstrating that our GS reconstruction shows also a significant improvement compared to the linear reconstruction on the bench. To conclude the testbed demonstration, we present results of a closed loop run using the BMC DM (controlling all the modes) on the static turbulent screen presented in Figure 15. The linear reconstructor uses a push-pull zonal interaction matrix, and the GS reconstructor uses 25 iterations with phase unwrapping. For closed loop AO control, a simple integrator controller with a loop gain of 0.5 was used. The PSFs obtained after 12 closed-loop steps are presented in figure 18. 
In Figure 18, we also present the PSF corresponding to the best flat after closing the loop with the nPWFS on bench aberrations (with the BMC DM), and the uncorrected PSF corresponding to the atmospheric phase displayed on the SLM. The best-flat PSF exhibits a faint vertical light pattern crossing its core, which comes from diffraction effects due to the SLM. The closed-loop test clearly confirms the extended dynamic range of our reconstructor by showing that the closed loop with the GS reconstructor out-performs the closed loop in the linear framework (which ends up diverging after a few tens of steps). It also shows that, despite saturation effects, the AO closed-loop scheme helps to correct high-amplitude phases with the nPWFS combined with the GS algorithm. As the wavefront error drastically improves after bootstrap, one could imagine a closed-loop scheme in which the number of GS iterations used for reconstruction is reduced once the system reaches the residual-phase regime. These experimental tests demonstrate the GS reconstructor performance on an optical bench, and show that it provides an improvement over the linear reconstructor.

## 5 Beyond the non-modulated PWFS saturation

As described in the previous section, the dynamic range of the GS algorithm seems to be limited by the saturation of the nPWFS measurements. As shown, we can find different phases (in our case: the same shape, but different amplitudes) that give substantially the same measurements on the nPWFS. Therefore, high-amplitude input phases bring a degeneracy in the measurements that seems to be unsolvable, no matter what non-linear reconstructor is considered. To push the dynamic range further, extra information on the phase is needed. To do so, the following strategy is proposed: using a second sensor, a focal-plane camera located just before the pyramid tip, as shown in the schematic of Figure 19. A technique using a focal-plane image to push the PWFS dynamics has already been proposed in (Chambouleyron et al., 2021). Here, the PSF image delivered by this camera provides the amplitude of the EM field in the focal plane, allowing this extra information to be added to the GS algorithm procedure. In fact, the EM amplitude recorded at the focal plane can be used to replace the simulated focal-plane EM field amplitude during back-propagation (between steps 1 and 2 in Figure 1) and during forward propagation (between steps 3 and 4 in Figure 1). To do so, the simulated amplitude of the EM field at the PWFS apex is replaced by the square root of the PSF measurement, while the simulated EM phase is kept. We call this reconstruction the focal-plane assisted GS (f-GS).

The f-GS reconstructor was tested in simulation, using the same system as in section 3 and, as input phase, the atmospheric screen used in Figure 15 (case \(D/r_{0}=32\)). For this input phase, a convergence plot similar to Figure 2 is presented in Figure 20 for the GS reconstructor and the f-GS reconstructor (both being unwrapped).

Figure 17: **Left:** Input turbulent phase injected on the SLM in a case \(D/r_{0}=32\), projected on the first 500 Zernike modes. **Middle:** Linear reconstruction. **Right:** GS reconstruction after 2000 iterations.

Figure 18: Closing the loop with the BMC DM on a static atmospheric screen. **Top-left:** PSF corresponding to the best flat on the SEAL testbed. **Top-right:** Uncorrected PSF for an input phase \(D/r_{0}=32\) on the SLM. **Bottom-left:** PSF after 12 closed-loop steps using the GS reconstructor. **Bottom-right:** PSF after 12 closed-loop steps using the linear reconstructor.

Figure 19: The focal plane assisted GS algorithm for the nPWFS.
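Before turning to the results, the amplitude-replacement step that defines the f-GS can be sketched by reusing the toy nPWFS model given earlier. The FFT-based propagation, the `pupil` and `mask` arrays, and the exact placement of the substitutions are illustrative assumptions rather than the actual reconstruction code; the key point is that measured amplitudes are imposed at both the focal plane and the detector plane while the simulated phases are kept.

```python
import numpy as np

def fft2c(e):
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(e)))

def ifft2c(e):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(e)))

def fgs_iteration(phase_est, pupil, mask, det_image, psf_image):
    """One focal-plane assisted GS iteration.
    det_image: measured nPWFS detector intensity; psf_image: measured PSF just before the pyramid tip.
    Dropping the two 'amp_psf' substitutions recovers a plain GS iteration."""
    amp_psf = np.sqrt(psf_image)
    amp_det = np.sqrt(det_image)

    # forward propagation with the current phase estimate
    e_pup = pupil * np.exp(1j * phase_est)
    e_foc = fft2c(e_pup)
    e_foc = amp_psf * np.exp(1j * np.angle(e_foc))       # impose measured focal-plane amplitude
    e_det = ifft2c(e_foc * np.exp(-1j * mask))           # through the pyramid to the detector plane
    e_det = amp_det * np.exp(1j * np.angle(e_det))       # impose measured detector amplitude

    # backward propagation to the pupil
    e_foc = fft2c(e_det) * np.exp(+1j * mask)            # undo the pyramid phase mask
    e_foc = amp_psf * np.exp(1j * np.angle(e_foc))       # impose focal-plane amplitude again
    e_pup = ifft2c(e_foc)
    return np.angle(e_pup) * (pupil > 0)                 # updated (wrapped) phase estimate

# usage (with the pupil/mask helpers defined above):
# phase_new = fgs_iteration(phase_old, pupil, mask, measured_image, measured_psf)
```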
The f-GS outperforms the standard GS algorithm in two ways. First, the convergence speed is much faster, and a single iteration is actually enough to provide a decent phase reconstruction. This represents a significant improvement, as it could allow the GS approach to potentially be used in real time. Secondly, the overall reconstruction is also improved, showing that this strategy is indeed a solution to push the reconstruction beyond the nPWFS saturation. The corresponding reconstructed unwrapped phases after 1 iteration and 4000 iterations for the GS and the f-GS are given in Figure 21. After only one iteration, the f-GS reconstructed phase is already very similar to the input phase, whereas the GS one is largely underestimated (the f-GS reconstruction error is three times smaller at the first iteration). In fact, even after 4000 iterations, the GS algorithm does not provide a better phase estimation than the f-GS after one iteration. Hence, the f-GS could be a powerful tool to drastically increase the convergence speed of the algorithm and its performance in the case of high-amplitude aberrations. These advantages come with a more complicated practical implementation, requiring the flux to be split (an operation that could introduce non-common path aberrations) between two synchronized cameras. The details of such an implementation will be considered in future studies, with the main motivation of building a demonstration of the f-GS on the SEAL testbed. We note that the f-GS may not be needed in the case of a classical AO closed-loop scheme, as the nPWFS would work around residual phases and therefore far from the saturation regime after bootstrap is completed. Nevertheless, such an f-GS setup could open the nPWFS to a wider range of applications that require wavefront sensing on large phases.

## 6 Conclusion

This paper introduces a new way to invert the nPWFS measurements: it relies on the numerical model of the sensor and the use of the GS algorithm. We demonstrated that the GS-based reconstructor, along with an unwrapping algorithm, can drastically improve the dynamic range of the nPWFS reconstruction compared to the linear framework, extending its linearity range by a factor of approximately 3. Moreover, it can achieve dynamic performance comparable to the 3 \(\lambda/D\) modulated PWFS up to 2 rad rms. This technique was successfully demonstrated on the SEAL testbed at UCSC, on which it was used to close the loop on high \(D/r_{0}\) turbulent phase screens. This reconstructor, however, comes with two drawbacks: it has a high computational complexity, which prevents it from being used for real-time control purposes (in its current implementation), and its noise propagation is worse compared to the linear reconstructor. However, only the most basic implementation of GS and phase unwrapping was studied in this paper with respect to noise propagation. A better-suited GS algorithm combined with a noise-robust phase unwrapping algorithm could improve the noise propagation. Despite having been tested only at high SNR and at slow speeds, the GS reconstructor is already useful in several areas of an AO system: for example, calibration (as is done routinely on the SEAL testbed), segment/fragment phasing, and highly sampled phase reconstruction.
It could also support a second-stage nPWFS running in real time with a linear reconstructor, through soft real-time reconstruction of the phase in order to compute optical gains, and it could be used to reconstruct telemetry data with higher fidelity. The most promising path towards a real-time implementation of this technique is the focal-plane assisted GS, which we demonstrated in simulation in section 5. This technique exploits the fact that the GS algorithm is well suited for sensor fusion, and it seems to significantly increase the convergence speed of the algorithm while allowing us to measure high-amplitude phases with better accuracy. The next steps for this approach are to deepen the understanding of the f-GS and to start implementing it on the SEAL testbed, using a PSF image as an extra sensor. Finally, the GS algorithm has been applied in this paper to the nPWFS only, but we argue that it can be used for all Fourier-filtering WFSs and maybe beyond. More generally, this reconstructor belongs to a wider class of reconstructors: the ones using a numerical model of the sensor and iterative algorithms to increase the dynamic range.

## Acknowledgments

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The document number is LLNL-JRNL-850197.
2310.00395
Analysis of system capacity and spectral efficiency of fixed-grid network
In this article, the performance of a fixed grid network is examined for various modulation formats to estimate the system's capacity and spectral efficiency. The optical In-phase Quadrature Modulator structure is used to build the fixed grid network modulation, and the homodyne detection approach is used for the receiver. Data multiplexing is accomplished using the Polarization Division Multiplexed technology. 100 Gbps, 150 Gbps, and 200 Gbps data rates are transmitted under these circumstances utilizing various modulation formats. Various pre-processing and signal recovery steps are explained by using modern digital signal processing systems. The achieved spectral efficiencies for PM-QPSK, PM-8-QAM, and PM-16-QAM were 2, 3, and 4 bits/s/Hz, respectively. The PM-QPSK, PM-8-QAM, and PM-16-QAM modulations achieve system capacities of 8-9, 12-13.5, and 16-18 Tbps and reach transmission distances of 3000, 1300, and 700 kilometers, respectively, with an acceptable Bit Error Rate of less than or equal to 2×10^-3. Peak optical power for received signal detection and the full width at half maximum are noted for the different modulations under the fixed grid network.
Adarsha M, S. Malathi, Santosh Kumar
2023-09-30T14:33:19Z
http://arxiv.org/abs/2310.00395v1
# Analysis of System Capacity and Spectral Efficiency of Fixed-Grid Network ###### Abstract In this article, the performance of a fixed grid network is examined for various modulation formats to estimate the system's capacity and spectral efficiency. The optical In-phase Quadrature Modulator (IQM) structure is used to build a fixed grid network modulation, and the homodyne detection approach is used for the receiver. Data multiplexing is accomplished using the Polarization Division Multiplexed (PDM) technology. 100 Gbps, 150 Gbps, and 200 Gbps data rates are transmitted under these circumstances utilizing various modulation formats. Various pre-processing and signal recovery steps are explained by using modern digital signal processing systems. The achieved spectrum efficiencies for PM-QPSK, PM-8 QAM, and PM-16 QAM, respectively, were 2, 3, and 4 (bits/s)/Hz. Different modulation like PM-QPSK, PM-8-QAM, and PM-16-QAM each has system capacities of 8-9, 12-13.5, and 16-18 Tbps and it reaches transmission distances of 3000, 1300, and 700 kilometers with acceptable Bit Error Rate (BER\(\leq\) 2\(\times\) 10\({}^{-3}\)) respectively. Peak optical power for received signal detection and full width at half maximum is noted for the different modulations under a fixed grind network. Fixed-grid network, System capacity, Spectrum efficiencies 10.5121/ijenc.2023.15506 ## 1 Introduction An Optical Network is a communication network used for the exchange of information through an optical fiber cable between one ends to another. It is one of the quickest networks used for data communication[1]. Optical networks offer increased capacity and reduced costs for new applications such as the Internet, video and multimedia interaction, and advanced digital services [2]. Global demand for high data rates is rising, so researchers are looking for different ways to supply gigabit capacity [3]. There are various classes of optical communication networks. For instance, multiple wavelengths per optical fiber network architecture are used in central, metropolitan, or wide-area applications to connect thousands of users with a wide range of transmission speeds and capacities. Sending multiple wavelengths through a fiber 1300 to 1600 nm range at the same time is a powerful feature of an optical communication link [4][5]. The technology of combining multiple wavelengths onto a single fiber is known as wavelength division multiplexing (WDM)[6]. The use of the WDM principle in conjunction with optical amplifiers results in communication links that allow users from all over the world to communicate quickly [7]. The outdated 10 Gbps transport optical networks were updated to 40-100 Gbps networks to accommodate the varying bandwidth requirements of a variety of services [8]. Within the same fiber, simultaneous transmission of 10/40/100 Gbps on various wavelengths is possible. The bulk discount on high-bit-rate transponders could lower the overall cost of transmission[9]. In fixed-grid networks, a particular kind of transceiver is selected, and it serves a single demand. It fixes the data rate, range, and spectrum used. Fixed networks use 50 GHz channel spacing because they are based on the fixed grid as specified by the ITU-T[10]. Commercial optical fiber communication systems' transmission capacities have been growing at a rate of 140% annually. The trend is likely to persist due to the anticipated demand sparked by the launch of new data communications and high-definition video services[11]. 
The introduction of commercial transport networks typically occurs 5 to 10 years after the relevant study, and their transmission capacities have also continued to rise[12]. If the current growth rate is maintained, the commercial capacity will reach its limit within the next ten years. At present, Time Division Multiplexing (TDM) and WDM technologies are used to meet the demand for more capacity by multiplexing several channels. To handle higher modulation impairments, Digital Coherent Transmission (DCT) technologies are used to provide additional spectral efficiency[13]. WDM and TWDM [14] will be the top two technology choices for back-haul and front-haul access network design and deployment.

### Motivation

By 2022, traffic between machines and connected devices was projected to increase globally at a compound annual growth rate of 47%[15][16]. By 2023, Cisco predicted that 50 billion fixed-line, mobile, and machine-to-machine internet connections would need between 44 and 110 Mbps of bandwidth per user to access modern applications[17][18]. These predictions indicate a strong demand for higher data rates, which drives the investigation of the C-band capacity for upcoming demand.

### Problem Statement

Better Optical Signal-to-Noise Ratio (OSNR) and spectral efficiency can be obtained by deploying very expensive fiber and amplification hardware such as EDFA. However, replacing the underlying cable is very costly as well as labor-intensive. Because of that, the traditional network architecture is kept as it is while improving the system capacity. Additionally, the following major research gaps are addressed in this paper:

* What opportunities are available to boost C-band spectral efficiency and system capacity?
* What are the peak optical power requirements for the different modulation schemes in fixed grid networks?
* Which information coding gives the best BER for the different modulations?

### Paper Contribution

Considering these predictions, a fixed grid optical network is constructed and simulated in this article to analyze spectral efficiency and system capacity. In the transmitter section, differential coding and gray coding are introduced. As a result, BER effectiveness is evaluated and compared. Additionally, various modulation schemes are transmitted and their OSNR versus BER behaviour is observed. The rest of this article is organized as follows. Section 2 reviews the evolution of the Optical Communication Network (OCN) and related work. Section 3 introduces the details of the fixed grid. Section 4 presents the design of the fixed-grid network. Section 5 describes the simulation of fixed-grid networks. Section 6 presents and discusses the system capacity and spectral efficiency obtained from the fixed-grid network simulations. Finally, Section 7 presents the conclusions of the study.

## 2 Evolution of Optical Communication Network

When optical fibers were first utilized for optical communications in 1977, the growth trend changed. The bit-rate-distance product \(BL\), where \(B\) is the bit rate and \(L\) is the repeater spacing (the distance after which an optical signal must be regenerated to maintain its fidelity), is a commonly used figure of merit for communication systems[19]. Table 1 indicates how the technology advanced from 1977 to 2015. The first generation of Optical Communication (OC) systems used GaAs semiconductor lasers with wavelengths of about 850 nm inside their optical transmitters.
Before reaching an optical receiver, the optical bit-stream was transferred over graded-index multimode fibers and transformed to the electric domain using a silicon photodetector.System designers were motivated by the larger repeater spacing compared to the coaxial system's 1 km spacing since it reduced the installation and maintenance expenses associated with each repeater. One can observe that attenuation gradually decreases from generation to generation which allows signals to travel for long distances without using the repeater. It's worth noticing that from first to third generation TDM is used. By 1990, system designers had shifted their attention to system capacity, fiber loss, and fiber dispersion. The fiber loss is overcome by periodic optical amplification similarly fiber dispersion is controlled by periodic \begin{table} \begin{tabular}{c|c|c|c} \hline \multicolumn{4}{c}{**Megabits per second**} \\ \hline **Details** & \(\mathbf{1^{st}\,G}\) & \(\mathbf{2^{nd}\,G}\) & \(\mathbf{3^{rd}\,G}\) \\ \hline Optical Fiber & Multi-Mode Fiber & Single Mode Fiber & Single Mode Fiber \\ \hline Link /Topology & P-T-P link & Ring, P-T-P & Ring, P-T-P \\ \hline Multiplexing & Bit-wise multiplexing (TDM) & Byte-wise multiplexing (TDM) & Bit-wise multiplexing (TDM) \\ \hline Data rate & 1-45 Mbps & 50 M bps – 2 Gb bps & 50 Mbps – 10 Gbps \\ \hline Reach & 10Km & 50Km & 50Km \\ \hline Attenuation & 3 dB/km & 1 dB/km & 0.2 dB/km \\ \hline Year & 1977-1980 & 1981-1987 & 1988-1994 \\ \hline \multicolumn{4}{c}{**Gigabit per second**} \\ \hline **Details** & \(\mathbf{4^{th}\,G}\) & \(\mathbf{5^{th}\,G}\) & \(\mathbf{6^{th}\,G}\) \\ \hline Optical Fiber & Single Mode Fiber & Single Mode Fiber & Single Mode Fiber \\ \hline Link /Topology & Ring, Mesh & Ring, Mesh & Ring, Mesh \\ \hline Multiplexing & WDM & WDM /PDM & WDM /PDM \\ \hline Data rate & 2.5 G bps – 40 Gbps & 10 Gbps – 100 Gb bps & 10 G bps – 200 G bps \\ \hline Reach & 50Km & 50Km & 50Km \\ \hline Attenuation & 0.2 dB/km & 0.2 dB/km & 0.2 dB/km \\ \hline Year & 1995-2001 & 2001-2010 & 2010-2015 \\ \hline \end{tabular} \end{table} Table 1: Evolution of Optical Communication Network dispersion compensation. Finally, system capacity is handled by using WDM and PDM. In 2011, 64-T bit/s transmission over 320 km of single-mode fiber was achieved using 640 WDM channels spanning both the C and L bands with 12.5-G Hz channel spacing. Each channel has two polarization-multiplexed 107-G bit/s signals that were coded using the Quadrature Amplitude Modulation (QAM) scheme [20]. The greatest capacity Discrete Multi-tone Modulation (DMT)signal transmission across 2.4-km SMF with the BER under SD-FEC constraint of 2.4 X10-2 was accomplished using a four-channel WDM 256.51Gbps 16-QAM-DMT short-reach optical-amplifier-free interconnection [21]. ### Related Works In the present generation (2015 - 2022) to overcome transmission losses, digital coherent receivers are used[22]. The digital coherent receiver relies on robust digital carrier-phase estimation, one can use several spectrally effective modulation patterns[23]. Furthermore, the phase information is kept after detection, we can use Digital Signal Processing (DSP) [24] to equalize linear transmission impairments such as Group-Velocity Dispersion (GVD)[25] and Polarization-Mode Dispersion (PMD)[26] of transmission fibers. 
Recently, 100-Gbit/s transmission systems using Quadrature Phase-Shift Keying (QPSK)[27] modulation, PDM, and phase-diversity homodyne detection aided by high-speed DSP have been developed and used in commercial networks[28]. It is possible to compensate for chromatic dispersion by periodically deploying special optical fibers known as Dispersion Compensating Fibers (DCF)[29], allowing amplifiers and DCFs to handle multiple wavelengths simultaneously. \begin{table} \begin{tabular}{|c|c|c|} \hline **Challenges** & **Solutions** & **Referen** \\ \hline Speed, Quality & Coherent detection polarization multiplexing, digital processing, and multilevel modulations & [31][29] \\ \hline Performance and cost & Providing the most prominent factors of the Radio Over Fiber (ROF) architecture reduces the system installation costs. & [32] \\ \hline Fiber Nonlinearity & Estimating nonlinear noise power and OSNR induced via fiber nonlinearity by Long Short-Term Memory (LTSM) network. & [33] \\ \hline Reducing the communication degradation & Adaptive Field/ Digital Signal Processing & [34][26] \\ \hline Handling nonlinearity in Single-channel optical communication & Splitting the nonlinearity compensationis always advantageouswhen there are two or more spans. & [35] \\ \hline To Achieve ultrahigh-capacity fiber communications & Effective coherent multi-wavelength sources are used for the new generation of coherent fiber communication networks. & [36] \\ \hline To overcome data center network traffic & Photonic Integrated Circuits (PIC), improved fiber optic communication infrastructure, using the full spectrum range of fiber optic technologies, and signal modulation to resolve losses are used. & [37][38] \\ \hline Canceling Kerr-induced transformations to increase the capacity & Fiber information capability can be substantially improved concerning previous estimates. & [39] \\ \hline Nonlinearity in coherent optical communication & Proposed and demonstrated a basic nonlinear equalizer based on the Functional-Link of Neural Networks (FLNN). & [40] \\ \hline \end{tabular} \end{table} Table 2: Different Challenges in OCN and Its Solutions Modern WDM optical networks rely on spans to connect nodes. These are 70-100 km fiber lengths with amplifiers Erbium-Doped Fiber Amplifiers (EDFAs)[30] and DCFs. The total transmission range could be several thousand kilometers in this manner (without O-E-O conversion). The demand for bandwidth is increasing due to the explosive growth of Internet services such as video conferencing, Net-Fix, cloud computing, and mobile access with video clients. This requires an expansion of the transmission capacity of optical fibers and the development of next-generation high-speed optical networks. The various challenges faced by OCN are listed in Table 2 along with possible solutions. Where it addresses the network speed, quality, performance, and cost. To reduce communication degradation DSP is utilized in the receiver end. To enhance the network performance PIC is incorporated which makes an easy way to do the signal modulation and to resolve losses in the transmitter side. Unlike in the early days of DWDM systems, when an optical fiber's bandwidth was thought to be unlimited, the optical spectrum [41] will be a valuable commodity now in data centers, and the industry is now looking for ways to increase overall spectrum efficiency. Considering these facts we tried to explore increasing the spectral and system capacity. 
One way to overcome this issue is to have error-free communication and also use WDM. Therefore, we have implemented the channel coding in the transmitter part to reduce bit error rates. Both differential and gray coding are adopted for information encoding. Differential coding involves encoding data by representing the difference between consecutive values or samples rather than encoding each value independently. This can be particularly useful when the variations between successive data points are smaller compared to the absolute values themselves. The primary goal of differential encoding is to reduce redundancy in data representation, which can lead to more efficient storage and transmission. It is especially effective when dealing with data that exhibits smooth or gradual changes over time. The primary advantage of Gray code is that it reduces the likelihood of errors when transitioning from one value to the next. In traditional binary representation, transitioning from one value to another can result in multiple bits changing simultaneously, which can lead to errors due to timing or noise. In Gray code, only one bit changes at a time, which reduces the chances of errors during transitions. Such a coding system is adopted for different modulations. Grey and differential codes systems OSNR requirements are observed. By keeping transmission distances, data rates, and fiber launch optical powers are used for the modulation formats are taken into consideration. Overall C-band capacity is investigated using different modulation techniques and also analyzed its spectral efficiency including each modulation max reach. ## 3 Details of the Fixed Grid Network The Conventional band (C band) has the lowest losses across the spectrum, so this band is used to transmit data over an extremely long distance. The C band includes wavelengths between approximately 1525 and 1565 nm. Wavelength allocation and standardization were set by ITU-T. A technique called Dense Wavelength Division Multiplexing (DWDM) is used. It makes it possible to transmit numerous optical signal carriers at various wavelengths through a single optical fiber. DWDM central frequencies are specified in the ITU-T G.694.1 guideline [42]. Figure 1: Fixed grid channel spacing DWDM wavelengths were first placed in a grid with an optical frequency spacing of exactly 100 GHz which is approximately 0.8 nm wavelength. Over the past ten years, numerous significant new developments have increased the capacity to keep up with the steadily growing traffic. For instance, core networks can transmit about 80 channels by compressing the channels and spacing that 50 GHz apart. To build a fixed grid network 50GHz channel spacing is used to transmit data rate. 193.10 THz will serve as the reference frequency for this design. A single carrier network is made up of a fixed grid (50 GHz) that transmits either a single line rate or a mixed line rate. Figure 1 illustrates the fixed grid network channel spacing. Capacity scaling is possible in a communication system by exploring modulation so Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM) are explored. To increase the bit rate per second Dual-Polarizations (DP) technique is adopted[29]. ## 4 Design of Fixed Grid Network The fixed grid network is shown in Figure 2 it consists of a transceiver, Optical Cross-Connect (OXC), and optical fiber. Different clients are connected by using mesh topology in this example. 
It's also possible to connect different clients with different topologies. An enhanced form of an optical network called a fixed grid optical network is made to deliver dependable, fast communication between numerous locations. They offer a practical method for developing an optical network without the use of pricey and complicated routing methods. Based on a mesh network architecture, fixed grid optical networks connect each node to several other nodes in a preset grid pattern. Each node in the network is directly connected to several other nodes, enabling effective communication between all of the nodes. High scalability, low latency, and dependable data transmission are benefits of employing a fixed-grid optical network. They are therefore perfect for services like streaming media, video conferencing, and long-distance communication. ### Fixed Gird Network Transceiver A transceiver is a transmitter and receiver combined in a single unit. Figure 3 gives the standard design schematic of the fixed grid transmitter. Which exhibits the complete design configuration of the DP-QPSK modulation formats. The transmitter unit divides the input bit sequence in half Figure 2: The architecture of fixed grid network evenly using a serial-to-parallel converter. Both even and odd parts are present in each sequence. For the QPSK, each bit sequence is transformed from binary signals into M-ary symbol sequences using Phase Shift Keying (PSK). Phase ambiguity is eliminated with QPSK/QAM modulation methods utilizing differential encoding[43]. The multilayer pulse is produced by the M-ary pulse generator. M-Ary pulse generator out will drive the IQ modulators I and II. An IQ modulator comprises two-phase modulators, a phase shifter, and two couplers cross couplers. Mach-Zehnder Modulator (MZM) is used to design the IQ modulator which works under the push-pull configuration. Figure 4: Fixed grid receiver Figure 3: Fixed grid transmitter Continuous light is used as the input, and a Polarization Splitter (PS) divides it into two orthogonally polarized, equally powerful beams of light. An IQ modulator is used to modulate the two orthogonal polarization lights. Each phase modulator handles data streams coming from the M-ary pulse generator. At the end of the IQ-modulator, the modulated signal is going to be combined with the help of a polarization combiner so the modulated output is ready for communication. The DP-QPSK/DP-QAM optical signal is demodulated using a coherent detection technique which is shown in Figure 4. The incoming data stream is divided into two by the PS. Two 90\({}^{\circ}\) optical hybrids are used to combine the signal, which has both X and Y polarization, with the local oscillator (LO)1. Information about intensity is generated from phase difference information by combining the optical carrier-containing data with the LO signal. The Balance Detector's (BD) light is transformed into analog signals. A high-speed Analog to Digital Conversion (ADC) sampling transforms such signals into digital signals. After being precisely sampled, the signal can be recovered. Footnote 1: Local Oscillator: This is nothing but the laser, having similar properties as the source laser especially line width almost equal to the source laser. To overcome the loss that occurred from the transmitter to the reception section, a Digital Signal Processing (DSP) unit is used which is shown in Figure 5. The leading cause of signal transmission loss in optical fiber is Chromatic Dispersion (CD) [44][26]. 
The frequency domain is used to do CD compensation since it requires less computation when the compensation value is higher. As a result, data is first translated into the frequency domain, followed by multiplication by the inverse transfer function of the dispersion function, and finally, turn backs to the time domain. After CD compensation it enters the symbol clock recovery to overcome clock misalignment. Due to the independence of the transmitter output data clock and the A to D sampling clock, the clock's frequency and phase can differ. The symbol clock recovery algorithm determines the symbol clock frequency, decides on the best sampling point, and then resamples the data appropriately. Figure 5: DSP unit in the Receiver section It follows the double sampling rules which mean 2 samples/symbol. Thus, by doing this, it is possible to maintain synchronization between the sample clock and the receiver's launching symbol clock. Following the acquisition of the appropriate symbol clock for each polarization. There are two signals (one on each polarization) but due to the polarization's rotated state, the data is mixed. Polarization Mode Dispersion (PMD) can be resolved by using the inverse Jones matrix2. It will separate the signal's two orthogonal polarizations and compensate for signal loss. The Constant Modulus Algorithm (CMA) is used to compensate for the PMD. As a result, two polarization signals are apart but rotating as a result of carrier frequency offset and phase noise. The same theory underlies carrier phase recovery and Frequency Offset Compensation (FOC). Estimating the constellation diagram's rate of rotation is intended to be followed by the offset's removal. It is performed with the help of the Viterbi-Viterbi algorithm[44]. Footnote 2: Jones matrix: It gives the details of different polarization states. Finally, by applying the proper thresholds to the received constellation slices, one can identify the symbols (and bits) transmitted through the fiber. Depending upon the coding technique applied in the transmitter section an appropriate reverse de-coding needs to be used to extract the bit stream, which is shown in Figure 6. Reversing the differential encoding process is known as differential decoding. Taking into account the variations between succeeding values, it entails recovering the original data from the encoded or differentially encoded values.In the binary numbering scheme known as gray code, which is also referred to as reflected binary code or unit distance code, two successive values are separated by only one bit. Recovery of the original binary values from their Gray code is known as gray decoding. M-ary pulse generator involves both differential and gray coding techniques which is encoding the PSK sequence signal. Similarly, PSK/QAM decoder will extract the original signal from encoded data. Both differential and gray coding and decoding are introduced in this work and analyze the BER and OSNR. ### Optical Cross-Connect (Oxc) In the Older days, electronic signals are mostly responsible for all networking equipment's operation. That first optical signal was transformed into electrical, then it was amplified, regenerated, or switched before being converted back into optical signals [45]. For all varieties of optical networks, OXC is the most appealing key component. It switches at a very fast rate and with good reliability[46]. In networks, wavelength routing is provided via OXC, which is used to connect any two topologies. 
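Before moving on to the cross-connect hardware, the frequency-domain chromatic-dispersion compensation described at the start of this DSP discussion can be sketched in a few lines. The sampling rate, block length, and QPSK test signal below are assumptions for illustration; the 17 ps/nm/km dispersion value follows the simulation parameters listed later in the paper, and the sign of the transfer function depends on the Fourier convention used.

```python
import numpy as np

def cd_transfer(n_samples, fs_hz, length_km, sign, d_ps_nm_km=17.0, wavelength_nm=1550.0):
    """Chromatic-dispersion transfer function of 'length_km' of fiber (sign=+1),
    or its inverse used by the frequency-domain equalizer (sign=-1)."""
    c = 299792458.0
    lam = wavelength_nm * 1e-9
    d = d_ps_nm_km * 1e-6                                  # ps/nm/km -> s/m^2
    beta2 = -d * lam**2 / (2 * np.pi * c)                  # group-velocity dispersion, about -21.7 ps^2/km
    omega = 2 * np.pi * np.fft.fftfreq(n_samples, d=1.0 / fs_hz)
    return np.exp(sign * 0.5j * beta2 * omega**2 * length_km * 1e3)

def apply_filter(signal, transfer):
    """Multiply the signal spectrum by the transfer function and return to the time domain."""
    return np.fft.ifft(np.fft.fft(signal) * transfer)

# toy usage: disperse a 25 Gbaud QPSK sequence over 3000 km of SMF, then equalize it
fs = 2 * 25e9                                              # 2 samples/symbol, as in the DSP chain
rng = np.random.default_rng(0)
tx = np.repeat(np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 4096)), 2)
dispersed = apply_filter(tx, cd_transfer(tx.size, fs, 3000, sign=+1))
equalized = apply_filter(dispersed, cd_transfer(tx.size, fs, 3000, sign=-1))
# 'equalized' matches 'tx' up to numerical round-off
```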
There are two different types of OXC switches one is the digital switch which is opaque or hybrid the second one is transparent OXC[47]. Figure 6: The primary instance of demodulation With the use of electronic cross-connection technology, optical data streams are first changed into electronic data in the digital OXC switch, which is then transformed back into optical data streams. OXC operates specifically in the photonic field[48]. Figure 7 illustrates the creation of an OXC in optical fiber communications using de-multiplexed and multiplexed WDM channels. ## 5 Simulation of Fixed Grid Network Based on the OptiSystem V.18 simulation platform, the reference model of the fixed grid network is constructed as depicted in Fig. 2. The hypothetical center frequencies that are permitted are represented by a grid of frequencies. For DWDM systems, six channels with equal channel separations of 50 GHz are utilized. Fixed grid nominal central frequencies are supplied by 193.1 \(+\) n \(\times\) 0.05, where n is a positive or negative integer including 0 [5]. The central frequency of each channel is assigned which is mentioned in Table 3. The addition of optoelectronic components that operate at higher data rates while staying within the 50 GHz grid has increased channel capacity. In this design, the fiber loss is assumed to be 0.2 dB/km. The dispersion is 0.2 ps/nm.km. When the data rate is increased from 2.5 to 100 Gbps per wavelength while channel spacing remains constant. To generate 100Gbps, Dual-Polarization-Quadrature Phase Shift Keying (DP-QPSK) modulation scheme with 25 G baud rate and 2 Bits/Symbol is used. \begin{table} \begin{tabular}{|c|c|} \hline \multirow{2}{*}{**Channel Number**} & **Nominal central frequencies (THz) for spacing of** \\ & **each channel** \\ \hline 1 & 193.1 \\ \hline 2 & 193.15 \\ \hline 3 & 193.2 \\ \hline 4 & 193.25 \\ \hline 5 & 193.3 \\ \hline 6 & 193.35 \\ \hline \end{tabular} \end{table} Table 3: Central frequency allocation for each channel Figure 7: OXC implemented using 2X2 Optical switch The fixed grid optical network simulation parameter is expressed in Table.4 which involves the channel impairments, bit rate, baud rate, receiver sensitivity, and EDFA gain and noise figure. A similar setup is carried out for the PM-M-QAM modulation scheme. Where 150 Gbps data rate is generated by using PM-8-QAM modulation scheme with 25 G baud rate and 6 bits/symbol is used. With the 50GHz channel spacing maximum, 200 Gbps data rate is transmitted by using PM16-QAM scheme with 25 G baud rate with 8 bits/symbol used. ## 6 Result Analysis The QPSK modulation scheme is taken into consideration for a better understanding of and visualization of the constellation diagram in the different stages of the DSP receiver section. After dispersion compensation and non-linear compensation by acquiring the proper symbol clock for each polarization, the X and Y-polarization constellation diagrams are shown in Figure 8(a). We receive two signals, one for each polarization, but due to the rotated state of the polarization, they have mixed data. These problems are addressed by PMD compensation. The constellation diagrams after the Polarization De-multiplexing and PMD compensation which is expressed in Figure. 8(b). Where two signals are separated but they are rotating due to carrier frequency offset and phase noise. It can be overcome by using FOC and Carrier Phase Estimation (CPE) Figure8(c) shows the respective constellation diagrams. 
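The carrier-phase recovery step just mentioned relies on the Viterbi-Viterbi algorithm; a minimal blockwise sketch for QPSK with a constant phase offset is given below. The block length, noise level, and 20-degree offset are assumptions for illustration, and the inherent pi/2 phase ambiguity (cycle slips) is ignored in this toy version.

```python
import numpy as np

def viterbi_viterbi_qpsk(symbols, block=64):
    """Blockwise 4th-power (Viterbi-Viterbi) carrier-phase estimation for QPSK.
    Assumes the constellation {1, j, -1, -j}; returns corrected symbols and per-block phases."""
    corrected = np.empty_like(symbols)
    phases = []
    for start in range(0, symbols.size, block):
        blk = symbols[start:start + block]
        phi = np.angle(np.sum(blk ** 4)) / 4.0     # 4th power strips the QPSK modulation
        phases.append(phi)
        corrected[start:start + block] = blk * np.exp(-1j * phi)
    return corrected, np.array(phases)

# toy usage: QPSK with a 20-degree carrier-phase offset and additive noise
rng = np.random.default_rng(2)
tx = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 4096))
noise = 0.05 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))
rx = tx * np.exp(1j * np.deg2rad(20)) + noise
rx_corrected, phase_estimates = viterbi_viterbi_qpsk(rx)   # phase_estimates ~ 0.349 rad
```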
Finally, by applying the proper thresholds to the received constellation slices, we can identify the symbols (and bits) that were transmitted via the fiber. \begin{table} \begin{tabular}{|c|c|} \hline **Parameters Values** & **PM-QPSK** \\ \hline bit rate & 100 Gbps \\ \hline baud rate & 25 G-baud \\ \hline Fibre attenuation coefficient & 0.2 dB/km \\ \hline Fibre dispersion coefficient & 17 ps/nm/km \\ \hline Fibre differential group delay & 0.2 ps/km \\ \hline laser power & 14 dBm \\ \hline laser central wavelength & 1550 nm \\ \hline laser linewidth & 0.1MHz \\ \hline laser initial phase & 0\({}^{\circ}\) \\ \hline fiber launch power & 0 dBm \\ \hline EDFA gain & 8 dBm \\ \hline EDFA noise BW & 4 THz \\ \hline EDFA noise figure & 6 dB \\ \hline photodetector responsivity & 1 A/W \\ \hline photodetector dark current & 10 nA \\ \hline \end{tabular} \end{table} Table 4: Fixed grid network Simulation parameters Figure 9 shows the transmitter output for three polarization multiplexed gray-coded optical modulation formats that were evaluated. For PM- QPSK, PM- 8-QAM, and PM- 16-QAM the reported peak optical powers are -5.20962, -10.6693, and -14.63234 dBm respectively. The Full Width at Half Maximum (FWHM) is also observed to be 20.6, 14.6, and 10.7. An IQ Modulator is used in optical communication for transmitting data in the form of light pulses. It modulates light waves by combining in-phase and quadrature components of the data signal, resulting in a composite signal. This composite signal is then transmitted through an optical fiber. PM-16 QAM is an IQ Modulator that combines sixteen in-phase and quadrature components to generate sixteen light pulses. It has a very wide bandwidth and is capable of very high data rates. It is used in high-speed optical communication systems. By altering the M-Ary pulse generator, the IQmodulator can function as PM-QPSK and PM-QAM. Figure 8: Constellation diagrams at various DSP component levels. (a) After dispersion compensation and right symbol clock for each polarization (b) After Polarization De-multiplexing and PMD compensation Figure 9: Different modulation schemes its peak Optical power and FWHM. The term OSNR describes the proportion of a transmission link's signal optical power to its noise optical power which is expressed in Eq.1[13]. The level of optical noise interference on optical signals inside a valid BW is measured using OSNR. The OSNR requirement is inversely related to the Euclidean distances between the constellation points for different modulation schemes. \[OSNR(dB)=10\log\left(\frac{P_{signal}(mW)}{P_{noise}(mW)}\right)\] (Eq.1) Hence, with a given BER (\(\leq 2\times\) 10\({}^{-3}\)), the OSNR tolerance decreases as the modulation format increases which is depicted in Figures 11 and 12. For systems using various modulations with differential and gray-coding sequences, BER is determined concerning the user-defined OSNRs. For PM-QPSK, PM-8-QAM, and PM-16-QAM, respectively, the OSNRs required for systems using differential code were found to be 18.1, 20.4, and 22.8 dB. The essential OSNRs for networks with gray-coded are found to be 14.1, 17.2, and 18.8 dB for PM-QPSK, PM-8-QAM, and PM-16-QAM, respectively. This leads to greater OSNR needs for modulation formats with larger bits per symbol. In comparison to differential-coded systems, gray-coded modulations were shown to require less OSNR to be maintained. 
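To make the two encodings being compared above concrete, the sketch below shows a textbook binary-to-Gray conversion and a simple differential mapping of M-PSK symbol indices. These are generic illustrations, not the exact encoder blocks of the OptiSystem transmitter model used in this work.

```python
def binary_to_gray(n: int) -> int:
    """Gray code of an integer: adjacent values differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Inverse mapping: XOR-accumulate the shifted value until it vanishes."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def differential_encode(symbols, m=4):
    """Differential encoding for M-PSK: transmit the accumulated symbol index,
    so the receiver only needs phase differences between consecutive symbols."""
    out, prev = [], 0
    for s in symbols:
        prev = (prev + s) % m
        out.append(prev)
    return out

def differential_decode(received, m=4):
    """Recover the original indices from consecutive differences."""
    out, prev = [], 0
    for r in received:
        out.append((r - prev) % m)
        prev = r
    return out

# quick checks
assert [binary_to_gray(i) for i in range(4)] == [0, 1, 3, 2]      # 00, 01, 11, 10
assert gray_to_binary(binary_to_gray(13)) == 13
msg = [0, 3, 1, 2, 2]
assert differential_decode(differential_encode(msg)) == msg
```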
Figure 10: OSNR performance of the transmitted signal using differential coding.

Figure 11: OSNR performance of the transmitted signal using Gray coding.

### Fixed Grid Network System Capacity And Spectral Efficiency

Figure 2 shows how a fixed grid network uses the WDM system: it can boost system capacity by concurrently transmitting numerous bit streams over the same fiber, when a fiber of length L is used to simultaneously carry N channels with bit rates \(B_{1},B_{2},\ldots,B_{N}\). Equation 2 [19] expresses the WDM link's overall bit rate. For equal bit rates, the system capacity is increased by a factor of N.

\[B_{T}=B_{1}+B_{2}+\cdots+B_{N}\] (Eq.2)

The most crucial design factors for a WDM system are the number of channels N, the bit rate B at which each channel operates, and the frequency separation \(\Delta v_{ch}\) between adjacent channels. The system capacity is denoted by the product NB. The total bandwidth consumed by a fixed grid network system is denoted by the product \(\mathrm{N}\times\Delta v_{ch}\). For WDM systems, the standard way to introduce the idea of spectral efficiency is expressed in Equation 3 [19].

\[\eta_{s}=B/\Delta v_{ch}\] (Eq.3)

The bit rate B for PM-QPSK is 100 Gbps and the frequency separation \(\Delta v_{ch}\) is 50 GHz (about 0.4 nm), so the spectral efficiency is 2 (bits/s)/Hz. Similarly, it is 3 (bits/s)/Hz for PM-8-QAM and 4 (bits/s)/Hz for PM-16-QAM. Figure 12 shows the spectral efficiencies for different data rates obtained with the corresponding modulation schemes, indicating that a higher-order modulation scheme gives better spectral efficiency. Figure 13 illustrates the system capacity and optical reach for the different modulation schemes: as a counterpart, the optical reach decreases for the higher-order modulation schemes while the system capacity increases drastically. A DWDM system is used for the fixed grid network, so the number of channels (N) is 80-90 and an equal bit rate (B) of 100 Gbps is used for each channel; the system capacity of the network is therefore 8-9 Tbps for PM-QPSK. Similarly, the system capacity for PM-8-QAM and PM-16-QAM will be 12-13.5 Tbps and 16-18 Tbps for data rates of 150 Gbps and 200 Gbps, respectively.

Figure 12: Spectral efficiencies vs data rate for the different modulation schemes.

The performance of the gray-coded IQM-based optical modulation formats is compared in Table 5 against three research studies. It has been noted that, employing single-carrier transmission, the proposed designs exhibit a greater figure of merit, primarily in terms of bit rate and transmission length. Although PDM-16-QAM has a higher data rate than the proposed architecture, the required OSNR is much higher.
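The capacity and spectral-efficiency bookkeeping of Eq. 2 and Eq. 3 reduces to a few lines of arithmetic; the sketch below reproduces the figures quoted above, with the 80-90 channel count over the C band taken from the text.

```python
CHANNEL_SPACING_HZ = 50e9                       # fixed ITU-T grid spacing used in this design

line_rates = {                                  # per-channel line rate in bit/s
    "PM-QPSK":   100e9,
    "PM-8-QAM":  150e9,
    "PM-16-QAM": 200e9,
}

def spectral_efficiency(bit_rate, spacing=CHANNEL_SPACING_HZ):
    """Eq. 3: eta_s = B / delta_nu_ch, in (bits/s)/Hz."""
    return bit_rate / spacing

def system_capacity(bit_rate, n_channels):
    """Eq. 2 with equal per-channel bit rates: B_T = N * B."""
    return n_channels * bit_rate

for name, rate in line_rates.items():
    eta = spectral_efficiency(rate)
    low, high = system_capacity(rate, 80), system_capacity(rate, 90)
    print(f"{name}: {eta:.0f} (bits/s)/Hz, capacity {low / 1e12:.1f}-{high / 1e12:.1f} Tbps")
# PM-QPSK: 2 (bits/s)/Hz, 8.0-9.0 Tbps
# PM-8-QAM: 3 (bits/s)/Hz, 12.0-13.5 Tbps
# PM-16-QAM: 4 (bits/s)/Hz, 16.0-18.0 Tbps
```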
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Parameters** & [49] & [50] & [51] & **Proposed Work** \\ \hline Multiplexed, Modulation & PDM, QPSK, 16- & PDM, 16- & \begin{tabular}{c} PDM, 16- \\ QAM, \\ WDM, and \\ DSP \\ \end{tabular} & \begin{tabular}{c} PDM, 16- \\ QAM, \\ WDM, and \\ DSP \\ \end{tabular} & \begin{tabular}{c} DP-16- \\ QAM, PCS \\ 16-QAM, and DSP \\ \end{tabular} & \begin{tabular}{c} IQM, PDM, WDM, PM- \\ QPSK, PM-8-QAM, PM- \\ 16-QAM, and DSP \\ \end{tabular} \\ \hline Number of channels & 1 & 5 & 2 & 80 - 90 \\ \hline Spectral efficiency & NA & 7.6 & 4 & \begin{tabular}{c} 2 (PM-QPSK) \\ 3 (PM-8-QAM), \\ 4 (PM-16-QAM) \\ \end{tabular} \\ \hline The maximum data rate, Gbps & 112 & 5X256 & 2X200 & \begin{tabular}{c} 100 (PM-QPSK), \\ 150 (PM-8-QAM), \\ 200 (PM-16-QAM) \\ \end{tabular} \\ \hline Maximum SMF length, km & 80-960 & 80 & 90-727 & \begin{tabular}{c} 3000 (PM-QPSK), \\ 1300 (PM-8-QAM), \\ 700 (PM-16-QAM) \\ \end{tabular} \\ \hline Required OSNR, dB & \begin{tabular}{c} 12.8 (QPSK), \\ 17.1 (16-QAM) \\ \end{tabular} & 36 & NA & \begin{tabular}{c} 14.1 (PM-QPSK), \\ 17.5 (PM-8-QAM), \\ 19.2 (PM-16-QAM) \\ \end{tabular} \\ \hline BER & 3.8\(\times\)10\({}^{3}\) & 4.5\(\times\)10\({}^{3}\) & 4.93\(\times\)10\({}^{3}\) & \begin{tabular}{c} 2\(\times\)10\({}^{3}\) \\ 50 \\ \end{tabular} \\ \hline Each Channel spacing (GHz) & 50 & 29 & 50 & 50 \\ \hline The total capacity of C- & & & & 8 - 9 (PM-QPSK), \\ Band(\(\sim\)1530–1565 nm) & NA & NA & NA & 12 - 13.5 (PM-8-QAM), \\ in Tbps & & & & 16 - 18 (PM-16-QAM) \\ \hline \end{tabular} * NA: Not available; PCS: Probabilistic Constellation Shaping \end{table} Table 5: Performance evaluation of various modulation types with reference to relevant literature Figure 13: System capacity Vs Optical reach for the different Modulation schemes The proposed work mainly includes IQM, PDM, and WDM with different modulation techniques. It also uses gray coding while encoding the data, which helps the receiver section to retrieve the data. These enhanced performances are the consequence of gray coding, optimized component settings, EDFA, sophisticated DSP compensation algorithms, and straightforward IQ-based higher-order optical modulations. C-band total capacity is studied, to understand the upcoming capacity demand. ## 7 Conclusions C-Band spectral and system capacity can be improved by using sophisticated optical fiber but in this work, we kept under-laying cable as it is to nullify the cost consumption. We have improved the system capacity by using different information coding. Especially differential and gray coding is used to overcome bits errors in the receiver and improve the OSNR performance.Different modulation techniques are tested under the fixed grid network and calculated its system capacity and spectral efficiency by adopting both codes. It shows the spectral efficiency increases as we go for higher modulation but transmission distance reduces. PM-QPSK is used for long-distance communication and PM-QAM for short distances. Depending upon the application an appropriate modulation needs to be chosen for better performance under a fixed grid network. To account for fiber attenuation, EDFA with an 8 dB gain is utilized at regular intervals of 40 km SMF. The receiver's DSP unit is used to adjust for a variety of signal faults caused by fiber impairments, including CD, PMD, and Kerr non-linearity. 
This allows for maximum transmission distances of 3000, 1300, and 700 km for PM-QPSK, PM-8-QAM, and PM-16-QAM, respectively, while maintaining an acceptable BER. The fixed grid network's system capacity will be 8-9 Tbps, 12-13.5 Tbps, and 16-18 Tbps for data rates of 100Gbps, 150Gbps, and 200Gbps, respectively. However, it's important to note that differential encoding can be sensitive to errors, especially if there are significant variations between consecutive values. In cases where the changes are abrupt or large, the cumulative effect of differential encoding can lead to distortion or inaccuracies in the decoded data.Overall, Gray code is a good encoding method that aids in addressing binary value transition concerns, making it helpful in circumstances where precision, mistake detection, or smooth transitions are crucial.Grey-coded systems performed better than differential-coded systems in terms of OSNR requirements for the investigated modulation schemes when the same data rates, transmission distances, and fiber launch optical powers were taken into account. Grey-coded systems provide an OSNR improvement of 4, 3.2, and 4dB for PM-QPSK, PM-8-QAM, and PM-16-QAM, respectively, after SMF transmission. The proposed system is best to transmit 100 Gbps data rate with 50 GHz channel spacing. We noticed that for lesser data rate (10 Gbps) requests same 50 GHz channel spacing is used so it's not an effective way to utilize the channel spacing. A standard transmission data rate of more than 100Gbps is currently being considered by the datacom and telecom industries, and 400Gbps is receiving a lot of attention. The spectral width occupied by 400 Gbps in standard modulation formats is too wide to fit into the 50 GHz ITU grid. To overcome such issues new techniques need to be adopted. The solution is the Elastic optical network (EON)which mainly deals with the allocation of the channels and handling the channel spacing. EON performance is not studied under practical consideration. In the future, one can develop the Flexi-grid network to achieve effectively utilize channel spacing and to understand its system capacity. ## Conflicts of Interest The authors declare no conflict of interest.
2309.12194
Empowering People with Intellectual and Developmental Disabilities through Cognitively Accessible Visualizations
Data has transformative potential to empower people with Intellectual and Developmental Disabilities (IDD). However, conventional data visualizations often rely on complex cognitive processes, and existing approaches for day-to-day analysis scenarios fail to consider neurodivergent capabilities, creating barriers for people with IDD to access data and leading to even further marginalization. We argue that visualizations could be an equalizer for people with IDD to participate in data-driven conversations. Drawing on preliminary research findings and our experiences working with people with IDD and their data, we introduce and expand on the concept of cognitively accessible visualizations, unpack its meaning and roles in increasing IDD individuals' access to data, and discuss two immediate research objectives. Specifically, we argue that cognitively accessible visualizations should support people with IDD in personal data storytelling for effective self-advocacy and self-expression, and balance novelty and familiarity in data design to accommodate cognitive diversity and promote inclusivity.
Keke Wu, Danielle Albers Szafir
2023-09-21T16:01:32Z
http://arxiv.org/abs/2309.12194v1
Empowering People with Intellectual and Developmental Disabilities through Cognitively Accessible Visualizations ###### Abstract Data has transformative potential to empower people with Intellectual and Developmental Disabilities (IDD). However, conventional data visualizations often rely on complex cognitive processes, and existing approaches for day-to-day analysis scenarios fail to consider neurodivergent capabilities, creating barriers for people with IDD to access data and leading to even further marginalization. We argue that visualizations could be an equalizer for people with IDD to participate in data-driven conversations. Drawing on preliminary research findings and our experiences working with people with IDD and their data, we introduce and expand on the concept of cognitively accessible visualizations, unpack its meaning and roles in increasing IDD individuals' access to data, and discuss two immediate research objectives. Specifically, we argue that cognitively accessible visualizations should support people with IDD in personal data storyclling for effective self-advocacy and self-expression, and balance novelty and familiarity in data design to accommodate cognitive diversity and promote inclusivity. Human-centered computing--Visualization; Human-centered computing--Accessibility ## 1 Introduction Data plays an important role in understanding, addressing, and advocating for the needs and rights of people with Intellectual and Developmental Disabilities (IDD) [26]. Researchers, advocates, and organizations can use data to create interventions to improve lives of people with IDD [1, 8, 17]. Collecting and analyzing data about people with IDD provides insights into challenges faced by the community [17], helps identify disparities and inequalities in resource allocation [1], and guides evidence-based policy-making and drives positive social change [8]. Data can also empower individuals with IDD. It allows the monitoring of personal health, behaviours and progress, assists with autonomy and enables people with IDD to make informed decisions and develop effective advocacy strategies [26]. However, working with data is complicated and requires various cognitive skills to effectively analyze, interpret, and draw insights [25]. People with IDD face significant limitations in many cognitive areas; such as learning, reasoning, abstract thinking, attention, concentration, memory and recall [2], which prevent them from effectively using, understanding, and making decisions with data, often excluding them from data analytics and leaving them vulnerable to many ethical issues. Still, little progress has been made to improve data accessibility for people with IDD; society tends to focus on their disabilities rather than abilities [24], underestimate their potential and trivialize their needs for data analytics. In addition, each individual with IDD can have a significantly different cognitive profile, which makes developing universal solutions particularly challenging. Since 2018, we have been working with this population to design cognitively accessible visualizations. Our investigation started from a collaboration with an initiative that aims to support people with IDD in data-driven financial self-advocacy [8]. In past work, we conducted a graphical perception experiment to understand how various visualizations may impact data accessibility in the context of budgetary data analysis [25]. 
Overall, the study underlined the need to design visualizations that attend to the specific abilities and preferences of people with IDD and demonstrated that cognitively accessible visualizations may require a different set of guidelines. We then explored more broadly the lived data experiences of people with IDD through semi-structured interviews [26]. We found that people with IDD frequently used personal data for self-advocacy, self-expression and everyday functioning. However, many of these data encounters were invisible and inaccessible to people with IDD, and were not well supported by current data visualizations, leaving them with limited access to data important to their well-being. Collectively, these studies suggest that visualizations have the potential to empower people with IDD and that visualization research needs to pay more attention to the specific needs of this population to develop truly accessible solutions. In this article, we build on these findings and our own experiences to identify a critical near-term research agenda for increasing cognitive access to data. We argue that cognitively accessible visualizations could serve as an equalizer for people with IDD to participate in society. Specifically, we discuss two immediate research priorities: supporting people with IDD in personal data storytelling for effective self-advocacy and self-expression, and balancing novelty and familiarity in data design to accommodate cognitive diversity and promote inclusivity. ## 2 Supporting Personal Storytelling with Data People with IDD use data in a variety of contexts, such as advocating for one's needs and rights, sharing personal experiences, or expressing thoughts and feelings through data as a creative outlet. However, our preliminary findings suggest that people assemble and visualize data sets to tell personal stories in a manual and ad hoc manner. With limited visualization literacy, most participants found traditional storytelling approaches inaccessible and often intimidating to consume and author, leaving them unable to leverage data to their full benefit in self-advocacy. Therefore, visualization research needs to understand how to support effective personal data storytelling for people with IDD (Fig. 2). Visualizations need to represent personal data of people with IDD to help build empathy through shared lived experience and turn this empathy into actions. Visualization tools should empower people with IDD to better communicate personal stories for creative expression, self-reflection, and compassion. ### Designing for Empathetic Persuasion Visualization research should aim to develop best practices for designing data stories that build empathy in the viewers to help them better understand the lived experience of people with IDD. Many individuals with IDD experience stigma, prejudice and discrimination: they are largely viewed as liabilities to be managed rather than assets that contribute unique perspectives to their community [24]. These individuals usually live in systems that were designed without their unique needs in mind and, as a result, this population often faces barriers to engagement and participation, ranging from the health-care system to the criminal justice system to employment and education [22]. Our research showed that data could be an empowering tool for self-advocacy and help shift these negative narratives for better social inclusion. 
Specifically, we found that people with IDD were trying to make compelling arguments through collecting and sharing data from their personal lives. These could be written logs of daily activities and accomplishments, multimedia files that demonstrate competence and capability, or even survey results that illustrate the life expectancy gap between disabled and non-disabled populations. To translate this data into powerful arguments, cognitively accessible visualizations should more vividly represent the stories of people with IDD; they could help communicate their life aspects, challenges, thoughts and feelings to those who may not share the experiences; and evoke empathy in the viewers. To design such compelling data visualizations, or to more effectively move a viewer to action, we recommend designers use storytelling techniques, but center the design of these stories around the particular person described by the data and/or the argument to make rather than the data _per se_. This means that designers may need to co-design with people with IDD to build better empathy and an understanding of their lives, and then base the design on IDD individuals' point of view. Visualizations may help a viewer better relate to the story by using true-to-life imagery and having a character that carries the IDD individual's values and beliefs. To better elicit empathetic responses, future authoring tools may learn from principles of the arts to create an aesthetic experience with data. For example, different colors can be mapped to emotions and feelings; visual emphasis and contrast can be made to highlight certain piece of data or make a statement; music, sound effects and narrations can be layered into a multimedia data visualization to establish a setting, develop characters, or advance the plot. These aesthetic experiences may allow a viewer without IDD to feel the emotions and perspectives of people with IDD, being able to peek into their inner worlds and daily lives, getting a sense of "us" rather than "them." Such stories may help people develop a strong sense of engagement [21] and deeper connection with characters depicted in the story (i.e., people with IDD), and activate empathy, which then translates to action. ### Designing for Therapeutic Expression Visualization can serve as a therapeutic tool for people with IDD to regulate emotions and develop self-compassion by encouraging creative expression and meaning-making with personal data. People with IDD are three to four times more likely to develop psychiatric disorders compared with the general population [14]. Approximately one third of adults with ASD have emotion dysregulation and challenging behaviours (CBs) [9], negatively affecting their quality of life and social participation. We found that data was commonly used by people with IDD, especially those with ASD, as a way of self-expression and reflection, a proxy to communicate thoughts and feelings in psychiatric treatments, and occasionally an educational device to improve self-regulation [26]. For example, participants kept written notes of their symptoms and reactions to social situations, they created a values checklist to reflect everyday behaviors, curated video clips that depict human interactions to discuss with their therapists, and created drawings, photos, and multimedia files to express how they feel about themselves and the world around them. People with IDD, particularly those with ASD, usually experience anxiety as a result of sensory processing issues [4]. 
Our studies indicate that the anxiety may also stem from the social awareness that they are different: they may experience feelings of loneliness and isolation as a result of not conforming to social norms or an inability to fit in with their peers. For example, one participant described how they personally relate to the movie _Beauty and the Beast_: "_My understanding is that the beast is autistic to a certain extent. He has a lot of books. He loves knowledge. That's essentially me. And he's nice, but he has that inner world that's not being understood. And Belle is just this oddball. She just seems to be ahead of her time that she understands the beast. She's like the therapist that he needs and she ends up falling in love with the beast._" Figure 2: Cognitively accessible visualizations should support people with IDD in telling personal stories through data to elicit empathy in the viewer for more effective self-advocacy and facilitate therapeutic expression and self-compassion by helping them make sense of personal data and life experiences. Similarly, another participant from our workshop on the theme of "Aliens from the VisuaLand" expressed how they "_truly feel like an alien to the human world._" The sense of alienation can magnify feelings of self-doubt, resulting in social anxiety, low self-esteem, and depression, ultimately leading people with ASD to shut down. Due to the many shared behavioral, conceptual, communicative, and social difficulties, individuals with IDD in general experience high risks of co-morbid mental health conditions such as affective, anxiety, psychotic, and impulse control disorders [27]. However, research indicates a common lack of clinical knowledge and training about the needs of people with IDD among mental health professionals, resulting in misdiagnosis and mistreatment that are associated with an exacerbation of dysfunction and disability [27]. Non-pharmacological therapeutic support in IDD--such as social prescribing, behavioral and educational interventions, or psychotherapy--seems to be more effective than other interventions [13]. While this goes beyond traditional visual analytics, data visualizations do not have to be purely analytical. We argue that cognitively accessible visualizations could be a therapeutic device, providing a creative outlet for individuals with IDD to express themselves, communicate ideas, and make sense of complex life experiences for positive meaning-making. This can happen in several ways: **(1) Supporting self-expression and storytelling.** Data visualization can be a creative way to tell personal stories or narratives. By combining data and personal experiences, individuals with IDD can create visual representations that highlight their unique perspectives or journeys. By utilizing design choices such as colors, shapes, and visual metaphors, individuals can infuse their data visualizations with emotion and personal expression. This can help overcome communication challenges and enable people with IDD to create visualizations that evoke specific feelings, connect with the audience on a deeper level, and convey the subjective aspects of the data being represented. Visualization research may further investigate the connection between different design elements and emotions to provide templates for people with IDD to readily communicate feelings and generate stories from the data. 
**(2) Encouraging self-reflection and compassion.** Engaging in data visualization can prompt people with IDD to reflect on their own data, habits and behaviours. People with IDD can gain insights into their patterns, progress, or areas for improvement. Visualization can serve as a means to explore personal interests or passions. Whether it's visualizing personal fitness data, travel experiences, or reading habits, individuals can create visual representations of their hobbies, interests, or personal pursuits. This allows them to share their enthusiasm, express their unique perspectives, and potentially inspire others who resonate with their visualizations. Visualization research may explore different encoding strategies and representation formats for expressing these personal experiences to help people with IDD intelligently manage their data and arrive at useful insights. ## 3 Balancing Novelty and Familiarity in Data Design Despite being everywhere, data is invisible to many people with IDD, who often consider data as a remote subject and show apathy or even aversion to it. Most of them are usually unconscious of everyday data experiences and disinterested in improving data literacy [26]. Moreover, data is abstract and usually comes in diverse formats in large volumes, which requires significant cognitive resources and specialized training to work with. Many participants in our studies complained about the complexity of data and their lack of abilities and resources to make sense of it [26]. Evidence from our research shows that when designed carefully, visualizations are able to help people with IDD better reason with data [25], and that the awareness of and willingness to work with data is largely driven by personal motivation [26]. However, we found that most participants associate data with "_hard drive_" and "_technology_", treating it as something used by "_a computer type of gal_" or "_people who do studies_" but not themselves. These stereotype-driven beliefs about data can have a detrimental effect on people's ability and confidence in taking control of data. To change this stereotypical view and encourage interest in working with data, cognitively accessible visualizations should strive for a balance of novelty and familiarity in design. We anticipate that this can best be achieved by creating novel multisensory experiences that increase the awareness and appeal of data. For example, data can be turned into a **physical sculpture**, giving individuals different elements to explore and reason with the data through a construction approach [12, 15]. The sculpture could use various textures, shapes, or colors to represent different data points or categories, allowing individuals to physically rearrange and compare them [11]. Data can also be **sonified**, with different data attributes being mapped to various auditory elements such as pitch, rhythm, or volume. This allows individuals to listen to patterns, trends, or relationships in the data, providing an alternative perspective for reasoning and analysis [16]. Figure 3: Cognitively accessible visualizations may take on multisensory forms for better data awareness and accessibility. Utilizing **tactile interfaces**, such as touchscreens, haptic feedback devices, or braille displays, can enable individuals to reason with personal data through touch. People can feel patterns, textures, or vibrations associated with different data points, enabling a tactile exploration and analysis of the data [16]. 
By turning data exploration and analysis into **interactive games or challenges**, individuals can utilize visual, auditory, and even kinesthetic senses to make decisions, solve problems, and reason with personal data in an enjoyable and immersive manner. In addition, creating **multisensory data dashboards** that combine visual representations with auditory cues, interactive touchscreens, or haptic feedback can provide a comprehensive data experience. This allows individuals to reason with personal data using a combination of visual, auditory, and tactile senses, enhancing the depth of understanding and analysis. Future visualization research may better understand how different modalities can be used to accommodate IDD individuals' complex sensory needs and preferences, and more thoroughly examine the impact of various modalities on cognitive accessibility. ### Creating Culturally Relevant Data Interfaces Since IDD is usually a set of disabilities rather than single diagnosis, it may impact each individual differently [2]. In our experiences working with this population, almost everyone has a distinctive set of conditions and faces unique challenges despite being characterized under the umbrella term "IDD," which makes developing a universally accessible solution extremely difficult. Yet our graphical perception experiment (Fig. 4) demonstrated that thoughtfully designed visualizations can serve as a cognitive amplifier and help people with a range of disabilities better reason with data [25]. For example, people with IDD tend to relate abstract visual elements to real-world objects (e.g., treemaps as colored pieces of paper, stacked bars as rising stains etc.). They benefit from familiar visual metaphors, such as dollar signs and other semantically meaningful imagery, to understand the context of data. Additionally, axis-aligned isotype visualizations and discrete, countable representations can also largely improve their accuracy and confidence in data interpretation. While pie charts are extremely inaccessible, stacked bars and treemaps can make proportion data twice as accessible. However, our experiment only covered a niche application of data visualization (fiscal analysis). Most conventional tools and guidelines were developed for neurotypical consumers without considering the differences of people with IDD, introducing unintentional yet ongoing challenges for this population. Cognitively accessible visualizations will require a different set of guidelines and likely take on different shapes [25]. They should be designed with respect to the unique needs, abilities, and preferences of people with IDD, tapping into their shared cultural knowledge and experiences. Designing such a culturally relevant interface may involve several aspects: **(1) Visual Design.** Incorporating culturally appropriate visuals, colors, symbols and imagery that are familiar and meaningful to people with IDD in the design. Our studies suggest that individuals with IDD generally prefer true-to-life representations over abstract visualization when reasoning with data. When designing for a particular scenario, designers may first need to understand what visuals make sense to people with IDD, and then consider maximizing the details of the representation, using photos, icons and other relevant pictorials to communicate data context. 
Future research should examine the role of semiotics in designing accessible visualizations or even explore what "useful" chart junk [6] would look like for people with IDD, identifying meaningful signs and symbols to include in visualization design to cater to their needs. **(2) Navigation & Layout.** Organizing the navigation and layout of a visualization to match IDD individuals' expectations and mental models. People with IDD usually have attention and memory issues [2], whereas those with ASD specifically are highly susceptible to cognitive overload and can easily get overwhelmed by unexpected changes and transitions [4]. However, both groups mentioned the potential benefit of guided sensemaking, such as "step-by-step" imagery, to help direct their attention [26]. Future research should explore the impact of turning abstract visualizations into more concrete formats, such as data comics [5], data videos [3], or games [23], as they may enable self-paced navigation and help people with IDD break down multidimensional data into digestible pieces. **(3) Content, Language & Representation.** Adapting the texts and communication styles of the visualization to reflect the preferences of people with IDD. To ensure the content is inclusive and make people feel represented and respected, designers need to be careful with their data selection and use unbiased data sources, paying close attention to the visual encoding and design choices to not skew or distort the data. They need to provide contextual information and consider alternative perspectives, encouraging critical thinking and interpretation of the data. The visualization needs to be an objective, accurate, and inclusive representation of the data, and the people it describes. Future research may explore how affective design elements [19] can meaningfully reflect the cultural identities of this population and create emotionally engaging visualizations. ## 4 The Need for Cognitively Diverse Visualizations People with Intellectual and Developmental Disabilities (IDD) encompass a diverse population with unique characteristics and abilities, which often translate to more complex and intense challenges compared with other disabilities. The solution for developing cognitively accessible visualization is not a straightforward one, as it needs to cater to diverse cognitive needs and abilities, accounting for different information processing, comprehension and interaction preferences and challenges, and will require flexible and adaptable approaches to working with this population and presenting information. Unlike physical disabilities that can often be addressed through standardized accessibility guidelines, cognitive disabilities are highly individualized and context-dependent. There is no one-size-fits-all approach to cognitive accessibility, making it challenging to develop universal guidelines or standards. Figure 4: Guidelines for designing cognitively accessible visualizations drawing from the graphical perception experiment in support of financial self-advocacy for people with IDD. Yet, our preliminary investigations show that visualization could be a promising tool to improve cognitive access to data and provide concrete support for people with IDD in various areas of life. However, designing cognitively accessible visualizations will require a different set of guidelines and concerted efforts dedicated to addressing their particular challenges. 
We argue that designing for cognitive accessibility is to design for cognitive diversity, and it is a matter of equity and respect. By providing equitable access to information, individuals can gain knowledge, express their opinions, and contribute to decision-making processes. And by designing visualizations that cater to diverse cognitive needs, we can foster a more inclusive and equitable society where data is accessible to all. ## 5 Conclusion Data can empower people with IDD in many ways. However, working with data is difficult and often out of the reach of this population. In this paper, we argue for a near-term research agenda for cognitively accessible visualizations to serve as an equalizer for people with IDD to better participate in society. Based on preliminary findings and our own reflections working with this population and their data, we discuss two immediate research goals for cognitively accessible visualizations to improve data accessibility. First, visualizations should support people with IDD with better personal data storytelling for more effective self-advocacy and self-expression. Second, visualizations should balance novelty and familiarity in design to increase IDD individuals' awareness of and agency over personal data and ultimately make this population more visible in the inclusive data visualization space. ## Acknowledgments We would like to thank our colleagues, collaborators, and participants for their support. This work was funded by NSF #2046725, #2320920, #2233316, and #1933915, as well as the Ray Hauser Award at University of Colorado Boulder.
2306.17669
MCQUIC -- A Multicast Extension for QUIC
Mass live content, such as world cups, the Superbowl or the Olympics, attracts audiences of hundreds of millions of viewers. While such events were predominantly consumed on TV, more and more viewers follow big events on the Internet, which poses a scalability challenge: current unicast delivery over the web comes with large overheads and is inefficient. An attractive alternative is multicast-based transmission; however, current solutions have several drawbacks, mostly related to security and privacy, which prevent them from being implemented in browsers. In this paper we introduce a multicast extension to QUIC, a widely popular transport protocol standardized by the IETF, that solves several of these problems. It enables multicast delivery by offering encryption as well as integrity verification of packets distributed over multicast and automatic unicast fallback, which solves one of multicast's major obstacles to large scale deployment. It is transparent to applications and can be easily utilized by simply enabling an option in QUIC. This extension is solely focused on the transport layer and uses already existing multicast mechanisms on the network layer.
Max Franke, Jake Holland, Stefan Schmid
2023-06-30T14:00:13Z
http://arxiv.org/abs/2306.17669v1
# MCQUIC - A Multicast Extension for QUIC ###### Abstract Mass live content, such as world cups, the Superbowl or the Olympics, attract audiences of hundreds of millions of viewers. While such events were predominantly consumed on TV, more and more viewers follow big events on the Internet, which poses a scalability challenge: current unicast delivery over the web comes with large overheads and is inefficient. An attractive alternative are multicast-based transmissions, however, current solutions have several drawbacks, mostly related to security and privacy, which prevent them from being implemented in browsers. In this paper we introduce a multicast extension to QUIC, a widely popular transport protocol standardized by the IETF, that solves several of these problems. It enables multicast delivery by offering encryption as well as integrity verification of packets distributed over multicast and automatic unicast fallback, which solves one of multicasts major obstacles to large scale deployment. It is transparent to applications and can be easily utilized by simply enabling an option in QUIC. This extension is solely focused on the transport layer and uses already existing multicast mechanisms on the network layer. + Footnote †: This work has been supported by the Federal Ministry of Education and Research of Germany in the programme of ”Souveraen. Digital. Vernetzt.” Joint project 6G-RIC, project identification number (PIN): FKZ 16KISK030 ## 1 Introduction A big part of today's network traffic is caused by multiple users simultaneously accessing the same content. In these scenarios, the predominant method of delivery over unicast can be highly inefficient. Using multicast delivery instead would reduce load on many parts of the network infrastructure, especially content delivery servers and ISP networks, by removing the need to send the same exact packet multiple times to different receivers. Two concrete examples for cases where unicast delivery is reaching its limit are live streaming and game downloads. As of May 2023, the highest traffic peak that Akamai, one of the worldwide largest CDN providers, recorded across its network was 250 terabit per second [14]. A 4k livestream has a bitrate of around 40 Mbps [15]. This means that 6.25 million concurrent viewers are enough to fully utilize Akamai's available capacity. This is less than 2% of the average viewership of e.g. the EURO finals [13]. This is not even considering an increase in bitrate due to HDR or 8k. Similarly, new game releases can cause huge bursts of traffic. One such release is GTA V, as the fastest selling game of all time it sold over 11 million copies on the first day [16]. With a file size close to a 100GB [1], it would take all of Akamais capacity almost 10 hours to handle the downloads on its launch day. Since its release, file size has also continuously grown, now reaching up to 231GB for certain games [1]. Both of these cases illustrate that already today unicast delivery is becoming unsustainable, while traffic demand is certain to further increase. This might soon lead to an inflection point where increases in link capacities can no longer keep up with demand. Utilizing multicast for delivery of the described types of content would offer a solution for the mentioned scalability problems. While multicast is widely used for intra-domain use cases, such as mDNS [1], its inter-domain applications are very limited. 
Even though there exists a multicast backbone, called the MBONE, it is mostly comprised of educational networks such as Internet2 and GEANT, while carrier and ISP networks usually disable generic native inter-domain multicast. This was and is largely due to a lack of legitimate applications that use multicast, a lack of privacy and security [12, 13, 14] when compared to unicast, and the large amounts of overhead required to create and main tain any-source multicast (ASM) trees [40]. Two recent developments have eased these issues. For one, ASM has been officially deprecated [1] for inter-domain use cases, removing much of the complexity related to finding multicast sources. On the other hand, the traditional protocols used to maintain multicast trees, most notably PIM [13], are being replaced by BIER [23] which provides a way to route multicast packets without the need to maintain multicast trees and thus state on each router. Furthermore, automatic multicast tunneling (AMT) [1] provides a way to bridge networks that do not support native multicast. As such, the time might have come to make another attempt to deploy multicast delivery to the web. One of the critical pieces that are now missing is a way to enable multicast reception on end user devices. Most current multicast transport protocols, such as NORM or FLUTE, focus on reliable file transfers and are thus not suitable for transmission of live content. Furthermore, since most users use browsers for web content, finding a way to receive multicast packets in them would be advantageous. However, browsers rightfully place very high bars when it comes to user privacy and security. As such, using bare, native multicast would not be feasible for a variety of reasons, which we will further explain in Section 3. To bridge this gap, this paper introduces an extension to the QUIC transport protocol [14], which has recently been specified and is implemented in most of the popular browsers. It provides many advantages compared to TCP, such as built-in encryption through TLS, the prevention of head of line blocking by the option to use multiple streams in parallel or support for mobility by having the option to keep a connection alive even when a client migrates networks and changes its IP address. Some of the mechanisms used by MCQUIC are reused from the Multipath Extension for QUIC that is also currently in the process of being standardized.[15] It should be noted that the extension uses already existing mechanisms and technologies to realize multicast on the network layer and is exclusively focused on the transport layer. ### Our contributions Utilizing QUIC's features, in particular built in packet protection and multiple streams, we introduce MCQUIC, an extension to QUIC that combines both unicast and multicast to act as a new way to enable multicast reception support in browsers, which due to a lack of UDP support in them has so far not been possible. Our extension allows clients to use encrypted multicast that is protected against injection of packets by third parties. This provides the option to trust packets received via multicast on public networks. It can be enabled with minimal changes to application layer implementations and offers automatic fallback to unicast QUIC in case the network does not support multicast. 
The challenges we will specifically address are: * Prevention of injection of packets by third parties * Guaranteeing security and privacy under the Internet threat model * Guaranteeing compatibility with networks that don't support multicast by enabling automatic unicast fallback ### Organization We will first give a motivating example and basic design overview over our extension in Section 2. Following this in Section 3, we will explain in more detail how it solves the issues and challenges associated with traditional multicast. In Section 4 we will then take a closer look at MCQUIC, including some of the new frames it introduces and the design decisions behind them. We will list more use cases in Section 5 and provide some related work in section 6. Finally, Section 7 will conclude our work. MCQUIC is still in ongoing development, the most up-to-date version of its specification can be found on GitHub1. Footnote 1: [https://github.com/GrumpyOldTroll/draft-jholland-quic-multicast](https://github.com/GrumpyOldTroll/draft-jholland-quic-multicast) ## 2 Motivation and design overview In this section we will give an example to motivate why MCQUIC is a useful addition to QUIC before giving some high level information on its design principles. ### Motivating example One of the main potential use cases for MCQUIC is the streaming of live media in browsers. As mentioned before, more and more people shift their content consumption from traditional (cable) TV to live streaming. As such, the number of people that are going to watch live sports digitally in the US alone is expected to increase from 57.5 million in 2021 to over 90 million by 2025 [24]. This often either occurs in browsers directly or in browser based apps on smart TVs or devices like Chromecast or AppleTV. As such, it was a core design requirement for our extension to be able to be used in browsers. This is also one of the reasons that we choose QUIC as a starting point since it is, as mentioned, available in all major distributions such as Chromium (and thus Chromium based browsers like Chrome or Edge), Firefox and Safari. Importantly, most browsers for security and privacy reasons do not offer a UDP API which means that using traditional, UDP based, multicast was not an option. Additionally, using the HTTP-based request response semantics that are common today are not efficient for multicast. In these, each video segment has to be specifically requested by the client. This would cause a great amount of unnecessary load when used with multicast delivery as those requests have to be sent over unicast and handled separately for each client. Unfortunately, server push has been recently deprecated from Chromium and gQUIC and is thus no longer an option. Instead, we envision using Web-Transport, the successor to WebSockets. It allows servers to push data to browsers without it first having to be requested by the client. Figure 1 shows a simple deployment using MCQUIC. Big data packets, such as video segments, are carried over the red multicast channel, which means the server only has to send them once and they are only transmitted once on each network segment, while the other three colors represent unicast QUIC connections used for control and integrity. This includes the client sending Acknowledgements for each packet received via multicast. 
As these are comparatively small amounts of data, compared to the actual content that gets carried on multicast, the overhead incurred on the server and network is much lower than sending everything over unicast. Additionally, the unicast connections could be used to transmit data that is client specific, such as subtitles for different languages or other auxiliary information. To make sure this data does not head of line block any video frames, it uses a different stream ID within QUIC. The unicast connection is also used to verify the integrity of all packets sent over multicast, guaranteeing to the client that they come from the intended source. Should any of the clients lose multicast connectivity e.g. due to a network change, the server will notice this by a lack of reception of Acknowledgements. As a result, the server might decide to deliver the data packets over unicast instead of multicast to this specific client. This guarantees that clients are able to receive all data, even in cases where no native multicast reception is available. This mechanic is somewhat comparable to AMT in that it also makes multicast available in networks that don't support native reception. ### Design overview MCQUIC has two main components: * A regular unicast QUIC connection * One or more multicast QUIC channels The QUIC connection gets established like it usually does. It goes through the TLS handshake and cert chain to authenticate the server to the client and exchange the keying material required to encrypt the connection from there on out. The exact handshake process, like many other parts of QUIC, is beyond the scope of this document and can be looked up in RFC 9000 [11] and 9001 [11] respectively. During the handshake, server and client also exchange transport parameters that are used to specify capabilities and limits of each peer. These include a list of available cipher suites, the max idle timeout, the max ack delay etc. MCQUIC introduces two new transport parameters, one server and one client specific. These are used to negotiate support for multicast. Figure 1: Schematic overview of a server and three clients using multicast QUIC. The red connection represents multicast, used to carry big data packets such as video frames, while the connections in the other 3 colors represent unicast connections, used only for control. The client will also use the parameter to communicate its own requirements and limitations related to multicast to the server. The main design addition that MCQUIC brings is that, along with the normal unicast QUIC connection, clients can join one or more multicast QUIC channels. These are unidirectional data streams from server to client that carry data that is relevant for several clients at the same time. A server can instruct a client to join a specific channel that carries content that is desirable for the client to receive. Whether or not a channel gets joined is up to the client; it may refuse the joining of a channel for any reason, e.g. if it deems that doing so would violate its own bandwidth limits. If a client does decide to join a channel, it will indicate this to the server by sending a relevant frame over the unicast connection. For clients to then receive multicast packets, they have to join an IP multicast group. There are two major flavors of multicast, the aforementioned and now deprecated ASM, and source-specific multicast (SSM), which is the only type supported by our extension. 
This decision was made as ASM requires different semantics and additional complexity when compared to SSM. As the main intended use case for MCQUIC lies in inter-domain applications, it was deemed that the additional design considerations required were not worth the effort at this time. The mechanisms for joining a group rely on the MLD [10] and IGMP [2] protocols. Any packets sent over multicast are encrypted and a hash for each packet gets securely transferred to allow the client to verify that it has indeed been sent by the original source and not injected by a third party. The verification of the packets requires the calculation of the hash by the receiver, the overheard incurred by this depends on which hash function is used. After verifying the integrity of a packet received over multicast, the client will handle it like it would handle a regular unicast packet. As such, to the application layer, it is transparent whether or not a packet was received over multicast or unicast. This completes the brief overview of the extension. Before going into more detail on its specification, in the next section we will first give an overview of the how these mechanisms solve the issues with traditional multicast mentioned in the introduction. ## 3 Challenges solved As mentioned previously, the main issues traditional multicast deployments face are related to security, privacy and a lack of support for native multicast in many networks. In this section we will give a more detailed explanation on these issues and how our approach solves them. ### Security and privacy MCQUIC allows decryption keys to be distributed over the unicast connection. This allows packets sent over multicast to be protected by the same AEAD mechanism as regular unicast QUIC packets. As such, third parties are not able to gain any information (apart from fingerprinting) on what content is being carried on a particular multicast channel. However, as the same keying material is used by all receivers, an attacker could potentially gain access to the content if it can get a hold of the keys from any of the legitimately subscribed receivers. This is one of the reasons why it is important to regularly update the keying material through MC_KEY frames. It is up to the server to make sure only legitimate clients are sent MC_KEY frames. By utilizing the same encryption mechanisms that are used by other QUIC implementations, we can also make use of any already existing hardware offloading capabilities for cryptographic functions. ### Prevention of packet injection As multicast senders usually have no knowledge of number, much less identity, of receivers and there is no secured two way communication between them, there is no way for receivers to verify that a packet that arrived on a multicast channel has indeed been sent by the intended sender and not by a third party spoofing its IP address. In multicast QUIC, we use identity frames that are anchored to a secure unicast QUIC connection, which required the sender to authenticate itself through the usual TLS cert chain procedure. The integrity frames use standard cryptographic hashes that are resistant against collision attacks. As such, receivers are able to verify each packet received over multicast against its corresponding hash. If they match it can be assured that the packet was sent by the intended sender and not injected by a third party. ### Fallback to unicast Currently, most networks do not support generic native multicast. 
As such, apps that want to utilize multicast always have to worry about implementing a fallback mechanism on the application layer. With MCQUIC, applications only have to indicate if they wish to enable support for multicast. From there, the QUIC implementation takes care of the rest. If multicast is not supported by the clients network, it can automatically fallback to unicast and act like a regular QUIC connection. This fallback is, depending on the QUIC implementation, transparent to the application as it will always only see delivered data packets. ## 4 Detailed design In this section we will give more details on some of the mechanics and design decisions of MCQUIC. We will first discuss some of the general QUIC mechanisms and how we adapt or change them to enable our extension before taking a look at some of the new frames we introduce. ### Client limits During connection establishment, both client and server can include an additional multicast transport parameter that indicates to the peer that multicast is supported. The clients parameter includes additional details, such as a list of supported hash and encryption algorithms and whether or not IPv4, IPv6, or both are supported for multicast reception. These parameters can be later updated by the client by sending an MC_LIMITS frame to the server that contains the same parameters. There are several reasons as to why we need separate transport parameters instead of using the ones already negotiated for unicast QUIC. Even though there might be support for both IPv4 and IPv6 via unicast, the client might be aware that due to a lack of MLD support, only IPv4 multicast is supported. In this case, disallowing IPv6 multicast from the get go saves time when the server would otherwise instruct the client to join an IPv6 multicast channel only to find out no reception is possible. The list of hash and encryption algorithms might also differ from unicast QUIC as it can be assumed that only less sensitive data will be carried over multicast. This means that there might be an option or desire to use weaker algorithms, such as SHA-1, to reduce the size of the checksums carried in integrity frames. While it is true that SHA-1 is no longer considered to be entirely collision resistant [1], it might be a worthwhile trade off to still use it in certain cases where it is highly unlikely that an attacker can create a collision in time or where a collision would not pertain to any critical or sensitive data. In either case, it is always recommended to use collision resistant algorithms to prevent any such risks from occurring at all. Also included in the client side transport parameter are the limits the client places on all multicast channels combined. These include the maximum number of announced or simultaneously joined channels, but most importantly the maximum aggregated rate across all channels. As the data carried over multicast is often of a steady and pre-determined rate, e.g. in video streams the bitrate is known beforehand, the server can reliably take restrictions of clients into consideration when assigning which channels to join. It might determine that the maximum allowed rate is not high enough to receive a 4K stream, which gets carried on one channel, so instead instructs the client to join a different channel that carries the stream in HD. More sophisticated models could make use of layered video [14] or similar to provide each client with an optimal utilization of its available bandwidth. 
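To illustrate how a server might honor these limits when assigning channels, the following is a minimal sketch rather than part of the MCQUIC specification: the data structures, field names, and the greedy selection policy are our own illustrative assumptions. It only captures the idea that the maximum rate advertised for each channel (carried in its ANNOUNCE frame) is matched against the client's maximum aggregated rate from its multicast transport parameter.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    channel_id: bytes
    max_rate_kbps: int      # maximum expected rate, advertised in the channel's ANNOUNCE frame
    description: str        # e.g. "video 2160p", "video 1080p"

@dataclass
class ClientLimits:
    max_aggregate_rate_kbps: int   # from the client's multicast transport parameter / MC_LIMITS
    max_joined_channels: int

def pick_channels(candidates: list[Channel], limits: ClientLimits) -> list[Channel]:
    """Greedy selection: prefer higher-rate channels (e.g. higher video quality)
    while keeping the sum of advertised rates within the client's aggregate limit."""
    chosen: list[Channel] = []
    budget = limits.max_aggregate_rate_kbps
    for ch in sorted(candidates, key=lambda c: c.max_rate_kbps, reverse=True):
        if len(chosen) >= limits.max_joined_channels:
            break
        if ch.max_rate_kbps <= budget:
            chosen.append(ch)
            budget -= ch.max_rate_kbps
    return chosen

# Example: a client that allows at most ~25 Mbps of multicast traffic is steered
# to the HD channel instead of the 4K one.
channels = [Channel(b"\x01", 40_000, "video 2160p"),
            Channel(b"\x02", 8_000, "video 1080p"),
            Channel(b"\x03", 300, "timed metadata")]
limits = ClientLimits(max_aggregate_rate_kbps=25_000, max_joined_channels=4)
print([c.description for c in pick_channels(channels, limits)])
# -> ['video 1080p', 'timed metadata']
```

A real sender would of course also avoid mixing mutually exclusive channels (e.g. two qualities of the same stream); the sketch only shows the rate-budget check.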
### Multicast channels In QUIC, clients and servers assign (multiple) connection IDs to each connection. These are included in each sent packet. This mechanic is used to enable mobility by making a connection identifiable even if the 5-tuple changes. For example, any packet sent from server to client contains a client selected ID so the client can associate it with a connection. In multicast QUIC channels, we use the connection ID field in packets and replace it with a channel ID that has the same format. This allows clients to associate each packet with a multicast channel, instead of a connection. Connection and channel IDs share the same space. For clients to learn that an ID is a channel ID and not a connection ID, the server sends channel announce frames that contain, among other parameters, channel IDs of available multicast channels. These channels are abstractions that represent a source specific multicast channel along with additional parameters such as the AEAD algorithm used for header and packet protection. The secret for the header algorithm is also included and remains static during the lifetime of a channel while the secret for packet protection is sent separately in a KEY frame. Channels exist independent of any single unicast connection and may be joined by many clients. Their lifetime can range from a few seconds to days or even weeks. Channels share the stream ID space with each other and the unicast connection. This allows for the retransmission of lost packets over a different channel or unicast. Contrarily, each channel has its own packet number space to reduce the coordination required between channels. In addition to the ANNOUNCE and KEY frames, the server will also send INTEGRITY frames that include hashes for each packet sent over the multicast channel. These are used to authenticate packets and as the hash algorithms used are secure against collision attacks, this mechanism protects against the injection of packets into the multicast channel by third parties. A server will then at some point send a JOIN frame that will indicate to the client that it is should now join the specified channel. If it decides to do so, it will achieve this by e.g. sending an IGMP or MLD report which triggers the joining of the SSM channel. If the network supports multicast and packets start to arrive, the client will send a STATE frame to the server indicating this. As the entire multicast process is server driven, the server might decide that if a specific timeout passes without receiving a STATE frame, that the clients network does not support multicast. In that case, it can fall back to unicast and send the data packets over the regular QUIC connection [1]. Any packets arriving on a multicast channel get acknowledged by the client over the unicast connection. This is necessary as the channels are strictly unidirectional from server to client. This also means that only specific frames, such as STREAM or DATAGRAM frames, can be sent on them. To reduce load and improve scalability, acknowledgements can be bundled and don't need to happen immediately. Lost packets may be retransmitted. This can occur over both unicast or multicast, depending on what the server deems more efficient. ### Congestion and flow control As there is no regular congestion or flow control mechanism on channels, the client specifies a maximum allowed aggregated rate across all channels. 
Depending on this rate, the server decides on which channels are suitable for the client to join to make sure that all joined channels combined stay under the clients limits. Each channel ANNOUNCE frame also includes the maximum rate of data that is expected to be sent on that channel. If the client thinks that joining a channel would violate its limits, it can refuse the join. A client can also update its allowed maximum aggregated rate, along with other parameters, during the lifetime of a connection by sending a LIMITS frame. Both server or client may decide that the client should leave a joined channel for a variety of reasons. These include, but are not limited to, high persistent loss on the channel, the end of the channels lifetime, or excessive amounts of spurious traffic (that may be injected by an attacker). #### 4.3.1 Flow control The usual mechanisms used for flow control in QUIC are too strict for use in multicast channels without modification. The multicast extension gives the client a new responsibility to be able to robustly handle multicast packets that would exceed its MAX_DATA without aborting the connection, either by increasing its MAX_DATA as needed to keep up with received multicast packets or by dropping the packet and leaving the channel (resulting in unicast fallback), for clients that cannot do so. As there are potentially many clients joined on a single channel, the server can not adjust its sending of data by limits imposed by any individual one. Additionally, the new transport parameter is used by clients to set their limits and change them by sending new Limit frames. The server has to make sure that the client stays within those limits by having it either join or leave channels appropriately. #### 4.3.2 Congestion control The server is aware of any multicast packet loss experienced by the client in the form of missing MC_ACK frames. The server should make sure that clients leave channels that experience heavy sustained loss. If several clients that are subscribed to the same channels experience loss of the same packets, this might indicate an underlying issue further upstream. Contrarily, if a client experiences loss on several channels at once it might indicate an issue on its own end. As such, it should react by lowering its max rate parameter. ### New frames Our extension introduces a total of 9 new frames. Some of these behave similar to already existing frames, such as the MC_ACK frame that has the same structure as a regular ACK frame with an added field for the channel ID to identify for which channel a packet gets acknowledged. We will now give explanations for the need of some of the new frames. #### 4.4.1 Mc_announce As previously mentioned, servers use Announce frames to send clients information that are required to later join multicast channels. Servers should send these frames for any channel the client could potentially be asked to join in the future. The reason for this is that the IDs used for channels share their space with the connection IDs used by the client. As such there is a potential for a collision that should be cleared up by the client before it is asked to join the channel by the server. Another advantage of decoupling the announce from the join is that it allows for less overhead in cases when a client has to temporarily leave a channel, i.e. because it suffered a degradation of connection quality. #### 4.4.2 Mc_join, leave, retire and state These 4 frames are used to handle state changes for announced channels. 
Join, leave and retire are sent by the server to instruct the client to switch the server into the specified state, retire indicating that the client should drop any stored state associated with that channel. State frames are sent by the client to the server to indicate that the state change was successful. Figure 2 shows the transition diagram for a QUIC channel from a clients point of view. #### 4.4.3 Mc_Integrity integrity frames are essentially a list of hashes for packets that are received via multicast. Each frame also includes the packet number for the packet the hash list starts from. The first integrity frame can act as the root of a Merkle tree. As such, it has to be transmitted using the unicast connection, as only that is initially secured by TLS against injection of packets by third parties. However, to reduce the unicast overhead even further, any subsequent frames can be included in packets sent over multicast as their integrity has already been verified by previous frames, either by ones included in earlier packets sent via multicast of the original unicast frame. The exact calculation of the hashes is of course dependent on the chosen hash algorithm. As mentioned, to reduce the size of the hashes, it could be considered to use algorithms that are less secure against collision attacks, especially for integrity frames that are being sent over unicast. #### 4.4.4 Mc_key The MC_KEY frame is used to send a secret. The secret is then used to derive the IV and key in the same way the TLS traffic key is calculated. While instead of using a KDF a key could be transmitted directly, as the secret is only used to derive one key and not multiple ones, there are several reasons for using this approach. For one, the secret itself can be smaller than the key as the KDF extends it. This results in reduced overhead as the KEY frames can be smaller. Additionally, many of the built in decryption functions in various QUIC (software and hardware) implementations are based on secrets and KDFs instead of plain keys. This means that there is less adaptation required when implementing multicast support and optimizations can be more easily carried over. The server is periodically sending out secrets from which new keys will be derived. The reason this is done this way instead of the usual QUIC key rotation utilizing the key phase is to prevent the sharing of secrets to unauthorized subscribers of the multicast channel. If the secret remained constant for the entire lifetime of a channel, a single bad acting client could leak it and compromise the channel going forward. For clients that are newly joining a channel, the key has to be distributed via unicast. However, similar to integrity frames, subsequent key updates could also be done via the multicast channel itself to further reduce unicast overhead. To provide forward secrecy, key updates are occasionally required to be done over unicast to prevent any clients that are no longer connected via unicast from accessing the content of the channel. To allow the client to know which secret is relevant for which packet, each Key frame also includes a packet number, indicating starting from which packet the key has to be used. A key remains active until a new key with a higher packet number is received. 
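The client-side handling of MC_KEY and MC_INTEGRITY frames can be sketched as follows. This is an illustration only: the HKDF labels, key and IV lengths, the choice of SHA-256, and the method names are assumptions and not the actual MCQUIC key schedule or frame encodings; the real derivation follows the TLS traffic key calculation for the negotiated cipher suite.

```python
import hashlib, hmac

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF-Expand with HMAC-SHA256 (sufficient for this sketch)."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

class ChannelReceiver:
    def __init__(self):
        self.secrets = {}   # starting packet number -> secret from an MC_KEY frame
        self.hashes = {}    # packet number -> expected digest from MC_INTEGRITY frames

    def on_mc_key(self, from_packet_number: int, secret: bytes):
        self.secrets[from_packet_number] = secret

    def on_mc_integrity(self, first_packet_number: int, digests: list[bytes]):
        for i, digest in enumerate(digests):
            self.hashes[first_packet_number + i] = digest

    def keys_for(self, packet_number: int) -> tuple[bytes, bytes]:
        """Pick the newest secret whose starting packet number is <= this packet,
        then expand it into an AEAD key and IV (labels here are illustrative only).
        Assumes at least one MC_KEY frame has already been received."""
        start = max(pn for pn in self.secrets if pn <= packet_number)
        secret = self.secrets[start]
        return (hkdf_expand(secret, b"mcquic key", 16),
                hkdf_expand(secret, b"mcquic iv", 12))

    def accept(self, packet_number: int, packet_bytes: bytes) -> bool:
        """Only hand a multicast packet to normal QUIC processing if its hash matches
        the value learned over the authenticated path."""
        expected = self.hashes.get(packet_number)
        if expected is None:
            return False  # no integrity information yet; buffer or drop the packet
        return hmac.compare_digest(hashlib.sha256(packet_bytes).digest(), expected)
```

The important property shown here is that a packet is never trusted on the basis of its source address alone: it is accepted only if its digest matches an entry that is itself anchored, directly or transitively, in the TLS-authenticated unicast connection.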
As the packet number is part of the protected portion of the QUIC header, the header protection key has to remain the same for the lifetime of the channel as clients would otherwise not be able to decrypt the header and thus not know when to apply a new key. Figure 2: Transition state diagram for a channel from a client's perspective. ## 5 A live streaming use case One of the main use cases we envision for MCQUIC is the streaming of live content. One possible deployment would utilize Twitch's WARP [14] protocol that can be used to stream video over QUIC. In it, each video segment is sent on a separate QUIC stream. Streams rise in priority to make sure newer segments get transferred first in case of congestion. We can combine this mechanism with MCQUIC by having several channels for the different offered stream qualities. Clients would be told to join the channel which is most suitable, according to the limits they specified to the server. Since each video segment is its own stream, clients do not have to worry about receiving data on a stream that has already been active before they joined. Thus, they don't have to consider an already existing stream offset when joining a channel. It should be noted that for browser based use cases for MCQUIC, the common request-response semantics no longer apply. As such, for browser deployment, WebTransport [26] push or similar would be required. The given example is just the most basic use case. It is conceivable that in the future mechanisms like layered video could be used to combine several channels to achieve a higher overall quality while still allowing for each channel to be usable on its own. The IETF has only very recently started to focus on media over QUIC and we hope to also be able to contribute to this effort in the future. ## 6 Related work As mentioned in the introduction, there recently has been an influx of work on multicast. One major ongoing effort is the TreeDN [1] project championed by Juniper. It tries to solve a similar problem of delivery of live video content via multicast. However, instead of aiming at a browser based implementation it mainly makes use of AMT and the AMT module of the VLC media player. It provides the ability to stream videos from the MBONE to domestic networks that do not support native multicast reception by going through an AMT relay that is deployed on the MBONE. It also provides a content portal, similar to YouTube or Twitch, on which content can be found and streamed. Finally, it includes a service that can automatically translate a unicast stream to multicast and ingest it into the MBONE. The second major ongoing work relating to multicast is the mentioned BIER [23], which tries to change the way multicast routing is done. Instead of the traditional approach of constructing multicast trees, it works by separating the Internet into multiple BIER domains. These could take the form of a company network, an autonomous system or even an entire ISP's network. If a multicast packet enters a BIER domain through an ingest router, it gets tagged with IDs for all routers, either inside the domain or on its edge, that it has to be delivered to. This means that routers no longer have to keep state for multicast trees as all the necessary information for routing is included in the BIER header of each individual packet. BIER has been implemented and evaluated in P4 [15]. 
The downside to this approach is that unlike PIM, it is not a general purpose protocol but instead requires a different mechanism for the tagging of packets at the ingress router depending on the use case, e.g. there are BIER extensions for ISIS [15], OSPF [24], multicast VPN [18] etc. In general, the advantages of using multicast for content delivery have been well known. Research in this area ranges from multicast in mobile networks [25] and overlay networks used to connect multicast islands [10] to multimedia conferencing with multicast [10]. ## 7 Conclusion In this paper we introduced a new extension to the QUIC transport protocol that enables the delivery of (mass media) content via multicast. It overcomes many of the obstacles traditional multicast implementations and deployments face by utilizing a base unicast QUIC connection as a security anchor for multicast channels. It provides the well established scalability benefits of multicast while only adding a small amount of overhead. Integrity of delivered packets is guaranteed as well as fallback to unicast delivery in networks that don't support multicast. There are many opportunities for future research related to MCQUIC. For one, it is the first time multicast senders can get telemetry information from all their receivers. This information can be used to optimize the management of channels as well as when to switch between unicast and multicast. Finding good heuristics or models to determine these configurations is part of possible future work. On the other hand, enabling multicast delivery for media content allows us to reevaluate other technologies that have so far been deemed unsuitable or inefficient in unicast scenarios. One example for this is scalable video encoding, which sees no use currently but might be a good way to combine multiple multicast channels into one media stream.
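As a purely illustrative example of the kind of heuristic the conclusion refers to (all thresholds, names, and interfaces below are assumptions, not part of the extension), a sender could track per-client MC_ACK feedback and fall a client back to unicast when acknowledgements stop arriving or the observed multicast delivery ratio degrades:

```python
import time

class FallbackPolicy:
    """Illustrative heuristic: fall a client back to unicast when it stops
    acknowledging multicast packets or its multicast delivery ratio over the
    observed traffic drops below a threshold."""
    def __init__(self, min_delivery_ratio=0.9, ack_timeout_s=2.0):
        self.min_delivery_ratio = min_delivery_ratio
        self.ack_timeout_s = ack_timeout_s
        self.sent = 0                      # packets sent on channels this client joined
        self.acked = 0                     # packets the client reported via MC_ACK
        self.last_ack_time = time.monotonic()

    def on_multicast_sent(self, count=1):
        self.sent += count

    def on_mc_ack(self, count=1):
        self.acked += count
        self.last_ack_time = time.monotonic()

    def should_use_unicast(self) -> bool:
        if time.monotonic() - self.last_ack_time > self.ack_timeout_s:
            return True                    # client has likely lost multicast reception
        if self.sent == 0:
            return False
        return (self.acked / self.sent) < self.min_delivery_ratio
```

More refined policies could, for example, distinguish correlated loss across many subscribers (an upstream problem) from loss seen by a single client (a local problem), as discussed in the congestion control section.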
2309.10783
Language as the Medium: Multimodal Video Classification through text only
Despite an exciting new wave of multimodal machine learning models, current approaches still struggle to interpret the complex contextual relationships between the different modalities present in videos. Going beyond existing methods that emphasize simple activities or objects, we propose a new model-agnostic approach for generating detailed textual descriptions that captures multimodal video information. Our method leverages the extensive knowledge learnt by large language models, such as GPT-3.5 or Llama2, to reason about textual descriptions of the visual and aural modalities, obtained from BLIP-2, Whisper and ImageBind. Without needing additional finetuning of video-text models or datasets, we demonstrate that available LLMs have the ability to use these multimodal textual descriptions as proxies for ``sight'' or ``hearing'' and perform zero-shot multimodal classification of videos in-context. Our evaluations on popular action recognition benchmarks, such as UCF-101 or Kinetics, show these context-rich descriptions can be successfully used in video understanding tasks. This method points towards a promising new research direction in multimodal classification, demonstrating how an interplay between textual, visual and auditory machine learning models can enable more holistic video understanding.
Laura Hanu, Anita L. Verő, James Thewlis
2023-09-19T17:32:21Z
http://arxiv.org/abs/2309.10783v1
# Language as the Medium: Multimodal Video Classification through text only ###### Abstract Despite an exciting new wave of multimodal machine learning models, current approaches still struggle to interpret the complex contextual relationships between the different modalities present in videos. Going beyond existing methods that emphasize simple activities or objects, we propose a new model-agnostic approach for generating detailed textual descriptions that captures multimodal video information. Our method leverages the extensive knowledge learnt by large language models, such as GPT-3.5 or Llama2, to reason about textual descriptions of the visual and aural modalities, obtained from BLIP-2, Whisper and ImageBind. Without needing additional finetuning of video-text models or datasets, we demonstrate that available LLMs have the ability to use these multimodal textual descriptions as proxies for "sight" or "hearing" and perform zero-shot multimodal classification of videos in-context. Our evaluations on popular action recognition benchmarks, such as UCF-101 or Kinetics, show these context-rich descriptions can be successfully used in video understanding tasks. This method points towards a promising new research direction in multimodal classification, demonstrating how an interplay between textual, visual and auditory machine learning models can enable more holistic video understanding. ## 1 Introduction Imagine it is the year 2008 and you have just watched the latest episode of Breaking Bad - a highly multimodal experience featuring moving pictures, speech and sound effects. Suddenly you receive a text message on your mobile phone - it is your colleague, who is urgently requesting a description of the episode so that they may participate in water cooler discussions without arousing suspicion. You must now convey to your colleague, using only text messages, a description of the episode that will stand up to scrutiny. Although reducing the vast amount of pixels and audio samples you have just consumed down to a few words seems like a daunting task, you recognize that by combining succinct descriptions of key images, speech and sounds with your colleague's inherent ability to fill in gaps using contextual reasoning you will be able to provide a comprehensive account of the episode without the need for the direct experience. In this work, we explore to what extent Large Language Models (LLMs) are able to perform a similar task, namely classifying the action in videos when receiving only textual clues about the video contents from other models. The last few years have seen remarkable progress in large language models for text, which have shown unprecedented capabilities and performance on downstream tasks [7, 17, 14, 18]. This has led to methods trying to bridge the gap between vision and language. Contrastive methods such as CLIP train joint vision-language representations [15]. Perceiver IO [12] offers a generic scheme to encode arbitrary modalities. Kosmos [11] is a large multimodal model trained from scratch on web-scale image and text data. GPT-4 [14] accepts image input, but this feature is not currently publicly available. Numerous works adapt pretrained LLMs in order to understand information from different modalities. Flamingo [5] injects representations of images and short videos into the language model backbone using Gated X-attn layers. BLIP-2 [13] introduces a Q-Former which provides "soft visual prompts" to condition an LLM on visual information.
Mini-GPT4 [19] leverages the Q-Former to provide a soft prompt to a Llama-based model. In contrast to these techniques, we demonstrate that using only text as the medium can convey multimodal information to downstream LLMs. This has several key advantages. Firstly, this approach ensures a straightforward "plug and play" interface for chaining models without extra adaptation. This is particularly relevant with the rise of API-based language models that prohibit modifications. Secondly, inter-model communication becomes transparent and interpretable in natural language. Crucially, this method simplifies tasks like multimodal video classification into two phases: a "perception" phase using unimodal or multimodal models as surrogates for various senses, followed by a "reasoning" phase where a foundation model consolidates diverse inputs to create a comprehensive video narrative. More recent methods such as LENS [6] or Video ChatCaptioner [8] explore similar textual interactions between models. While LENS only explores the ability of LLMs to reason over visual question answering tasks given visual clues about images, Video ChatCaptioner proposes chaining together BLIP-2 and ChatGPT in order to have conversations about images. Our method goes beyond just question answering tasks, demonstrating that both visual and auditory clues can be used by LLMs for video classification. In summary, our contributions are: 1) We introduce a new multimodal classification approach consisting of two phases: a "perception" phase where models act as sensory proxies and a "reasoning" phase that consolidates multimodal textual inputs into a coherent narrative. 2) We demonstrate the efficacy of text as the primary medium of interpreting multimodal data. 3) For the first time, we showcase that textual representations of visual and auditory cues alone can effectively classify actions within videos. ## 2 Method **Perception models.** To extract visual captions from video frames, we use the BLIP-2 [13] model. We process only 5 equidistant frames per video to ensure a diverse sampling of the video content. We use Whisper [16] to obtain audio transcripts, specifically the Faster Whisper version [1] which has been optimised for fast inference. We use a temperature of 0, a beam size of 5 and the VAD filter to exclude the parts of the video that do not have any speech. In order to generate audio tags, we leverage ImageBind [9] to get audio embeddings and compute the similarity with the textual embeddings of the AudioSet labels. We then only select the labels that have a similarity over a certain threshold, which can be obtained by qualitatively checking a few examples. **Reasoning models.** For our reasoning module, we test out 3 different state-of-the-art large language models. We use the GPT completion API, specifically the GPT3.5-turbo version [2]. Additionally, we use the newly launched function calling feature which allows the user to specify a JSON schema for the output. The second LLM we evaluate is Claude-instant-1 [3], which has reported similar performance and capabilities to GPT3.5-turbo. For Llama2 we use the Llama-2-13b-chat variant [18], which has 13 billion parameters and is specialised for conversation. We use a temperature of 0 or near 0 for all reasoning models to ensure more consistent outputs that are able to better adhere to the instructions given.
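As a concrete illustration of the audio-tagging step above, the following is a minimal sketch (not the actual pipeline code): cosine similarities between one audio-clip embedding and the text embeddings of candidate AudioSet labels are thresholded to produce tags. The random vectors and the threshold value of 0.2 are placeholder assumptions standing in for ImageBind embeddings and for the qualitatively chosen threshold.

```python
import numpy as np

def cosine_sim(a, B):
    """Cosine similarity between one vector `a` and every row of matrix `B`."""
    a = a / np.linalg.norm(a)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return B @ a

def audio_tags(audio_emb, label_embs, labels, threshold=0.2):
    """Keep every label whose text embedding is similar enough to the audio clip."""
    sims = cosine_sim(audio_emb, label_embs)
    return [(lab, float(s)) for lab, s in zip(labels, sims) if s >= threshold]

# Placeholder embeddings: in the pipeline described above these would come from
# ImageBind (audio encoder for the clip, text encoder for the AudioSet label names).
rng = np.random.default_rng(0)
labels = ["Speech", "Acoustic guitar", "Dog", "Siren"]
label_embs = rng.normal(size=(len(labels), 1024))
audio_emb = label_embs[1] + 0.3 * rng.normal(size=1024)  # a clip that "sounds like" a guitar

print(audio_tags(audio_emb, label_embs, labels))  # -> [("Acoustic guitar", ~0.96)]
```

With real ImageBind embeddings the same thresholding logic applies unchanged; only the placeholder vectors would be swapped out.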
The prompts we use for classification follow this simple template with slight variations among the different LLMs to accommodate specific prompt guidelines: _Given this {multimodal clues} and these action recognition labels: {labels} Please return the 5 labels that apply the most to the video in a json format, from the most likely to the least likely._ **Structured output.** LLMs usually generate free-flowing natural language outputs; however, for the task of classification we want the model to provide us with 5 ranked guesses from a set of pre-defined class names. To accomplish this with GPT we use the function calling API, providing the model with a JSON Schema of the function to call, where the schema contains an enum of the possible class names. Figure 1: Our method combines a "perception" module, which uses visual and auditory models to get multimodal textual descriptors as sensory proxies for "sight" and "hearing", and a "reasoning" module that processes these textual inputs to form a coherent narrative and identify the likeliest content in the video, completed by justifications. For Claude, we provide the class names in the prompt and ask for the results to be returned as JSON, which, in the majority of cases, results in a JSON object with a "labels" key containing a list of most likely labels, or an object whose keys are the classes and values are the rank. For Llama2, we provide class names in the system prompt, and observe that predictions are usually included as a numbered list in the output, hence we simply parse lines beginning with a number. To compare with ground truth, we normalise to remove spaces and convert to lowercase. For all models, occasionally the output cannot be parsed (such as hallucinated class names or extra characters), and in this case we consider the prediction to be incorrect. ## 3 Evaluation Datasets **UCF-101.** The UCF-101 test set comprises 13,320 short video clips from YouTube spanning 101 action categories, providing a diverse set of everyday human actions, ranging from playing instruments to sports activities. **Kinetics400.** The Kinetics400 test set contains 10-second YouTube video clips and 400 human action classes. In order to circumvent API costs, since the test set contains 38,685 video clips, we construct a smaller representative subset of 2000 videos, sampling 5 videos per category. ## 4 Experiments First, we run experiments to see the role of each modality in classifying videos on the UCF-101 test set and the 2k subset of Kinetics400. As Table 1 shows, the language model is able to benefit from additional audio information. In Table 2 we compare how well the 3 different large language models used are able to interpret the visual and auditory information given. We find that both GPT3.5-turbo and Claude-instant-1 outperform Llama2, with Claude-instant-1 obtaining on average the highest accuracy. In Figure 2 we test the effect of including more or fewer frame captions. Interestingly, while both GPT3.5 and Claude-1 benefit from "seeing" more captions, Llama2's performance is negatively affected. We hypothesise this is due to the model becoming overwhelmed by the redundant information, making it more likely to pick a word from the captions rather than the label list. ## 6 Conclusion In this work, we have introduced a new framework for multimodal video classification that leverages text as the primary medium for combining signals across modalities.
We demonstrate for the first time that chaining together perception models for vision, speech and audio with large language models can enable zero-shot video classification using only textual representations of multimodal signals. Our work highlights the potential of using natural language as a flexible interface for integrating signals across modalities.
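For concreteness, here is a minimal, self-contained sketch of the structured-output parsing and normalisation rules described in Section 2 (JSON with a "labels" key or a class-to-rank object, a numbered-list fallback, spaces removed and text lowercased before comparison, unparseable output scored as incorrect). It is an illustration only; the actual prompts and API calls are omitted.

```python
import json
import re

def normalise(name):
    """Comparison rule described above: drop spaces, convert to lowercase."""
    return name.replace(" ", "").lower()

def parse_predictions(raw):
    """Best-effort extraction of a ranked label list from an LLM reply."""
    try:
        obj = json.loads(raw)
        if isinstance(obj, dict):
            if isinstance(obj.get("labels"), list):
                return [str(x) for x in obj["labels"]]
            # Alternative shape: {class_name: rank, ...} -> sort by rank.
            return [k for k, _ in sorted(obj.items(), key=lambda kv: kv[1])]
        if isinstance(obj, list):
            return [str(x) for x in obj]
    except (json.JSONDecodeError, TypeError):
        pass
    # Fallback for conversational models: lines such as "1. playing guitar".
    return [m.group(1).strip() for line in raw.splitlines()
            if (m := re.match(r"\s*\d+[.)]\s*(.+)", line))]

def top1_correct(raw, ground_truth, class_names):
    """Only predictions that map to a known class count; empty output is scored as incorrect."""
    known = {normalise(c) for c in class_names}
    preds = [p for p in parse_predictions(raw) if normalise(p) in known]
    return bool(preds) and normalise(preds[0]) == normalise(ground_truth)

print(top1_correct('{"labels": ["Playing Guitar", "Drumming"]}',
                   "playing guitar", {"Playing Guitar", "Drumming", "Surfing"}))  # True
```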
2307.16886
Irregularity scales for Gaussian processes: Hausdorff dimensions and hitting probabilities
Let $X$ be a $d$-dimensional Gaussian process in $[0,1]$, where the components are independent copies of a scalar Gaussian process $X_0$ on $[0,1]$ with a given general variance function $\gamma^2(r)=\operatorname{Var}\left(X_0(r)\right)$ and a canonical metric $\delta(t,s):=(\mathbb{E}\left(X_0(t)-X_0(s)\right)^2)^{1/2}$ which is commensurate with $\gamma(t-s)$. Under a weak regularity condition on $\gamma$, referred to below as $\mathbf{(C_{0+})}$, which allows $\gamma$ to be far from H\"older-continuous, we prove that for any Borel set $E\subset [0,1]$, the Hausdorff dimension of the image $X(E)$ and of the graph $Gr_E(X)$ are constant almost surely. Furthermore, we show that these constants can be explicitly expressed in terms of $\dim_{\delta}(E)$ and $d$. However, when $\mathbf{(C_{0+})}$ is not satisfied, the classical methods may yield different upper and lower bounds for the underlying Hausdorff dimensions. This case is illustrated via a class of highly irregular processes known as logBm. Even in such cases, we employ a new method to establish that the Hausdorff dimensions of $X(E)$ and $Gr_E(X)$ are almost surely constant. The method uses the Karhunen-Lo\`eve expansion of $X$ to prove that these Hausdorff dimensions are measurable with respect to the expansion's tail sigma-field. Under similarly mild conditions on $\gamma$, we derive upper and lower bounds on the probability that the process $X$ can reach the Borel set $F$ in $\mathbb{R}^d$ from the Borel set $E$ in $[0,1]$. These bounds are obtained by considering the Hausdorff measure and the Bessel-Riesz capacity of $E\times F$ in an appropriate metric $\rho_{\delta}$ on the product space, relative to appropriate orders. Moreover, we demonstrate that the dimension $d$ plays a critical role in determining whether $X\lvert_E$ hits $F$ or not.
Youssef Hakiki, Frederi Viens
2023-07-31T17:52:15Z
http://arxiv.org/abs/2307.16886v1
# Irregularity scales for Gaussian processes: Hausdorff dimensions and hitting probabilities ###### Abstract Let \(X\) be a \(d\)-dimensional Gaussian process in \([0,1]\), where the components are independent copies of a scalar Gaussian process \(X_{0}\) on \([0,1]\) with a given general variance function \(\gamma^{2}(r)=\mathrm{Var}\left(X_{0}(r)\right)\) and a canonical metric \(\delta(t,s):=\left(\mathbb{E}\left(X_{0}(t)-X_{0}(s)\right)^{2}\right)^{1/2}\) which is commensurate with \(\gamma(t-s)\). Under a weak regularity condition on \(\gamma\), referred to below as \(\left(\mathbf{C_{0+}}\right)\), which allows \(\gamma\) to be far from Holder-continuous, we prove that for any Borel set \(E\subset[0,1]\), the Hausdorff dimension of the image \(X(E)\) and of the graph \(Gr_{E}(X)\) are constant almost surely. Furthermore, we show that these constants can be explicitly expressed in terms of \(\dim_{\delta}(E)\) and \(d\). However, when \(\left(\mathbf{C_{0+}}\right)\) is not satisfied, the classical methods may yield different upper and lower bounds for the underlying Hausdorff dimensions. This case is illustrated via a class of highly irregular processes known as logBm. Even in such cases, we employ a new method to establish that the Hausdorff dimensions of \(X(E)\) and \(Gr_{E}(X)\) are almost surely constant. The method uses the Karhunen-Loeve expansion of \(X\) to prove that these Hausdorff dimensions are measurable with respect to the expansion's tail sigma-field. Under similarly mild conditions on \(\gamma\), we derive upper and lower bounds on the probability that the process \(X\) can reach the Borel set \(F\) in \(\mathbb{R}^{d}\) from the Borel set \(E\) in \([0,1]\). These bounds are obtained by considering the Hausdorff measure and the Bessel-Riesz capacity of \(E\times F\) in an appropriate metric \(\rho_{\delta}\) on the product space, relative to appropriate orders. Moreover, we demonstrate that the dimension \(d\) plays a critical role in determining whether \(X|_{E}\) hits \(F\) or not. For this purpose, we introduce a further condition, denoted as \(\left(C_{\ell}\right)\), which is satisfied by all relevant examples from \(\left(C_{0+}\right)\). When \(E\) is an Ahlfors-David-regular compact set in the metric \(\delta\), we obtain precise upper and lower bounds on the hitting probability of \(F\) by \(X\) from \(E\) in terms of Hausdorff measure and capacity in the Euclidean metric, utilizing specific kernels. These bounds facilitate the proof of an undecidability property, by which there are examples of sets \(E\times F\) which have the same Hausdorff dimensions relative to \(\rho_{\delta}\) but for which one target set \(F\) has a positive hitting probability while the other does not. **Keywords:** Gaussian process, Karhunen-Loeve expansion, hitting probabilities, Hausdorff dimension, capacity. **Mathematics Subject Classification** 60J45, 60G17, 28A78, 60G15 ## 1 Introduction This paper studies some fractal properties for Gaussian processes with a general covariance structure. Properties of interest include the Hausdorff dimension of the image sets and the graph sets, and corresponding hitting probabilities. One of our motivations is to understand better the high path irregularity exhibited by certain Gaussian processes \(X\) started from \(0\).
For example the family of processes \(X=B^{\gamma}\) defined in [17], which for any given function \(\gamma\) on \(\mathbb{R}_{+}\) such that \(\gamma^{2}\) is of class \(\mathcal{C}^{2}\) on \(\mathbb{R}_{+}\), with \(\lim_{0}\gamma=0\), and \(\gamma^{2}\) is increasing and concave near the origin, is defined by the following Volterra representation \[B^{\gamma}(t):=\int_{0}^{t}\sqrt{\left(\frac{d\gamma^{2}}{dt}\right)(t-s)}dW( s), \tag{1.1}\] where \(W\) is a standard Brownian motion. In the particular case \(\gamma(r):=\log^{-\beta}(1/r)\), where \(\beta>1/2\), the process \(B^{\gamma}\) is an element of the family of Gaussian processes called logarithmic Brownian motions (logBm). The condition \(\beta>1/2\) ensures that \(B^{\gamma}\) has continuous paths as guaranteed by the so-called Dudley-Fernique theorem (see for instance [1]). This one-parameter family of logBm processes spans a wide range of highly irregular continuous Gaussian processes, which are not Holder-continuous. For general \(\gamma\), the Dudley-Fernique theorem can be used generically to show that \(B^{\gamma}\) admits the function \(h:r\mapsto\gamma(r)\log^{1/2}(1/r)\) as a uniform modulus of continuity almost surely, which is an indication of the non-Holder-continuity of logBm. That property can in turn be established "by hand". Indications of how to do so are in Section 2, a full treatment being left to the interested reader. In any case, the logBm scale is instructive since it extends to the edge of continuous processes and beyond in a one-parameter family. The broader model class defined via the Volterra representation (1.1) is interesting and convenient for several reasons. It involves a simple kernel which makes it amenable to calculations. It produces a process \(X=B^{\gamma}\) which, while not having stationary increments, has increments which are nonetheless roughly stationary. Proposition 1 in the original reference [17] explains how the canonical metric \(\delta(s,t)\) of \(X\), for \(s,t\in\mathbb{R}_{+}\) is commensurate with \(\gamma(t-s)\), for processes which are more irregular than the Wiener process, i.e. as soon as \(r=o(\gamma^{2}(r))\). The variance of the process \(X=B^{\gamma}\) at time \(t\) is precisely \(\gamma^{2}(t)\), which implies that the process starts at \(0\), and that the scale of the process behaves similarly to the popular class of self-similar models, like fractional Brownian motion and related Gaussian processes, for which the variance equals \(t^{2H}\) for self-similarity parameter \(H\). Note for instance that the process \(X=B^{\gamma}\) with \(\gamma(r)=r^{H}\) yields a self-similar process known as the Riemann-Liouville fractional Brownian motion. It is \(H\)-self-similar, does not have stationary increments, but has increments whose variance is commensurate with the variance \(|t-s|^{2H}\) of standard fractional Brownian motion (fBm). Aside from the fBm and logBm scales, many other scales of continuity can be obtained from \(B^{\gamma}\), some of which yield interesting properties when examined from the lens of Hausdorff dimensions, as we will see. For instance, the choice \(\gamma(x)=\exp(-\log^{q}(1/x))\), introduced at the end of Section 2.1, provides a process which is less irregular than logBm, but is more irregular than any Holder-continuous process, such as fBm and Riemann-Liouville fBm. 
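Indeed, the fact that the variance of \(B^{\gamma}(t)\) is exactly \(\gamma^{2}(t)\) is a one-line consequence of the Itô isometry applied to the Volterra representation (1.1): \[\mathbb{E}\left(B^{\gamma}(t)\right)^{2}=\int_{0}^{t}\left(\frac{d\gamma^{2}}{dt}\right)(t-s)\,ds=\int_{0}^{t}\left(\gamma^{2}\right)^{\prime}(u)\,du=\gamma^{2}(t)-\gamma^{2}(0)=\gamma^{2}(t),\] using \(\lim_{0}\gamma=0\); the same identity holds for any \(\gamma\) admissible in (1.1).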
Again, this process does not have stationary increments, but it does satisfy the commensurability condition between \(\delta\) and \(\gamma\) (see Condition \((\mathbf{\Gamma})\), i.e. the relations (2.1) at the start of Section 2), and thus its increments can be deemed roughly stationary. Since this regularity scale defines processes which are intermediate between the extremely irregular logBm, and the Holder-continuous processes, these processes provide a good test of our methods' applicability. Interestingly, we will see that those processes share some desirable hitting probability features with Holder-regular processes, which the logBm processes are too irregular to possess. Most of the results in the literature about the fractal properties for Gaussian processes do not apply to the case of logBm, or to the processes which are more regular than logBm but not Holder-continuous. For the question of hitting probabilities, see for example [2, 19]; for the Hausdorff dimension of the image and the graph sets, see [10]. This inapplicability stems from those references' assumptions which imply some form of Holder continuity. To wit, the conditions in those references imply that, for some \(\alpha\in(0,1)\), we have \(\gamma\left(r\right)\lesssim r^{\alpha}\) near the origin. To make matters more delicate yet, there are many regularity scales between the Holder continuity scale and the logarithmic scale of logBm mentioned above, the aforementioned case of the choice \(\gamma(x)=\exp(-\log^{q}(1/x))\) being only one such instance. This motivates us to study the fractal properties for Gaussian processes \(X\) with more general covariance structure, under flexible conditions which would encompass the entire class of a.s. continuous Volterra processes \(B^{\gamma}\) in (1.1). We thus investigate these problems under some general conditions on the standard deviation function \(\gamma\) only, with no direct reference to any regularity scale, and no assumption that our processes be given in a particular form such as the Volterra representation (1.1), so that our results may be satisfied by large classes of processes within and/or beyond the Holder scale. We concentrate our efforts on handling the broadest possible class of processes which satisfy the commensurability condition \(\delta(s,t)\asymp\gamma(|t-s|)\), namely Condition (\(\mathbf{\Gamma}\)) from relations (2.1). By concentrating only on Condition (\(\mathbf{\Gamma}\)), i.e. relations (2.1), we are able to relax the restriction of stationarity of increments (see Proposition 5 in [17]), and to break away from the confines of Holder continuity, as illustrated above by the logBm class and other non-Holder processes. Apart from the paper [21], and the original paper [17] where logBm was introduced, few authors have studied precise regularity results for Gaussian processes beyond the Holder (fractional) scale. See [20] for a study of various regularity classes, some of which interpolate between logBm and fBm, in the context of central limit theorems for Gaussian time series with memory. Recently in [8], logBm was proposed as a model for very rough volatility, making the ideas introduced in [24] more quantitative when one leaves the Holder scale. 
Recently, the logBm was employed to study the \(\mathcal{C}^{\infty}\)-regularization of ODEs by noise, as in [9]; the idea behind using logBm for this purpose is that the local time of logBm is highly regular (it is \(\mathcal{C}^{\infty}\) in its space variable) due to the high irregularity of paths of the underlying process. Another interesting class of Gaussian processes with non-stationary increments, which satisfy relations (2.1), is that of the evolution-sense solutions of the linear stochastic heat equation, such as those studied in [22, 23]. The processes resulting from the models in those papers have complex Holder regularity in space and in time, but stochastic heat equations driven by noises with logBm-type behavior or other non-Holder noises will have evolution-sense solutions which inherit those non-Holder regularities. One has every reason to expect that these examples of processes will still satisfy Condition (\(\mathbf{\Gamma}\)) (relations (2.1)), which can be shown by employing arguments similar to the proof of Proposition 1 in [17]. These details are omitted, since the purpose of this paper is to remain at a scope which encompasses all these regularity scales simultaneously by requiring only the commensurability Condition (\(\mathbf{\Gamma}\)), and interpreting our results via \(\gamma\) only, not in reference to any specific scale. To be clear, the Volterra-type processes \(B^{\gamma}\) in (1.1) are convenient for generating examples of processes which satisfy Condition (\(\mathbf{\Gamma}\)) and other general technical conditions. For instance, that logBm satisfies Condition (\(\mathbf{\Gamma}\)) with \(l=2\) was established in Proposition 1 in [17]. We will use such examples as illustrations, while our theorems and results are stated and established under more general conditions such as Condition \((\mathbf{\Gamma})\). We now provide a summary of the results which we establish in this paper, and how they are articulated. In Section 2, we provide some general hypotheses on \(\gamma\), which are important to ensure some desirable properties for the process \(X\). Some preliminaries on Hausdorff measures, Bessel-Riesz capacities and Hausdorff dimension on \(\mathbb{R}_{+}\) and \(\mathbb{R}_{+}\times\mathbb{R}^{d}\), in a general context, are also given here. All these preliminaries allow us to provide optimal upper and lower bounds for the Hausdorff dimension of the image \(X(E)\) and the graph \(Gr_{E}(X)\), where \(E\subset[0,1]\), and for the hitting probability estimates, in Sections 3 and 4 respectively. The choice to present results relative to subsets of \([0,1]\) in the time variable, as opposed to another time interval, is arbitrary, and used for convenience. Section 2 is also where we recall and establish important results on the process \(X\) that imply lower bounds for hitting probabilities, and upper bounds for hitting probabilities and Hausdorff dimensions of images and of graphs. Those results are respectively Lemma 2.4, which proves a so-called 2-point local non-determinism property, and Lemma 2.5, which is a type of small-ball probability estimate (probability of reaching a small ball in space over a small ball in time of similar diameter). These are proved under the commensurability Condition \((\mathbf{\Gamma})\), i.e. relations (2.1). Moreover, we interpret these results under various general conditions on \(\gamma\) which are not hard to check and are satisfied by large classes of regularity scales of interest to us and to others.
With these tools in hand, and with the additional definitions and basic results recalled in Section 2 about Hausdorff dimensions relative to general metrics, we are able to provide the exact value of the Hausdorff dimension of the image \(X(E)\) and the graph \(Gr_{E}(X)\), where \(E\subset[0,1]\), in Section 3, under mild regularity conditions which extend far beyond the Holder scale. Similarly, these tools help us provide some optimal lower and upper bounds for hitting probabilities in Section 4. The choice to present results relative to subsets of \([0,1]\) in the time variable, as opposed to another time interval, is arbitrary, and is used for convenience. We finish this introduction with a detailed narrative description of the main results in Sections 3 and 4 and their ramifications. Recall that in [10], Hawkes resolved the problem of computing the Hausdorff dimension of the image and of the graph of a Gaussian process \(X\) with stationary increments, i.e. assuming \(\delta(s,t)=\gamma(|t-s|)\), under the strong condition \(\mathrm{ind}_{*}(\gamma)>0\), where \(\mathrm{ind}_{*}(\cdot)\) is the lower index, which will be defined in (2.16). A positive lower index for \(\gamma\) implies \(\alpha\)-Holder-continuity of the paths of \(X\) for all \(\alpha\in(0,\mathrm{ind}_{*}(\gamma))\). In Section 3, we relax those two conditions used by Hawkes. We consider functions \(\gamma\) which satisfy a very mild regularity condition: the general condition labeled as \((\mathbf{C_{0+}})\), by which the inequality (2.25) holds for all \(\varepsilon\in(0,1)\). Assuming this, and using methods from potential theory and covering arguments, we prove in Section 3.1 that for every Borel set \(E\subset[0,1]\), the Hausdorff dimensions of \(X(E)\) and \(Gr_{E}(X)\) are constants almost surely, which are provided explicitly in terms of \(\mathrm{dim}_{\delta}(E)\) and \(d\), where \(\mathrm{dim}_{\delta}(\cdot)\) denotes the Hausdorff dimension associated with the canonical metric \(\delta\), and \(d\) is the dimension of the ambient image space. In this same Section 3.1 we also show in Lemma 3.1 that the condition "\(\mathrm{ind}_{*}(\gamma)>0\)" used by Hawkes implies the regularity condition \((\mathbf{C_{0+}})\); however, we also know from Example 2.2 that condition \((\mathbf{C_{0+}})\) goes significantly further than "\(\mathrm{ind}_{*}(\gamma)>0\)" since it is satisfied by the aforementioned important regularity class where \(\gamma(x)=\exp(-\log^{q}(1/x))\), for which \(\mathrm{ind}_{*}(\gamma)=0\). On the other hand, in some regularity scales, condition \((\mathbf{C_{0+}})\) fails to hold. Without this condition, the method of using potential theory and covering arguments may lead to different upper and lower bounds for the Hausdorff dimension, both for the image and for the graph of \(X\). For instance, in the logBm case, \(({\bf C_{0+}})\) fails because (2.25) holds only for some, though not all, \(\varepsilon\in(0,1)\). Therefore, in Section 3.2, we develop a general method that enables us to prove that the Hausdorff dimension of the image and of the graph are almost surely constant, a result which holds for any continuous Gaussian process \(X\). The idea we introduce is to use the Karhunen-Loeve representation of \(X\) and to prove that, for any Borel set \(E\subset[0,1]\), the Hausdorff dimensions of \(X(E)\) and \(Gr_{E}(X)\) are measurable with respect to certain tail sigma-fields, so we can apply a Kolmogorov zero-one law, showing that these random variables are almost surely constants.
These constants depends on \(E\), and when \(({\bf C_{0+}})\) fails, they are not given explicitly, but for example, in the scale of logBm, the upper and lower bounds which we obtain with the capacity+chaining method are explicit and become nearly optimal towards the upper end of the logBm scale, i.e. when \(\beta\gg 1/2\). To be specific, for instance in the case of the graph's dimension, while Section 3.1 shows using a general argument that Condition \(({\bf C}_{\varepsilon})\) for fixed \(\varepsilon\) implies, for an appropriate metric \(\rho_{\delta}\) defined in (2.37), that \(\dim_{\rho_{\delta}}\left(Gr_{E}(X)\right)\) is bounded below by \(\dim_{\delta}(E)\) and above by \(\dim_{\delta}(E)+\varepsilon\,d\), in Section 3.2, in the specific case where \(\gamma(r)\) is commensurate with \(\log^{-\beta}(1/r)\), a slightly finer analysis implies that the upper bound can be replaced by \(\dim_{\delta}(E)\beta/(\beta-1/2)\). When \(\beta\) is large, i.e. towards the higher regularity range of logBm, this is equivalent to \(\dim_{\delta}(E)(1+1/(2\beta))\). The factor \(\beta/(\beta-1/2)\) is not an improvement over the general result in Section 3.1 on the lower end of the logBm scale, since it explodes when \(\beta\) approaches \(1/2\), but it is an improvement on the general result when the logarithmic-scale Hausdorff dimension \(\dim_{\log}(E)\) is finite (see equation (3.15) and following line for the definition and relevant property of \(\dim_{\log}(\cdot)\)). Indeed, the \(\gamma\) of the logBm scale satisfies Condition \(({\bf C}_{\varepsilon})\) with \(\varepsilon=1/(2\beta)\) and, noting that \(\dim_{\delta}(E)=\beta^{-1}\dim_{\log}(E)\), where \(\dim_{\log}(E)\) is intrinsic to \(E\) (i.e. does not depend on \(\beta\)), thus one only needs to require \(\dim_{\log}(E)<\beta\,d\) to get an improved upper bound. This requirement, and the corresponding improvement on the upper bound, which incidentally is dimension-independent, holds for large \(\beta\) as soon as \(\dim_{\log}(E)\) is finite. In section 4, our investigation focuses on the hitting-probabilities problem, i.e. estimating the probability of the event \(\{X(E)\cap F\neq\varnothing\}\) where \(E\subset[0,1]\) and \(F\subset\mathbb{R}^{d}\) are Borel sets. Assuming that functions \(\gamma\) satisfy Condition \(({\bf C}_{\varepsilon})\) for some fixed \(\varepsilon\in(0,1)\) and a slightly strengthened concavity condition near the origin (Hypothesis 2.2), again using the capacity+chaining method, we obtain upper and lower bounds on the probability in question in terms of the Hausdorff measures and the Bessel-Riesz capacities of \(E\times F\), relative to appropriate metrics and orders. These results are established in the first subsection of Section 4. These bounds suggest that, under condition \(({\bf C_{0+}})\), the dimension \(d\) is a critical value for the dimension of \(E\times F\) in the intrinsic metric. In the second subsection of Section 4, we do in fact prove that under the slightly stronger condition \(({\bf C_{0}})\), we can improve our results quantitatively, by making mild regularity assumptions (Ahlfors-David regularity) on either the set \(E\) or the set \(F\). 
We show in this subsection that the aforementioned criticality follows, by proving that, for any process \(X\) satisfying a condition \(({\bf C}_{\ell})\), defined therein, which is an intermediate condition between the weaker condition \(({\bf C_{0+}})\) and the stronger condition \(({\bf C_{0}})\), whether or not a set can be reached by \(X\) with positive probability cannot be decided when the dimension of \(E\times F\) is critical. This condition is satisfied by all our examples of functions \(\gamma\) of interest with zero index satisfying \(({\bf C}_{0+})\). In particular, the case \(\gamma(x)=\exp\left(-\log^{q}(1/x)\right)\) with \(q\in(0,1)\) satisfies \(({\bf C}_{\ell})\). We provide references in Section 4 and we explain therein how our results improve on prior known criticality studies, where processes \(X\) were restricted to being Holder-continuous and sets \(E\) were restricted to being intervals. As a final application of our general result on hitting probabilities, in the last subsection of Section 4, we show first, under condition \((\mathbf{C_{0+}})\), that the so-called stochastic co-dimension of \(X(E)\) exists and is given by \(d-\dim_{\delta}(E)\) under a mild regularity condition on \(E\). On the other hand, when condition \((\mathbf{C_{0+}})\) fails to hold, the method may lead to some upper and lower bounds for the hitting probabilities which are not necessarily optimal. We use the logBm case to illustrate this lack of optimality. In this case, the hitting-probabilities estimates do not help to compute the stochastic co-dimension of \(X(E)\). However, since we proved in Section 3 that the Hausdorff dimension of \(X(E)\) is almost surely constant, denoting this constant by \(\zeta(E)\), then it is well within the realm of the possible, under some regularity condition on \(E\) (e.g. similar to the Ahlfors-David regularity), that the stochastic co-dimension of \(X(E)\) might be equal to \(d-\zeta(E)\). This is an open problem at this point, and we do not have a well-developed strategy to resolve it, leaving it as a conjecture. ## 2 Preliminaries This section collects and establishes general facts about Gaussian processes whose variance function \(\gamma^{2}\) is an increasing function starting from \(0\), particularly those whose canonical metric is commensurate with \(\gamma\), a property referred to below as Condition \((\mathbf{\Gamma})\) given by relations (2.1). The key technical estimate for upper bounds on Hausdorff measures of images and graphs is Lemma 2.5 below. It holds without any regularity assumptions on \(\gamma\). We provide mild technical conditions which imply various levels of regularity, including corresponding estimates of the integral \(f_{\gamma}\) featured in this lemma. Examples illustrating the various regularity behaviors are provided. Lemma 2.4 is a two-point local non-determinism property which will help us establish lower bounds on hitting probabilities. It assumes a mild concavity property near the origin, referred to below as Hypothesis 2.2. The second part of this section provides the definitions of Hausdorff measures and Riesz-Bessel capacities needed to understand and quantify the results in this paper. Since we work beyond Holder regularity scales, notions of capacities and Hausdorff measures with respect to power functions apply when modified to be relative to non-Holder metrics, using balls and distances relative to our processes' regularity scales, e.g. 
the processes' canonical metrics rather than powers of Euclidean distance; Hausdorff dimensions are thus relative to those metrics. General results expressing equivalent formulations of these Hausdorff dimensions are collected and justified in this section. Some of our results later in the paper will also relate to Euclidean-metric Hausdorff dimensions. ### Gaussian processes with general variance function and commensurate squared canonical metric In this entire paper we will work with \(\{X_{0}(t),t\in\mathbb{R}_{+}\}\) a real-valued mean-zero continuous Gaussian process defined on a complete probability space \((\Omega,\mathcal{F},\mathbb{P})\), with canonical metric \(\delta\) of \(X_{0}\) on \((\mathbb{R}_{+})^{2}\) defined by \[\delta(s,t):=\left(\mathbb{E}(X_{0}(s)-X_{0}(t))^{2}\right)^{1/2}.\] Let \(\gamma\) be continuous increasing function on \(\mathbb{R}_{+}\) (or possibly only on a neighborhood of \(0\) in \(\mathbb{R}_{+}\)), such that \(\lim_{0+}\gamma=0\). We assume the following throughout, which we refer to as Condition \((\mathbf{\Gamma})\): for some constant \(l\geq 1\) we have, for all \(s,t\in\mathbb{R}_{+}\), or possibly only all \(s,t\) in the neighborhood of \(0\) where \(\gamma\) is defined, \[(\mathbf{\Gamma}):\left\{\begin{aligned} &\mathbb{E}\,(X_{0}(t))^{2}=\gamma^{2}(t) \\ &\qquad\qquad\qquad\qquad\text{ and }\\ & 1/\sqrt{l}\,\gamma\,(|t-s|)\leq\delta(t,s)\leq\sqrt{l}\,\gamma(|t-s| ).\end{aligned}\right. \tag{2.1}\] Now, we consider the \(\mathbb{R}^{d}\)-valued process \(X=\{X(t):t\in\mathbb{R}_{+}\}\) defined by \[X(t)=(X_{1}(t),...,X_{d}(t)),\quad t\in\mathbb{R}_{+}, \tag{2.2}\] where \(X_{1},...,X_{d}\) are independent copies of \(X_{0}\). Let us consider the following hypotheses **Hypothesis 2.1**.: The increasing function \(\gamma\) is concave in a neighborhood of the origin, and for all \(0<a<\infty\), there exists \(\varepsilon>0\) such that \(\gamma^{\prime}(\varepsilon+)>\sqrt{l}\,\gamma^{\prime}(a-)\). **Hypothesis 2.2**.: For all \(0<a<b<\infty\), there exists \(\varepsilon>0\) and \(\mathfrak{c}_{0}\in(0,1/\sqrt{l})\), such that \[\gamma(t)-\gamma(s)\leq\mathfrak{c}_{0}\gamma(t-s)\quad\text{ for all }\,s,t\in[a,b]\,\text{ with }\,0<t-s\leq\varepsilon. \tag{2.3}\] The following lemma shows that Hypothesis 2.1 implies Hypothesis 2.2, and under the strong but typical condition \(\gamma^{\prime}(0+)=\infty\), the constant \(\mathfrak{c}_{0}\) in (2.3) can be chosen arbitrarily small. The proof is given in [21]. **Lemma 2.3**.: _Hypothesis 2.1 implies Hypothesis 2.2. Moreover if \(\gamma^{\prime}(0+)=+\infty\), then for all \(0<a<b<\infty\) and all \(\mathfrak{c}_{0}>0\), there exists \(\varepsilon>0\) such that_ \[\gamma(t)-\gamma(s)\leq\mathfrak{c}_{0}\,\gamma(t-s)\quad\text{ for all }\,t,s\in[a,b]\,\text{ with }\,0<t-s<\varepsilon.\] The following lemma is also proven in [21]. **Lemma 2.4**.: _Assume Hypothesis 2.2. Then for all \(0<a<b<\infty\), there exist constants \(\varepsilon>0\) and \(\mathfrak{c}_{1}>0\) depending only on \(a,b\), such that for all \(s,t\in[a,b]\) with \(|t-s|\leq\varepsilon\),_ \[Var\,(X_{0}(t)|X_{0}(s))\geq\mathfrak{c}_{1}\,\delta^{2}(s,t)\geq(\mathfrak{c }_{1}/l)\,\gamma^{2}(|t-s|). \tag{2.4}\] Condition (2.4) is called _two-point local non-determinism_. We denote by \(B_{\delta}(t,r)=\{s\in\mathbb{R}_{+}:\delta(s,t)\leq r\}\) the closed ball of center \(t\) and radius \(r\) in the metric \(\delta\). The following lemma is useful for the proof of the upper bounds for the Hausdorff dimension in Theorem 3.2. 
It is an improvement of both of proposition 3.1. and proposition 4.1. in [21]. The proof that we give here uses similar arguments to those of [4, Proposition 4.4.]. **Lemma 2.5**.: _Assume that \(\gamma\) satisfies the commensurability condition \((\mathbf{\Gamma})\), i.e. relations (2.1). Let \(0<a<b<\infty\), and \(I:=[a,b]\). Then for all \(M>0\), there exist positive constants \(\mathfrak{c}_{2}\) and \(r_{0}\) such that for all \(r\in(0,r_{0})\), \(t\in I\) and \(z\in[-M,M]^{d}\) we have_ \[\mathbb{P}\left\{\inf_{s\in B_{\delta}(t,r)\cap I}\|X(s)-z\|\leq r\right\} \leqslant\mathfrak{c}_{2}(r+f_{\gamma}(r))^{d}, \tag{2.5}\] _where \(\|\cdot\|\) is the Euclidean metric, and \(f_{\gamma}\) is defined by_ \[f_{\gamma}(r):=\int_{0}^{1/2}\frac{\gamma\left(\gamma^{-1}(l^{1/2}\,r)\,y \right)}{y\sqrt{\log(1/y)}}dy.\] Proof.: We begin by observing that, for all \(M>0\) and \(z=(z_{1},\ldots,z_{d})\in[-M,M]^{d}\), we have \[\left\{\inf_{s\in B_{\delta}(t,r)\cap I}\|X(s)-z\|\leqslant r\right\}\subseteq \bigcap_{i=1}^{d}\left\{\inf_{s\in B_{\delta}(t,r)\cap I}|X_{i}(s)-z_{i}| \leqslant r\right\}.\] Then since the coordinate processes of \(X\) are independent copies of \(X_{0}\), it is sufficient to prove (2.5) for \(d=1\). Note that for any \(s,t\in I\), we have \[\mathbb{E}\left(X_{0}(s)\mid X_{0}(t)\right)=\frac{\mathbb{E}\left(X_{0}(s)X_{ 0}(t)\right)}{\mathbb{E}\left(X_{0}(t)^{2}\right)}X_{0}(t):=c(s,t)X_{0}(t). \tag{2.6}\] This implies that the Gaussian process \((R(s))_{s\in I}\) defined by \[R(s):=X_{0}(s)-c(s,t)X_{0}(t), \tag{2.7}\] is uncorrelated with and thus independent of \(X_{0}(t)\), since these two processes are jointly Gaussian. Let \[Z(t,r):=\sup_{s\in B_{\delta}(t,r)\cap I}|X_{0}(s)-c(s,t)X_{0}(t)|\,.\] Then \[\begin{split}\mathbb{P}&\left\{\inf_{s\in B_{ \delta}(t,r)\cap I}|X_{0}(s)-z_{0}|\leq r\right\}\\ &\leq\mathbb{P}\left\{\inf_{s\in B_{\rho}(t,r)\cap I}|c(s,t) \left(X_{0}(t)-z_{0}\right)|\leq r+Z(t,r)+\sup_{s\in B_{\delta}(t,r)\cap I}|(1 -c(s,t))z_{0}|\right\}\end{split} \tag{2.8}\] By the Cauchy-Schwarz inequality and relations (2.1), we have for all \(s,t\in I\), \[\begin{split}|1-c(s,t)|&=\frac{|\mathbb{E}\left[X_ {0}(t)\left(X_{0}(t)-X_{0}(s)\right)\right]|}{\mathbb{E}\left(X_{0}(t)^{2} \right)}\\ &\leq\frac{\left(\mathbb{E}(X_{0}(t))^{2}\right)^{1/2}\left( \mathbb{E}(X_{0}(t)-X_{0}(s))^{2}\right)^{1/2}}{\mathbb{E}\left(X_{0}(t)^{2} \right)}=\frac{\delta(s,t)}{\gamma(t)}\\ &\leq\,\mathsf{c}_{3}\,\delta(s,t),\end{split} \tag{2.9}\] where \(\mathsf{c}_{3}=(\gamma(a))^{-1}\). Let \(r_{0}:=1/2\mathsf{c}_{3}\), then (2.9) implies that for all \(0<r<r_{0}\) and \(s\in B_{\delta}(t,r)\cap I\), we have \(1/2\leq c(s,t)\leq 3/2\). Furthermore, for \(0<r\leq r_{0}\), \(s\in B_{\delta}(t,r)\), and \(z_{0}\in[-M,M]\), we have \[|(1-c(s,t))z_{0}|\leq\,\mathsf{c}_{3}\,M\,r.\] Combining this inequality with (2.8), we derive that \[\begin{split}\mathbb{P}\left\{\inf_{s\in B_{\delta}(t,r)\cap I}| X_{0}(s)-z|\leq r\right\}&\leq\mathbb{P}\left\{|X_{0}(t)-z|\leq 2 \left(\mathsf{c}_{3}\,M+1\right)r+2Z(t,r)\right\}\\ &\leq\mathsf{c}_{4}\left(r+\mathbb{E}\left[Z(t,r)\right]\right), \end{split} \tag{2.10}\] for all \(z_{0}\in[-M,M]\) and \(0<r<r_{0}\), where the constant \(\mathsf{c}_{4}\) depends on \(M\), \(a\), \(b\), \(l\) and \(\mathsf{c}_{3}\) only. The last inequality follow from the independence between \(X_{0}(t)\) and \(Z(t,r)\). Now we bound \(\mathbb{E}\left[Z(t,r)\right]\). 
Indeed, we have \[Z(t,r)\leq Z_{1}(t,r)+Z_{2}(t,r), \tag{2.11}\] where \[Z_{1}(t,r) :=|X_{0}(t)|\sup_{s\in B_{\delta}(t,r)\cap I}\lvert 1-c(s,t)\rvert\] \[Z_{2}(t,r) :=\sup_{s\in B_{\delta}(t,r)\cap I}\lvert X_{0}(s)-X_{0}(t)\rvert.\] Using (2.9) and Cauchy-Schwartz inequality we get that \[\mathbb{E}\left[Z_{1}(t,r)\right]\leq\mathsf{c}_{5}\,r, \tag{2.12}\] where \(\mathsf{c}_{5}:=\gamma(b)/\gamma(a)\). Recall that relations (2.1) ensure that \(B_{\delta}(t,r)\subseteq\{s\in\mathbb{R}_{+}\,:\,\lvert t-s\rvert\leq\gamma^{- 1}(l^{1/2}\,r)\}\). Therefore \[Z_{2}(t,r)\leq\sup_{\begin{subarray}{c}\lvert t-s\rvert\leq\gamma^{-1}(l^{1/ 2}\,r)\\ s\in I\end{subarray}}\lvert X_{0}(t)-X_{0}(s)\rvert.\] Now, using the fact that \(\delta(s,t)\leq\sqrt{l}\gamma(\lvert t-s\rvert)\) then [16, Lemma 7.2.2] ensures that \[\begin{split}\mathbb{E}\left[Z_{2}(t,r)\right]& \leq\,\mathbb{E}\left[\sup_{\begin{subarray}{c}\lvert t-s\rvert \leq\gamma^{-1}(l^{1/2}\,r)\\ s\in I\end{subarray}}\lvert X_{0}(t)-X_{0}(s)\rvert\right]\\ &\leq\,\mathsf{c}_{6}\,\left(\gamma\left(\gamma^{-1}(l^{1/2}\,r) \right)+\int_{0}^{1/2}\frac{\gamma\left(\gamma^{-1}(l^{1/2}r)\,y\right)}{y\, \log^{1/2}(1/y)}dy\right)\\ &\leq\,\mathsf{c}_{7}\,\left(r+f_{\gamma}(r)\right),\end{split} \tag{2.13}\] where \(\mathsf{c}_{6}\) is a universal constant which depends on \(l\) only, and \(\mathsf{c}_{7}=\sqrt{l}\,\mathsf{c}_{6}\). Combining (2.10),...,(2.13) the desired upper bound (2.5) follows immediately. Lemma 2.5 is quantitatively efficient when \(r\) and \(f_{\gamma}(r)\) are of the same order as \(r\to 0\). The following condition \((\mathbf{C_{0}})\) describes this situation: \((\mathbf{C_{0}})\): There exist two constants \(\mathsf{c}_{8}>0\) and \(x_{0}\in(0,1)\) such that \[\int_{0}^{1/2}\gamma(xy)\frac{dy}{y\sqrt{\log(1/y)}}\leq\mathsf{c}_{8}\,\gamma (x)\quad\text{ for all }x\in[0,x_{0}]. \tag{2.14}\] **Corollary 2.6**.: _If \(\gamma\) satisfies the condition \((\mathbf{C_{0}})\), then for all \(M>0\), there exists some constant \(\mathsf{c}_{9}\) depending on \(\gamma\), \(I\), \(r_{0}\), \(x_{0}\) and \(M\), such that for all \(z\in[-M,M]^{d}\) and for all \(r\in(0,r_{0}\wedge\gamma(x_{0}))\) we have_ \[\mathbb{P}\left\{\inf_{s\in B_{\delta}(t,r)\cap I}\|X(s)-z\|\leqslant r\right\} \leqslant\mathsf{c}_{9}\,r^{d}. \tag{2.15}\] It is immediate that all power functions satisfy (2.14). Moreover, we will see in the sequel that (2.14) is satisfied by all regularly varying functions of index \(\alpha\in(0,1]\). We include some facts here about indexes for the reader's reference. Let \(\gamma:(0,1]\to\mathbb{R}_{+}\) be a continuous function which is increasing near zero and \(\lim_{x\downarrow 0}\gamma(x)=0\). Then its lower and upper indexes \(\operatorname{ind}_{*}(\gamma)\) and \(\operatorname{ind}^{*}(\gamma)\) are defined respectively as \[\operatorname{ind}_{*}\left(\gamma\right) : =\sup\{\alpha:\gamma(x)=o\left(x^{\alpha}\right)\} \tag{2.16}\] \[=\left(\inf\{\beta:\gamma(x)=o\left(x^{1/\beta}\right)\}\right)^{ -1}.\] and \[\operatorname{ind}^{*}\left(\gamma\right) :=\inf\left\{\alpha\geq 0:\,x^{\alpha}=o\left(\gamma(x)\right)\right\} \tag{2.17}\] \[=\sup\left\{\alpha\geq 0:\,\liminf_{x\downarrow 0}\left(\frac{ \gamma(x)}{x^{\alpha}}\right)=0\right\}.\] It is well known that \(\operatorname{ind}_{*}(\gamma)\leq\operatorname{ind}^{*}(\gamma)\). 
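For instance, one checks directly from (2.16) and (2.17) that the logBm scale has both indexes equal to zero: for \(\gamma(r)=\log^{-\beta}(1/r)\) with \(\beta>1/2\), every \(\alpha>0\) satisfies \(r^{\alpha}=o\left(\gamma(r)\right)\) while \(\gamma(r)\) is not \(o(r^{\alpha})\), so that \[\operatorname{ind}_{*}\left(\gamma\right)=\operatorname{ind}^{*}\left(\gamma\right)=0;\] by contrast, for \(\gamma(r)=r^{H}\) with \(H\in(0,1)\) both indexes equal \(H\).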
Moreover we have the following statement **Lemma 2.7**.: _If \(\gamma\) is differentiable near \(0\), then_ \[\operatorname{ind}_{*}\left(\gamma\right)\geq\liminf_{r\downarrow 0}\left( \frac{r\,\gamma^{\prime}(r)}{\gamma(r)}\right)\quad\text{ and }\quad\operatorname{ind}^{*}\left(\gamma\right)\leq\limsup_{r \downarrow 0}\left(\frac{r\,\gamma^{\prime}(r)}{\gamma(r)}\right). \tag{2.18}\] Proof.: We start with the left hand term of (2.18). We assume that \(\liminf_{r\downarrow 0}\left(r\,\gamma^{\prime}(r)/\gamma(r)\right)>0\) otherwise there is nothing to prove. Let us fix \(0<\alpha^{\prime}<\alpha<\liminf_{r\downarrow 0}\left(r\,\gamma^{\prime}(r)/ \gamma(r)\right)\), then there is \(r_{0}>0\) such that \(\alpha/r\leq\gamma^{\prime}(r)/\gamma(r)\) for any \(r\in(0,r_{0}]\). Next, for \(r_{1}<r_{2}\in(0,r_{0}]\) we integrate over \([r_{1},r_{2}]\) both of elements of the last inequality, we obtain that \(\log\left(r_{2}/r_{1}\right)^{\alpha}\leq\log\left(\gamma(r_{2})/\gamma(r_{1})\right)\), this implies immediately that \(r\mapsto\gamma(r)/r^{\alpha}\) is nondecreasing on \((0,r_{0}]\), and thence \(\lim_{r\downarrow 0}\gamma(r)/r^{\alpha}\) exists and finite. Since \(\alpha^{\prime}<\alpha\), we get \(\lim_{r\downarrow 0}\gamma(r)/r^{\alpha^{\prime}}=0\) and then \(\alpha^{\prime}\leq\operatorname{ind}(\gamma)\). Since \(\alpha^{\prime}\) and \(\alpha\) are arbitrarily chosen, the desired inequality holds by letting \(\alpha^{\prime}\uparrow\alpha\) and \(\alpha\uparrow\liminf_{r\downarrow 0}\left(r\,\gamma^{\prime}(r)/\gamma(r)\right)\). For the upper inequality in (2.18), we assume that \(\limsup_{r\downarrow 0}\left(r\,\gamma^{\prime}(r)/\gamma(r)\right)<\infty\) otherwise there is nothing to prove. We fix \(\alpha^{\prime}>\alpha>\limsup_{r\downarrow 0}\left(r\,\gamma^{\prime}(r)/ \gamma(r)\right)\). By a similar argument as above there exists \(r_{1}>0\) such that \(r\mapsto\gamma(r)/r^{\alpha}\) is nonincreasing on \((0,r_{1}]\), and then \(\lim_{r\downarrow 0}\gamma(r)/r^{\alpha}\) exists and positive. Therefore \(\lim_{r\downarrow 0}\gamma(r)/r^{\alpha^{\prime}}=\infty\) and thence \(\operatorname{ind}^{*}(\gamma)\leq\alpha^{\prime}\). Hence, by letting \(\alpha^{\prime}\downarrow\alpha\) and \(\alpha\downarrow\limsup_{r\downarrow 0}\left(r\,\gamma^{\prime}(r)/\gamma(r)\right)\), we obtain the desired inequality. **Remark 2.8**.: Notice that if in addition \(\gamma\) is concave then \(\limsup_{r\downarrow 0}\left(\frac{r\,\gamma^{\prime}(r)}{\gamma(r)}\right)\leq 1\). Recall that \(\gamma\) is said to be a _regularly varying function near \(0\)_ with index \(\alpha\in(0,1]\) if it can be represented as \[\gamma(x)=x^{\alpha}\,L(x),\] for all \(x\in(0,x_{0})\) for some \(x_{0}>0\), where \(L:(0,x_{0})\to[0,\infty)\) is a slowly varying function at \(0\) in the sense of Karamata, see for example [3]. Moreover such a slowly varying function can be represented as \[L(x)=\exp\left(\eta(x)+\int_{x}^{x_{0}}\frac{\varepsilon(t)}{t}dt\right), \tag{2.19}\] where \(\eta,\varepsilon:[0,x_{0})\to\mathbb{R}\), are Borel measurable and bounded functions, such that \[\lim_{x\to 0}\eta(x)=\eta_{0}\in(0,\infty)\quad\text{ and }\quad\lim_{x\to 0} \varepsilon(x)=0.\] For more details one can see Theorem 1.3.1 in [3]. 
It is known from Theorem 1.3.3 and Proposition 1.3.4 in [3] and the ensuing discussion that there exists \(\widetilde{L}:(0,x_{0}]\to\mathbb{R}_{+}\) which is \(\mathcal{C}^{\infty}\) near zero such that \(L(x)\thicksim\widetilde{L}(x)\) as \(x\to 0\), and \(\widetilde{L}(\cdot)\) has the following form \[\widetilde{L}(x)=\mathsf{c}_{10}\,\exp\left(\int_{x}^{x_{0}}\frac{\widetilde{ \varepsilon}(t)}{t}dt\right), \tag{2.20}\] for some positive constant \(\mathsf{c}_{10}\). Such function is called normalized slowly varying function (Kohlbecker [14]), and in this case \[\widetilde{\varepsilon}(x)=-x\,\widetilde{L}^{\prime}(x)/\widetilde{L}(x) \quad\text{for all}\,\,\,x\in(0,x_{0}). \tag{2.21}\] For more properties of regularly varying functions see Seneta [25] or Bingham et al. [3]. **Remark 2.9**.: It is remarkable that Lemma 2.7 implies that when the limit \(\alpha:=\lim_{r\downarrow 0}\left(\frac{r\gamma^{\prime}(r)}{\gamma(r)}\right)\) exists, then \(\operatorname{ind}_{*}(\gamma)=\operatorname{ind}^{*}(\gamma)=\alpha\). Moreover one then readily checks that if \(\alpha>0\), then \(\gamma(\cdot)\) is regularly varying with index \(\alpha\), and in this case, \(\gamma(\cdot)\) can be represented as \(\gamma(x)=x^{\alpha}\,L(x)\) for all \(x\in(0,x_{0}]\) for some \(x_{0}\in(0,1)\), where \(L(x)=\mathsf{c}_{10}\,\exp\left(\int_{x}^{x_{0}}\frac{\varepsilon(t)}{t}dt\right)\), and \(\varepsilon(x)=-\frac{x\,L^{\prime}(x)}{L(x)}=\alpha-\frac{x\gamma^{\prime}(x) }{\gamma(x)}\). The following result ensures that all regularly varying functions with indexes in \((0,1)\) satisfy (2.14). **Proposition 2.10**.: _Let \(\gamma\) be a regularly varying function near 0, with index \(\alpha\in(0,1]\). Then \(\gamma\) satisfies (2.14)._ Proof.: Since \(\gamma\) is a regularly varying function we represent it as \(\gamma(x)=x^{\alpha}\,L(x)\) for all \(x\in(0,x_{0})\) as discussed above. By a result of Adamovic [3, Proposition 1.3.4], since we are interested only in the asymptotic behavior of \(\gamma\) near 0, we may assume without loss of generality that the slowly varying part \(L(\cdot)\) is \(\mathcal{C}^{\infty}\) and has the representation (2.20). Now let \[I(x):=\frac{1}{\gamma(x)}\int_{0}^{1/2}\gamma(xy)\frac{dy}{y\sqrt{\log(1/y)}}.\] Then we only need to show that \(I(x)\) is bounded as \(x\) approaches 0. We first have \[\begin{split} I(x)=\frac{x^{\alpha}}{\gamma(x)}\,\int_{0}^{1/2}L (xy)\frac{dy}{y^{1-\alpha}\sqrt{\log(1/y)}}&\leq\frac{\log^{-1/2 }(2)\,x^{\alpha}}{\gamma(x)}\,\int_{0}^{1/2}L(xy)\frac{dy}{y^{1-\alpha}}\\ &\leq\frac{\log^{-1/2}(2)}{\gamma(x)}\,\int_{0}^{x}L(z)\frac{dz}{ z^{1-\alpha}}.\end{split} \tag{2.22}\] It is easy to check that \(\gamma^{\prime}(x)=x^{\alpha-1}\,L(x)\,(\alpha-\varepsilon(x))\). Thus we may apply l'Hopital's rule to get that \[\begin{split}\limsup_{x\downarrow 0}\frac{1}{\gamma(x)}\int_{0}^{1/2} \gamma(xy)\frac{dy}{y\sqrt{\log(1/y)}}&\leq\lim_{x\downarrow 0}\frac{\log^{-1/2 }(2)}{\gamma(x)}\int_{0}^{x}L(z)z^{\alpha-1}dz\\ &=\lim_{x\downarrow 0}\frac{\log^{-1/2}(2)\,x^{\alpha-1}\,L(x)}{x^{ \alpha-1}\,L(x)\,(\alpha-\varepsilon(x))}=\log^{-1/2}(2)/\alpha<\infty,\end{split}\] since \(\alpha>0\). This finishes the proof. Here are some examples of regularly varying functions which immediately satisfy Condition \((\mathbf{C_{0}})\). **Example 2.1**.: 1. \(\gamma_{\alpha,\beta}(r):=r^{\alpha}\log^{\beta}(1/r)\) _for_ \(\beta\in\mathbb{R}\) _and_ \(\alpha\in(0,1)\)_,_ 2. 
\(\gamma_{\alpha,\beta}(x):=x^{\alpha}\,\exp\left(\log^{q}(1/x)\right)\) _for_ \(q\in(0,1)\) _and_ \(\alpha\in(0,1)\)_,_ 3. \(\gamma_{\alpha}(x):=x^{\alpha}\,\exp\left(\frac{\log(1/x)}{\log(\log(1/x))}\right)\) _for_ \(\alpha\in(0,1)\)_._ On the other hand, one of our goals in this paper is to study path properties for continuous Gaussian processes, satisfying Condition \((\mathbf{\Gamma})\), i.e. relations (2.1), within or beyond the Holder scale. If \(\operatorname{ind}_{*}(\gamma)>0\), it is not difficult to check that all trajectories of \(X\) are \(\beta\)-Holder continuous for any \(\beta\in(0,\operatorname{ind}_{*}(\gamma))\). When \(\operatorname{ind}_{*}(\gamma)=\operatorname{ind}^{*}(\gamma)=0\), the trajectories of \(X\) are never Holder continuous. Since all continuous Gaussian processes must live at least in the logarithmic scale, i.e we should have \(\gamma(x)=o\left(\log^{-\beta}(1/r)\right)\) for some \(\beta\geq 1/2\). Thinking of this logarithmic scale as the most irregular one, there are several other regularity scales which interpolate between Holder-continuity scale and the aforementioned logarithmic scale. This compels us to ask the following question: Is there a continuous and increasing function \(\gamma\) with \(\operatorname{ind}_{*}(\gamma)=\operatorname{ind}^{*}(\gamma)=0\) which satisfies (2.14)? Noting that most examples of interest of function \(\gamma\) with \(\operatorname{ind}_{*}(\gamma)=\operatorname{ind}^{*}(\gamma)=0\) are slowly varying in the sense of Karamata, for any such function \(\gamma\), [3, Proposition 1.3.4] ensures that \(\gamma\) is commensurate with a \(\mathcal{C}^{\infty}\) function \(\gamma_{0}\) which satisfies \(\lim_{x\downarrow 0}\frac{x\,\gamma_{0}^{\prime}(x)}{\gamma_{0}(x)}=0\). Then the following proposition addresses the aforementioned compelling question, essentially providing a negative answer. **Proposition 2.11**.: _Let \(\gamma:[0,1]\to\mathbb{R}_{+}\) be a differentiable increasing function and assume that \(\lim_{x\downarrow 0}x\,\gamma^{\prime}(x)/\gamma(x)=0\). Then_ \[\lim_{x\downarrow 0}\left(\frac{1}{\gamma(x)}\int_{0}^{1/2}\gamma(x\,y)\frac{ dy}{y\sqrt{\log(1/y)}}\right)=\infty. \tag{2.23}\] Proof.: From Lemma 2.7, since \(\lim_{x\downarrow 0}\frac{x\,\gamma^{\prime}(x)}{\gamma(x)}=0\) implies that \(\operatorname{ind}_{*}(\gamma)=\operatorname{ind}^{*}(\gamma)=0\), and \(\gamma(\cdot)\) is normalized regularly varying at zero, hence it can be represented as \(\gamma(x)=\mathsf{c}_{8}\,\exp\left(\int_{x}^{x_{0}}\varepsilon(t)/tdt\right)\) where \(\varepsilon(x):=-\frac{x\gamma^{\prime}(x)}{\gamma(x)}\) for some fixed \(x_{0}\in(0,1)\). Then using Fatou's Lemma we obtain \[\liminf_{x\downarrow 0}\left(\frac{1}{\gamma(x)}\int_{0}^{1/2} \gamma(x\,y)\frac{dy}{y\sqrt{\log(1/y)}}\right) \geq\int_{0}^{1/2}\lim_{x\downarrow 0}\left(\frac{\gamma(x\,y)}{ \gamma(x)}\right)\frac{dy}{y\sqrt{\log(1/y)}} \tag{2.24}\] \[=\int_{0}^{1/2}\exp\left(\lim_{x\downarrow 0}\int_{xy}^{x} \varepsilon(t)/tdt\right)\frac{dy}{y\sqrt{\log(1/y)}}\] \[=\int_{0}^{1/2}\frac{dy}{y\sqrt{\log(1/y)}}=\infty,\] where, from the second to the third line, we used the facts that, for any fixed \(y\in(0,1/2)\), we have \[\left|\int_{xy}^{x}\varepsilon(t)/tdt\right|\leq\log(1/y)\,\sup_{t\in(0,x)}| \varepsilon(t)|,\] for all \(x\in(0,x_{0})\), and that \(\lim_{x\downarrow 0}|\varepsilon(x)|=0\). This finishes the proof. The last result shows that condition **(C0)** fails for a wide array of functions \(\gamma\) with zero index. 
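The dichotomy between Proposition 2.10 and Proposition 2.11 is easy to observe numerically. The short script below is purely illustrative: it evaluates the ratio \(I(x)\) from the proof of Proposition 2.10 after the substitution \(u=\log(1/y)\), for the Hölder-scale choice \(\gamma(r)=r^{1/2}\) and the logBm-scale choice \(\gamma(r)=\log^{-1}(1/r)\). The first ratio stays bounded as \(x\downarrow 0\), while the second grows, roughly like \(\sqrt{\log(1/x)}\).

```python
import numpy as np
from scipy.integrate import quad

LOG2 = np.log(2.0)

def ratio(x, gamma):
    """I(x) = (1/gamma(x)) * int_0^{1/2} gamma(x*y) / (y*sqrt(log(1/y))) dy,
    rewritten via u = log(1/y) to avoid the singularity at y = 0."""
    val, _ = quad(lambda u: gamma(x * np.exp(-u)) / np.sqrt(u), LOG2, np.inf, limit=200)
    return val / gamma(x)

def holder(r):          # Hoelder scale, alpha = 1/2: condition (C0) holds
    return np.sqrt(r)

def logbm(r):           # logBm scale, beta = 1: zero index, ratio diverges (Prop. 2.11)
    return 0.0 if r <= 0.0 else 1.0 / np.log(1.0 / r)

for x in [1e-2, 1e-4, 1e-8, 1e-16]:
    print(f"x = {x:.0e}   Hoelder ratio = {ratio(x, holder):5.2f}   logBm ratio = {ratio(x, logbm):6.2f}")
```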
Thus condition **(C0)** will not help to provide information on the upper bounds of the Hausdorff dimension of the image and graph, or on the hitting probabilities, for Gaussian processes whose modulus of continuity is slowly varying. We must therefore devise a weaker condition than **(C0)**, satisfied by a larger class of \(\gamma\)'s, including slowly varying functions. First of all, for \(\varepsilon\in(0,1)\) we propose the following condition. (**C\({}_{\varepsilon}\)**): There exist two constants \({\sf c}_{\varepsilon}>0\) and \(x_{\varepsilon}>0\) such that \[\int_{0}^{1/2}\gamma(xy)\frac{dy}{y\sqrt{\log(1/y)}}\leq{\sf c}_{\varepsilon}\,\left(\gamma(x)\right)^{1-\varepsilon}\quad\text{ for all }0<x<x_{\varepsilon}. \tag{2.25}\] The following condition, denoted by \(({\bf C_{0+}})\), is weaker than \(({\bf C_{0}})\) and will be helpful in giving some optimal upper bounds for the Hausdorff dimension of the image and graph of \(X\) and for the hitting probabilities. (**C0+**): For all \(\varepsilon>0\) there exist two constants \({\sf c}_{\varepsilon}>0\) and \(x_{\varepsilon}>0\), such that (2.25) is satisfied. The following example shows that the weaker condition \(({\bf C_{0+}})\) is satisfied by a large class of functions \(\gamma\) with \({\rm ind}_{*}(\gamma)={\rm ind}^{*}(\gamma)=0\). **Example 2.2**.: _Let \(q\in(0,1)\) and let \(\gamma_{q}\) be the function defined by \(\gamma_{q}(x):=\exp\left(-\log^{q}(1/x)\right)\) for \(x\in[0,1]\). Then \(\gamma_{q}\) satisfies \(({\bf C_{0+}})\)._ **Remark 2.12**.: Let us prove the claim in Example 2.2. We have \[\begin{split}\int_{0}^{1/2}\gamma_{q}(xy)\frac{dy}{y\sqrt{\log(1/y)}}&=\int_{0}^{1/2}\exp\left(-\left(\log(1/x)+\log(1/y)\right)^{q}\right)\frac{dy}{y\sqrt{\log(1/y)}}\\ &=\int_{\log 2}^{\infty}\exp\left(-\left(\log(1/x)+z\right)^{q}\right)\frac{dz}{\sqrt{z}},\end{split} \tag{2.26}\] where we used the change of variable \(z=\log(1/y)\). Using the fact that, for all \({\sf c}\in(0,1)\) there is some \(N:=N({\sf c})>0\) large enough, so that \[(1+u)^{q}\geq 1+{\sf c}\,u^{q}\quad\text{for all }u\geq N, \tag{2.27}\] we may fix \({\sf c}\in(0,1)\), and its corresponding \(N({\sf c})\). Then we break the integral in (2.26) into the intervals \([\log(2),N\log(1/x))\) and \([N\log(1/x),+\infty)\) and denote them by \({\cal I}_{1}\) and \({\cal I}_{2}\), respectively. We write \[(\log(1/x)+z)^{q}=\log^{q}(1/x)\times\left(1+z/\log(1/x)\right)^{q},\] and we note that the second term is bounded from below by \(1+{\sf c}\left(\frac{z}{\log(1/x)}\right)^{q}\) when \(z\geq N\log(1/x)\) due to (2.27), and bounded from below by \(1\) when \(z<N\log(1/x)\). Therefore \[{\cal I}_{1}\leq\exp\left(-\log^{q}(1/x)\right)\,\int_{0}^{N\log(1/x)}\frac{dz}{\sqrt{z}}=2\,\gamma_{q}(x)\,\sqrt{N\,\log(1/x)}. \tag{2.28}\] On the other hand \[{\cal I}_{2}\leq\exp\left(-\log^{q}(1/x)\right)\,\int_{0}^{\infty}{\rm e}^{-{\sf c}\,z^{q}}\,\frac{dz}{\sqrt{z}}={\sf c}(q)\,\gamma_{q}(x). \tag{2.29}\] Combining (2.28), (2.29) and the fact that \(\sqrt{\log(1/x)}=o\left(\gamma_{q}^{-\varepsilon}(x)\right)\) for all \(\varepsilon>0\), the proof of the claim in Example 2.2 is complete. As announced in the introduction, we spend some effort in this paper to study the Hausdorff dimensions of image sets and graphs, and associated hitting probabilities, for extremely irregular continuous Gaussian processes, those which satisfy Condition \((\mathbf{C}_{\varepsilon})\) for some \(\varepsilon\in(0,1)\). We use the logBm processes as a main source of examples.
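Note in passing (an elementary verification which is not needed in the sequel) that the function \(\gamma_{q}\) of Example 2.2 indeed has zero indices: a direct computation gives \[\frac{x\,\gamma_{q}^{\prime}(x)}{\gamma_{q}(x)}=q\,\log^{q-1}(1/x)\longrightarrow 0\quad\text{ as }x\downarrow 0,\] so that, in view of Lemma 2.7, \(\operatorname{ind}_{*}(\gamma_{q})=\operatorname{ind}^{*}(\gamma_{q})=0\), while \((\mathbf{C_{0+}})\) still holds.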
Proving that logBm is non-Holder-continuous can be done "by hand" by employing a classical technique to establish a liminf on the gauge function in the Holder-modulus of continuity, as is done for Brownian motion. It can also be established by invoking Fernique's zero-one law regarding gauge functions of Gaussian processes, which states that any gauge function of the path of such a process must be a sub-Gaussian variable, and must thus have a finite expected value. This property can then be combined with the known optimality of Dudley's so-called entropy integral as an upper and lower bound for Gaussian processes with stationary increments, up to multiplicative constants. This proof strategy must be adapted to deal with the issue that the increments of \(B^{\gamma}\) are only roughly stationary in the sense of commensurability (as defined in relations (2.1)). The same proof structure also works to show that the process \(B^{\gamma_{q}}\) defined using \(\gamma_{q}\) in Example 2.2 is not Holder-continuous, and similarly to prove that an a.s. modulus of continuity for \(B^{\gamma_{q}}\) is not an a.s. modulus of continuity for any logBm. The details of these proofs are not within the scope of this paper, and are left to the interested reader, who will find [1, 16, 21] and the results in the current section instructive. In justifying Example 2.2, we proved that the standard deviation function \(\gamma\) of \(B^{\gamma_{q}}\) satisfies \((\mathbf{C_{0+}})\); the reader will easily check that the standard deviation function \(\gamma\) of logBm satisfies \((\mathbf{C_{1/2\beta}})\) but fails to satisfy \((\mathbf{C}_{\varepsilon})\) for all \(\varepsilon\in(0,1/2\beta)\).

### Hausdorff measure, Hausdorff dimension and Riesz-Bessel capacity on \(\mathbb{R}_{+}\) and \(\mathbb{R}_{+}\times\mathbb{R}^{d}\) equipped with general metrics

To give a formula for the Hausdorff dimension of the image \(X(E)\) and the graph \(Gr_{E}(X)\) under some general conditions on \(\gamma\), we must first provide appropriate notions of Hausdorff measure and Hausdorff dimension associated with a general metric \(\delta\), since these will apply in particular when \(\delta\) is the canonical metric of \(X\). Let \(\delta:[0,1]\times[0,1]\to\mathbb{R}_{+}\) be a metric on \([0,1]\). For \(\beta>0\) and \(E\subset[0,1]\), the \(\beta\)-dimensional Hausdorff measure of \(E\) in the metric \(\delta\) is defined by \[\mathcal{H}_{\delta}^{\beta}(E):=\lim_{\eta\to 0}\inf\left\{\sum_{n=1}^{\infty}\left(2r_{n}\right)^{\beta}:E\subseteq\bigcup_{n=1}^{\infty}B_{\delta}\left(r_{n}\right),r_{n}\leqslant\eta\right\}. \tag{2.30}\] The associated Hausdorff dimension is defined as \[\dim_{\delta}(E):=\sup\left\{\beta>0:\mathcal{H}_{\delta}^{\beta}(E)>0\right\}. \tag{2.31}\] The Bessel-Riesz capacity of order \(\beta\) in the metric \(\delta\) is defined by \[\mathcal{C}_{\delta}^{\beta}(E):=\left[\inf_{\nu\in\mathcal{P}(E)}\mathcal{E}_{\delta,\beta}(\nu)\right]^{-1}, \tag{2.32}\] where \(\mathcal{E}_{\delta,\beta}(\nu)\) denotes the \(\beta\)-energy of a measure \(\nu\in\mathcal{P}(E)\) in the metric \(\delta\), defined as \[\mathcal{E}_{\delta,\beta}(\nu):=\int_{\mathbb{R}_{+}}\int_{\mathbb{R}_{+}}\frac{\nu(dt)\nu(ds)}{(\delta(t,s))^{\beta}}.\] If \(\delta\) is the Euclidean metric on \(\mathbb{R}^{n}\) for some \(n\) we denote the associated \(\beta\)-energy by \(\mathcal{E}_{\text{euc},\beta}(\cdot)\) and the corresponding Bessel-Riesz capacity by \(\mathcal{C}_{\text{euc}}^{\beta}(\cdot)\).
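As a quick illustration of how strongly these notions depend on the chosen metric (a simple computation recorded here only for orientation), consider the metric \(\delta_{\log}(t,s):=\log^{-1}(1/|t-s|)\), which will reappear in the logBm discussion below. A \(\delta_{\log}\)-ball of radius \(r\) is a Euclidean interval of length \(2e^{-1/r}\), so any covering of \([0,1]\) by balls of \(\delta_{\log}\)-radii \(r_{n}\leq\eta\) must satisfy \(\sum_{n}2e^{-1/r_{n}}\geq 1\); since \(r\mapsto(2r)^{\beta}e^{1/r}\) is decreasing on \((0,1/\beta)\), for every \(\beta>0\) and every \(\eta<1/\beta\) we get \[\sum_{n=1}^{\infty}(2r_{n})^{\beta}=\sum_{n=1}^{\infty}(2r_{n})^{\beta}e^{1/r_{n}}\,e^{-1/r_{n}}\geq\frac{1}{2}\,(2\eta)^{\beta}e^{1/\eta}\longrightarrow\infty\quad\text{ as }\eta\to 0.\] Hence \(\mathcal{H}_{\delta_{\log}}^{\beta}([0,1])=\infty\) for every \(\beta>0\), i.e. \(\dim_{\delta_{\log}}([0,1])=\infty\): only rather small Borel sets have finite Hausdorff dimension in such a metric.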
There exists an alternative expression for the Hausdorff dimension given through the Bessel-Riesz capacities by \[\dim_{\delta}(E)=\sup\left\{\beta>0:\mathcal{C}_{\delta}^{\beta}(E)>0\right\}. \tag{2.33}\] It is useful to understand from whence formula (2.33) comes. The fact that the right hand of (2.33) is a lower bound for \(\dim_{\delta}(E)\) is due to the so-called energy method (see for example Theorem 4.27 in [18]). That it is an upper bound comes from an application of Frostman's Lemma in the metric space \(([0,1],\delta)\), as we now explain. Since capacities are non-negative, if \(\dim_{\delta}(E)=0\), then the upper bound in (2.33) holds. We thus assume that \(\dim_{\delta}(E)>0\). It was proven in [11] that, if \(E\) is any subset of some general metric space \((Z,\delta)\) then we have \[\dim_{\delta}(E)=\sup\left\{\beta\,:\,\exists r_{0}>0,\mathsf{c}_{0}>0,\text{ and }\nu\in\mathcal{P}(E):\nu\left(B_{\delta}(z,r)\right)\leq\mathsf{c}_{0}\,r^{ \beta}\text{ for all }r<r_{0}\text{ and }z\in Z\right\}. \tag{2.34}\] See for example Proposition 5 and Note 12 in [11] for a good understanding of this last formulation, which we now use to prove the remaining inequality in (2.33). Let \(\alpha\in(0,\dim_{\delta}(E))\), and fix some \(\beta\in(\alpha,\dim_{\delta}(E))\). Equality (2.34) implies that there exists \(\nu\in\mathcal{P}(E)\), \(0<r_{0}<1\), and \(0<\mathsf{c}_{0}<\infty\) such that \[\nu\left(B_{\delta}(z,r)\right)\leq\mathsf{c}_{0}\,r^{\beta}\quad\text{ for all }r<r_{0}\text{ and }z\in Z. \tag{2.35}\] For a fixed \(t\in E\), since (2.35) ensures that \(\nu\) has no atom, we derive the following decomposition: \[\begin{split}\int_{E}\frac{\nu(ds)}{\delta(t,s)^{\alpha}}=\sum_{ k=1}^{\infty}\int_{\delta(t,s)\in(2^{-k},2^{-k+1})}\frac{\nu(ds)}{\delta(t,s)^{ \alpha}}&\leq\sum_{k=1}^{\infty}2^{k\alpha}\nu\left(B_{\delta}(t,2^{-k+1})\right)\\ &\leq\mathsf{c}_{1}\,\sum_{k=1}^{\infty}2^{-k(\beta-\alpha)},\end{split} \tag{2.36}\] with \(\mathsf{c}_{1}=2^{\beta}\,\mathsf{c}_{0}\). The last sum is finite since \(\alpha<\beta\), and does not depend on \(t\in E\). Using the fact that \(\nu\) is a probability measure, we deduce that \(\mathcal{E}_{\delta,\alpha}(\nu)<+\infty\). which finishes the proof of the upper bound part in (2.33). We will also need Hausdorff-dimension notions to quantify the size of the graphs of our processes as subsets of \(\mathbb{R}_{+}\times\mathbb{R}^{d}\). Let \(\rho_{\delta}\) be the metric defined on \(\mathbb{R}_{+}\times\mathbb{R}^{d}\) via \[\rho_{\delta}\left((s,x),(t,y)\right):=\max\{\delta(t,s),\|x-y\|\},\quad\text { for all }(s,x),(t,y)\in\mathbb{R}_{+}\times\mathbb{R}^{d}. \tag{2.37}\] For \(\beta>0\) and \(G\subseteq\mathbb{R}_{+}\times\mathbb{R}^{d}\) be a Borel set, the \(\beta\)-dimensional Hausdorff measure of \(G\) in the metric \(\rho_{\delta}\) is defined by \[\mathcal{H}_{\rho_{\delta}}^{\beta}(G)=\lim_{\eta\to 0}\inf\left\{\sum_{n=1}^{ \infty}\left(2r_{n}\right)^{\beta}:G\subseteq\bigcup_{n=1}^{\infty}B_{\rho_{ \delta}}\left(r_{n}\right),r_{n}\leqslant\eta\right\}. 
\tag{2.38}\] Let us also recall the so-called \(\beta\)-Hausdorff content in the metric \(\rho_{\delta}\), which is defined as follows \[\mathcal{H}_{\rho_{\delta},\infty}^{\beta}\left(G\right)=\inf\left\{\sum_{i=1}^{\infty}\left|G_{i}\right|_{\rho_{\delta}}^{\beta}:G\subset\bigcup_{i=1}^{\infty}G_{i}\right\}, \tag{2.39}\] where the infimum is taken over all possible coverings of \(G\), not merely ball coverings, and where \(|\cdot|_{\rho_{\delta}}\) denotes the diameter in the metric \(\rho_{\delta}\). The corresponding Hausdorff dimension of \(G\) is defined and characterized by \[\dim_{\rho_{\delta}}(G):=\inf\{\beta\geq 0:\mathcal{H}_{\rho_{\delta}}^{\beta}(G)=0\}=\inf\{\beta\geq 0:\mathcal{H}_{\rho_{\delta},\infty}^{\beta}(G)=0\}. \tag{2.40}\] For the proof of the second equality above one can see Proposition 4.9 in [18]. The Bessel-Riesz capacity of order \(\alpha\) of \(G\), in the metric \(\rho_{\delta}\), is defined by \[\mathcal{C}_{\rho_{\delta}}^{\alpha}(G)=\left[\inf_{\mu\in\mathcal{P}(G)}\int_{\mathbb{R}_{+}\times\mathbb{R}^{d}}\int_{\mathbb{R}_{+}\times\mathbb{R}^{d}}\frac{\mu(du)\mu(dv)}{(\rho_{\delta}(u,v))^{\alpha}}\right]^{-1}. \tag{2.41}\] Using the same arguments (2.34) and (2.36) that were used for (2.33), we can deduce the following alternative expression of \(\dim_{\rho_{\delta}}(\cdot)\) in terms of Bessel-Riesz capacities: \[\dim_{\rho_{\delta}}(G)=\sup\left\{\alpha\geq 0:\mathcal{C}_{\rho_{\delta}}^{\alpha}(G)>0\right\}. \tag{2.42}\]

## 3 Hausdorff dimension for the range \(X(E)\) and graph \(Gr_{E}(X)\)

### Less irregular processes

Let \(E\subset[0,1]\) be a general Borel set. Our goal in this subsection is to give minimal conditions on \(\gamma\) under which upper and lower bounds for the Hausdorff dimension of the image \(X(E)\) and the graph \(Gr_{E}(X)\) are well quantified, and are preferably explicit. When \(X\) has stationary increments and \(\operatorname{ind}_{*}(\gamma)>0\), an explicit formula for the Hausdorff dimension of \(X(E)\) under the Euclidean metric was provided by Hawkes in [10, Theorem 2]. The following lemma shows that the condition \(\operatorname{ind}_{*}\left(\gamma\right)>0\) generically ensures that \(\gamma\) satisfies Condition \((\mathbf{C_{0+}})\). We also saw in the previous section that the converse is far from true, since \((\mathbf{C_{0+}})\) allows regularity classes with zero index. **Lemma 3.1**.: _Let \(\gamma\) be continuous, increasing, and concave near the origin. If we assume that \(\operatorname{ind}_{*}(\gamma)>0\), then \(\gamma\) satisfies Condition \((\mathbf{C_{0+}})\)._ Proof.: By a change of variables and an integration by parts, we obtain that for \(x\in(0,1)\) sufficiently small, we have \[I(x) :=\int_{0}^{1/2}\gamma(xy)\frac{dy}{y\sqrt{\log(1/y)}} \tag{3.1}\] \[=\int_{0}^{x/2}\sqrt{\log\left(x/y\right)}d\gamma(y)-\sqrt{\log(2)}\gamma(x/2)\] \[\leq\int_{0}^{x}\sqrt{\log\left(1/y\right)}d\gamma(y)=\int_{0}^{\gamma(x)}\sqrt{\log\left(\frac{1}{\gamma^{-1}(u)}\right)}du.\] Fix an arbitrary \(\alpha\in(0,\operatorname{ind}_{*}(\gamma))\); then \(\gamma(x)=o(x^{\alpha})\) near zero, and so \(u^{1/\alpha}=o\left(\gamma^{-1}(u)\right)\) near zero also.
Therefore, for any fixed \(\varepsilon\in(0,1)\), there exist \(\mathsf{c}_{\varepsilon}<\infty\) and \(x_{\varepsilon}\in(0,1/2]\) such that for all \(x\in(0,x_{\varepsilon}]\), \[I(x) \leq\alpha^{-1/2}\,\int_{0}^{\gamma(x)}\sqrt{\log\left(1/u\right)}du\] \[=\alpha^{-1/2}\left(\gamma(x)\,\sqrt{\log\left(\frac{1}{\gamma(x)}\right)}+\int_{0}^{\gamma(x)}\frac{dy}{\sqrt{\log(1/y)}}\right)\] \[\leq 2\,\alpha^{-1/2}\,\gamma(x)\,\sqrt{\log\left(\frac{1}{\gamma(x)}\right)}\] \[<\mathsf{c}_{\varepsilon}\left(\gamma(x)\right)^{1-\varepsilon}.\] Since \(\varepsilon\in(0,1)\) is arbitrary, the proof is complete. We relax the stationarity of increments, by assuming only that \(\delta\), the canonical metric of \(X\), is commensurate with \(\gamma\), i.e. \(\gamma\) satisfies relations (2.1). Then we have the following result, which also eliminates the need for a positive index. **Theorem 3.2**.: _Let \(X:[0,1]\to\mathbb{R}^{d}\) be a continuous \(d\)-dimensional centered Gaussian process whose i.i.d. scalar components all share a canonical metric \(\delta\) satisfying Condition \((\mathbf{\Gamma})\), i.e. relations (2.1). The following statements hold._ * _For any Borel set_ \(E\subset[0,1]\)_,_ \[\dim_{\mathrm{euc}}(X(E))\geq d\wedge\dim_{\delta}(E)\quad\text{ a.s.}\] (3.2) _and_ \[\dim_{\rho_{\delta}}\left(Gr_{E}(X)\right)\geq\dim_{\delta}(E)\quad\text{ a.s.}\] (3.3) * _Assume in addition that the function_ \(\gamma\) _in Condition_ \((\mathbf{\Gamma})\) _satisfies Condition_ \((\mathbf{C}_{\varepsilon})\) _for some_ \(\varepsilon\in(0,1)\)_. Then for any Borel set_ \(E\subset[0,1]\)_,_ \[\dim_{\delta}(E)\wedge d\leq\dim_{\mathrm{euc}}(X(E))\leq d\wedge(\dim_{\delta}(E)+\varepsilon\,d)\quad\text{ a.s.}\] (3.4) _and_ \[\dim_{\delta}(E)\leq\dim_{\rho_{\delta}}\left(Gr_{E}(X)\right)\leq\dim_{\delta}(E)+\varepsilon\,d\quad\text{ a.s.}\] (3.5) _where_ \(\dim_{\mathrm{euc}}(\cdot)\) _denotes the Hausdorff dimension associated with the Euclidean metric._ **Corollary 3.3**.: _Let \(X:[0,1]\to\mathbb{R}^{d}\) be a Gaussian process as in Theorem 3.2 such that \(\delta\) satisfies Condition \((\mathbf{\Gamma})\). If \(\gamma\) satisfies Condition \((\mathbf{C}_{\mathbf{0}+})\) then, for any Borel set \(E\subset[0,1]\), we have_ \[\dim_{\mathrm{euc}}(X(E))=d\wedge\dim_{\delta}(E)\quad\text{ and }\quad\dim_{\rho_{\delta}}\left(Gr_{E}(X)\right)=\dim_{\delta}(E)\quad\text{ almost surely.} \tag{3.6}\] Before proving Theorem 3.2 we introduce some notation. Let \(\mathfrak{I}=\bigcup_{n=0}^{\infty}\mathfrak{I}_{n}\) be the class of all \(\gamma\)-dyadic subintervals of \([0,1]\) such that the elements of each subclass \(\mathfrak{I}_{n}\) are of the form \[I_{j,n}:=[(j-1)\gamma^{-1}(2^{-n}),j\gamma^{-1}(2^{-n})],\] for \(n\in\mathbb{N}\) and \(1\leq j\leq\left(\gamma^{-1}(2^{-n})\right)^{-1}\). By using relations (2.1) and substituting \(\delta\)-balls by \(\gamma\)-dyadic intervals in the definition of Hausdorff measure, we obtain another family of outer measures \(\{\widetilde{H}_{\delta}^{\beta}(\cdot)\,:\,\beta>0\}\). Making use of relations (2.1) we can check that for all fixed \(\beta\), the measures \(\mathcal{H}_{\delta}^{\beta}(\cdot)\) and \(\widetilde{H}_{\delta}^{\beta}(\cdot)\) are commensurate, and hence equivalent. The detailed proof of this equivalence, omitted here for brevity, follows the lines of Taylor and Watson [26, p. 326], whose argument applies immediately due to the concavity of \(\gamma\) on a neighborhood of \(0\). Proof of Theorem 3.2.: We begin by proving \((i)\).
Let \(\zeta<d\wedge\dim_{\delta}(E)\), then (2.33) implies that there is a probability measure \(\nu\) supported on \(E\) such that \[\int_{E}\int_{E}\frac{\nu(ds)\nu(dt)}{\left(\delta\left(s,t\right)\right)^{ \zeta}}<\infty. \tag{3.7}\] Let \(\mu:=\nu\circ X^{-1}\) be the image of \(\nu\) by the process \(X\), then by transfer theorem, Fubini's theorem and scaling property we have \[\begin{split}\mathbb{E}\left(\int_{\mathbb{R}^{2d}}\frac{\mu(dx )\mu(dy)}{\|x-y\|^{\zeta}}\right)&=\int_{E^{2}}\mathbb{E}\left( \frac{1}{\|X(t)-X(s)\|^{\zeta}}\right)\nu(ds)\nu(dt)\\ &=\mathsf{c}_{1,\zeta}\,\int_{E^{2}}\frac{\nu(ds)\nu(dt)}{\delta (t,s)^{\zeta}}<\infty,\end{split} \tag{3.8}\] where \(\mathsf{c}_{1,\zeta}:=\mathbb{E}\left(1/\|Z\|^{\zeta}\right)\) with \(Z\sim\mathcal{N}(0,I_{d})\), which is finite because \(\zeta<d\). Then \(\mathcal{C}_{\text{euc}}^{\zeta}(X(E))>0\) a.s. Hence the classical Frostman theorem ensures that \(\dim_{\text{euc}}\left(X(E)\right)\geq\zeta\,\) a.s., and letting \(\zeta\uparrow d\wedge\dim_{\delta}(E)\) we obtain (3.2). Let us now prove (3.3), let \(\zeta<\dim_{\delta}(E)\) be arbitrary and let \(\nu\) be the probability measure such that \(\mathcal{E}_{\delta,\alpha}(\nu)<\infty\). Let \(\widetilde{\mu}:=\nu\circ Gr(X)^{-1}\) be the image of \(\nu\) by the map \(t\mapsto(t,X(t))\), then again transfer theorem, Fubini's theorem and scaling property imply that \[\begin{split}\mathbb{E}\left(\int_{(\mathbb{R}_{+}\times \mathbb{R}^{d})^{2}}\frac{\widetilde{\mu}(dx)\widetilde{\mu}(dy)}{\left(\rho_ {\delta}((t,x),(s,y)\right)^{\zeta}}\right)&=\int_{E^{2}} \mathbb{E}\left(\frac{1}{\left(\delta(t,s)\vee\|X(t)-X(s)\|^{\zeta}\right)} \right)\nu(ds)\nu(dt)\\ &=\mathsf{c}_{2,\zeta}\,\int_{E^{2}}\frac{\nu(ds)\nu(dt)}{\delta (t,s)^{\zeta}}<\infty,\end{split} \tag{3.9}\] where \(\mathsf{c}_{2,\zeta}:=\mathbb{P}[\|Z\|\leq 1]+\mathbb{E}\left[\|Z\|^{-\zeta} \,1_{[\|Z\|\geq 1]}\right]\) with \(Z\sim\mathcal{N}(0,I_{d})\), which is finite whenever \(\zeta\) is. Then \(\mathcal{C}_{\rho_{\delta}}^{\zeta}\left(Gr_{E}(X)\right)>0\,\) a.s. Hence (2.42) implies that \(\dim_{\rho_{\delta}}Gr_{E}(X)\geq\zeta\,\) a.s. and by letting \(\zeta\uparrow\dim_{\delta}(E)\) the desired lower bound (3.3) follows. Now let us prove \((ii)\), the lower bounds follow from (i), so it is sufficient to establish the upper bounds. We only prove (3.5), and the assertion in (3.4) follows from a projection argument. Let \(\zeta>\dim_{\delta}(E)\), by definition of Hausdorff dimension we have \(\mathcal{H}_{\delta}^{\zeta}(E)=0\) and then \(\widetilde{\mathcal{H}}_{\delta}^{\zeta}(E)=0\). Let \(\eta>0\) be arbitrary, then there is a family of \(\gamma\)-dyadic interval \((I_{k})_{k\geq 1}\) such that for every \(k\geq 1\) there is \(n_{k}\in\mathbb{N}\), \(1\leq j_{k}\leq\left(\gamma^{-1}(2^{-n_{k}})\right)^{-1}\) and \(I_{k}:=\left[\left(j_{k}-1\right)\gamma^{-1}\left(2^{-n_{k}}\right),j_{k}\, \gamma^{-1}\left(2^{-n_{k}}\right)\right]\) and we have \[E\subset\bigcup_{k=1}^{\infty}I_{k}\quad\text{ and }\quad\sum_{k=1}^{\infty}|I_{k}|_{ \delta}^{\zeta}<\eta, \tag{3.10}\] where \(|\cdot|_{\delta}\) denote the diameter associated to the metric \(\delta\). For all fixed \(n\geq 1\), let \(M_{n}\) be the number of indices \(k\) for which \(n_{k}=n\), which is obviously finite due to right hand part of (3.10). Let us denote the corresponding \(\gamma\)-dyadic intervals by \(I_{i}^{n}\) for \(i=1,\ldots,M_{n}\). It is not hard to check, using the commensurability condition \((\mathbf{\Gamma})\), i.e. 
(2.1), that for all \(i=1,\ldots,M_{n}\) we have \(\mathsf{c}_{3}\,2^{-n}\leq|I_{i}^{n}|_{\delta}\leq\mathsf{c}_{4}\,2^{-n}\) where the constants \(\mathsf{c}_{3}\) and \(\mathsf{c}_{4}\) depend on \(l\) only. Then \[\sum_{n=1}^{\infty}M_{n}2^{-n\,\zeta}<\eta/\mathsf{c}_{3}. \tag{3.11}\] Let \(K\subset\mathbb{R}^{d+1}\) be an arbitrary compact set, we will construct an adequate covering of \(Gr_{E}\left(X\right)\cap K\). To simplify we suppose that \(K=[0,1]^{d+1}\). For every \(n\geq 1\) let \(\mathfrak{C}_{n}\) be the collection of Euclidean dyadic subcubes of \([0,1]^{d}\) of side length \(2^{-n}\), and for all \(i=1,...,M_{n}\) let \(\mathcal{G}_{n,i}\) be the collection of cubes \(C\in\mathfrak{C}_{n}\) such that \(X\left(I_{i}^{n}\right)\cap C\neq\emptyset\). Then we have \[Gr_{E}\left(X\right)\cap[0,1]^{d+1}\subseteq\bigcup_{n=1}^{\infty}\,\bigcup _{i=1}^{M_{n}}\,\bigcup_{C\in\mathcal{G}_{n,i}}I_{i}^{n}\times C. \tag{3.12}\] Let \(\varepsilon\in(0,1)\) such that \(\gamma\) satisfies Condition \((\mathbf{C}_{\varepsilon})\). For all \(n\geq 1\), \(i\in\{1,...,M_{n}\}\) and \(C\in\mathfrak{C}_{n}\), (2.5) and (2.25) imply that \[\mathbb{P}\left\{C\in\mathcal{G}_{n,i}\right\}\leq\mathsf{c}_{5}2^{-n\,(1- \varepsilon)d}, \tag{3.13}\] where \(\mathsf{c}_{5}\) depends on \(\varepsilon\) only. Combining (3.11), (3.12) and (3.13) we obtain \[\begin{split}\mathbb{E}\left(\mathcal{H}_{\rho_{\delta},\infty}^ {\zeta+\varepsilon\,d}\left(Gr_{E}(X)\cap[0,1]^{d}\right)\right)& \leq\mathsf{c}_{6}\,\sum_{n=1}^{\infty}\sum_{i=1}^{M_{n}}\sum_{I \in\mathfrak{C}_{n}}2^{-n(\zeta+\varepsilon\,d)}\mathbb{P}\{C\in\mathcal{G}_ {n,i}\}\\ &\leq\mathsf{c}_{7}\sum_{n=1}^{\infty}M_{n}\operatorname{Card}( \mathfrak{C}_{n})2^{-n(d+\zeta)}\\ &=\mathsf{c}_{7}\sum_{n=1}^{\infty}M_{n}2^{-n\,\zeta}<\mathsf{c} _{8}\,\eta,\end{split} \tag{3.14}\] where \(\mathcal{H}_{\rho_{\delta},\infty}^{\alpha}(\cdot)\) represent the \(\alpha\)-Hausdorff content in the metric \(\rho_{\delta}\) which is defined in (2.39) and the constants \(\mathsf{c}_{6}\), \(\mathsf{c}_{7}\) and \(\mathsf{c}_{8}\) depend on \(\varepsilon\) only. Since \(\eta>0\) is arbitrary we get that \[\mathcal{H}_{\rho_{\delta},\infty}^{\zeta+\varepsilon\,d}\left(Gr_{E}(X)\cap K \right)=0\quad\text{ almost surely},\] and therefore, by using (2.40), we have \(\dim_{\rho_{\delta}}\left(Gr_{E}(X)\cap K\right)\leq\zeta+\varepsilon\,d\,\) a.s. for all \(K\subset\mathbb{R}_{+}\times\mathbb{R}^{d}\). Hence by the countable stability of Hausdorff dimension and by making \(\varepsilon\downarrow 0\) and \(\zeta\downarrow\dim_{\delta}(E)\) we get the desired upper bound in (3.5). Finally, the upper bound in (3.4) follows directly from the facts that Hausdorff dimension does not increase by taking projection. Here are some interesting cases that are covered by our study in this section **Example 3.1**.: 1. _Lipschitz scale: Let_ \(\gamma\) _be defined near_ \(0\) _by_ \(\gamma(r):=r\,L(r)\)_, where_ \(L(\cdot)\) _is a slowly varying function at_ \(0\) _with_ \(\lim_{0+}L\left(r\right)\in(0,+\infty]\)_, and let_ \(\delta\) _such that relations (_2.1_) (Condition_ \((\mathbf{\Gamma})\)_) are satisfied. Then it is not difficult to show that_ \(\dim_{\delta}(E)=\dim_{\mathrm{euc}}(E)\)_, where_ \(\dim_{\mathrm{euc}}(\cdot)\) _denote the Hausdorff dimension associated to the Euclidean metric on_ \(\mathbb{R}_{+}\)_._ _._ 2. 
_Holder scale: For_ \(\alpha\in(0,1)\) _let_ \(\gamma\) _be defined near_ \(0\) _by_ \(\gamma(r)=r^{\alpha}L\left(r\right)\)_, where_ \(L(\cdot)\) _is a slowly varying function at_ \(0\)_, and let_ \(\delta\) _satisfy_ \((\mathbf{\Gamma})\)_. Then it can be shown easily, using the slowly varying property of_ \(L(\cdot)\)_, that_ \(\dim_{\delta}(E)=\dim_{\mathrm{euc}}(E)/\alpha\)_._ 3. _Beyond the Holder scale: For_ \(q\in(0,1)\) _let_ \(\gamma\) _be defined by_ \(\gamma_{q}(x):=\exp\left(-\log^{q}(1/x)\right)\) _and_ \(\delta\) _such that (_2.1_) holds. First, note that for any Borel set_ \(E\subset[0,1]\) _such that_ \(\dim_{\delta}(E)<\infty\)_, by using the fact that_ \(r^{\alpha}=o\left(\gamma(r)\right)\) _for any_ \(\alpha>0\)_, one can show that_ \(\dim_{\mathrm{euc}}(E)=0\)_. Hence the Euclidean metric is not sufficient to describe the geometry of some Borel sets._

### Most irregular processes (LogBm)

Now, when \(\gamma(x)=\log^{-\beta}(1/x)\,\) for some \(\beta>1/2\), Condition \((\mathbf{C_{0+}})\) fails to hold; we only have \[\int_{0}^{1/2}\gamma(xy)\frac{dy}{y\sqrt{\log(1/y)}}\asymp\gamma(x)\,\sqrt{\log(1/x)}=(\gamma(x))^{1-1/2\beta}\,,\] which means that \(\gamma\) satisfies Condition \((\mathbf{C_{1/2\beta}})\), but none of Conditions \((\mathbf{C_{\varepsilon}})\) for \(\varepsilon\in(0,1/2\beta)\) are satisfied. On the other hand, since \(\delta(t,s)\asymp\log^{-\beta}\left(\frac{1}{|t-s|}\right)\), it follows that \[\dim_{\delta}(E)=\dim_{\log}(E)/\beta, \tag{3.15}\] where \(\dim_{\log}(\cdot)\) is the Hausdorff dimension in the metric \(\delta_{\log}(t,s):=\log^{-1}(1/|t-s|)\). Therefore Theorem 3.2 ensures that \[\frac{\dim_{\log}(E)}{\beta}\wedge d\leq\dim_{\mathrm{euc}}X(E)\leq\frac{1}{\beta}\left(\dim_{\log}(E)+\frac{d}{2}\right)\wedge d \tag{3.16}\] and \[\frac{\dim_{\log}(E)}{\beta}\leq\dim_{\rho_{\delta}}Gr_{E}(X)\leq\frac{1}{\beta}\left(\dim_{\log}(E)+\frac{d}{2}\right). \tag{3.17}\] The upper bounds above might be improved by using an alternative covering argument based on the uniform modulus of continuity of \(X\). This is what the following proposition shows. **Proposition 3.4**.: _Let \(X\) be a \(d\)-dimensional Gaussian process such that the canonical metric \(\delta\) is commensurate with \(\gamma(r)=\log^{-\beta}(1/r)\) for some \(\beta>1/2\). Then almost surely_ \[\dim_{\mathrm{euc}}X(E)\leq\frac{\dim_{\log}(E)}{\beta-1/2}\wedge d\quad\text{ and }\quad\dim_{\rho_{\delta}}Gr_{E}(X)\leq\frac{\dim_{\log}(E)}{\beta-1/2}, \tag{3.18}\] _for all \(E\subset[0,1]\)._ Proof.: First, by relations (2.1) and the fact that \(\gamma\) is increasing near the origin with \(\gamma(0)=0\), we have that \[\Phi_{\gamma}(r):=\gamma(r)\sqrt{\log(1/r)}+\int_{0}^{r}\frac{\gamma(y)}{y\sqrt{\log(1/y)}}dy, \tag{3.19}\] is a uniform modulus of continuity for \(X\), see for example [16, Theorem 7.2.1, p. 304]. Then there is \(\Omega_{0}\subset\Omega\) such that \(\mathbb{P}(\Omega_{0})=1\) and for all \(\omega\in\Omega_{0}\) there exists a random number \(\eta_{0}(\omega)\in(0,1)\) such that \[\sup_{|t-s|\leq\eta}|X(t)-X(s)|\leq\mathsf{c}_{1}\,\Phi_{\gamma}(\eta)\quad\text{ for all }0\leq\eta<\eta_{0}(\omega), \tag{3.20}\] where \(\mathsf{c}_{1}\) is a positive constant. Since \(\Phi_{\gamma}(\eta)=O\left(\log^{-(\beta-1/2)}(1/\eta)\right)\), (3.20) ensures that for all \(0<r<\log^{-1}(\frac{1}{\eta_{0}(\omega)})\) the image of any ball \(B_{\delta_{\log}}(t,r)\) by \(X(\cdot,\omega)\) has a diameter smaller than \(\mathsf{c}_{1}(2r)^{\beta-1/2}\).
Let \(\omega\in\Omega_{0}\) be fixed and let \(E\subseteq[0,1]\) such that \(\dim_{\delta}(E)<\infty\). Then for any \(\xi>\dim_{\log}(E)\), there is a covering of \(E\) by balls \(\left\{B_{\delta_{\log}}(t_{i},r_{i}):i\geq 1\right\}\) such that \(\sum_{i=1}^{\infty}(2r_{i})^{\xi}\leq\varepsilon\) for some \(\varepsilon\) arbitrarily small which we choose such that \(\varepsilon^{1/\xi}\leq 2\log^{-1}(\frac{1}{\eta_{0}(\omega)})\), then \(Gr_{E}(X)\) is covered by the family \(\left\{B_{\delta_{\log}}(t_{i},r_{i})\times X\left(B_{\delta_{\log}}(t_{i},r_ {i})\right):i\geq 1\right\}\) and we have \[\mathcal{H}_{\rho_{\delta},\infty}^{\xi/(\beta-1/2)}\left(Gr_{E} (X)\right) \leq\sum_{i=1}^{\infty}\left(\left|B_{\delta_{\log}}(t_{i},r_{i}) \times X\left(B_{\delta_{\log}}(t_{i},r_{i})\right)\right|_{\rho_{\delta}} \right)^{\xi/(\beta-1/2)}\] \[\leq\mathsf{c}_{2}\,\sum_{i=1}^{\infty}(2r_{i})^{\xi}\leq \mathsf{c}_{2}\,\varepsilon.\] Since \(\varepsilon\) is arbitrarily small we get \(\mathcal{H}_{\rho_{\delta},\infty}^{\xi/(\beta-1/2)}\left(Gr_{E}(X)\right)=0\) and consequently \(\dim_{\rho_{\delta}}Gr_{E}(X)\leq\xi/(\beta-1/2)\). By letting \(\xi\downarrow\dim_{\log}(E)\) the proof is complete. **Remark 3.5**.: * The upper bounds in (3.18) are uniform in the sense that the negligible set does not depend on \(E\). The covering method used in this proof can be adapted to show that, under the following stronger condition (\(\widetilde{\mathbf{C}}_{0+}\)): "\(\Phi_{\gamma}(r)=o\left(\gamma^{1-\varepsilon}(r)\right)\) near zero for all \(\varepsilon>0\) small enough", the upper bounds \(\dim_{\delta}(E)\wedge d\) and \(\dim_{\delta}(E)\) are uniform for \(X(E)\) and \(Gr_{E}(X)\), respectively. * Let \(E\subset[0,1]\) such that \(0<\dim_{\log}(E)<\infty\) then by combining (3.2), (3.16) and (3.18) we obtain \[\frac{\dim_{\log}(E)}{\beta}\wedge d\leq\dim_{\mathrm{euc}}(X(E))\leq\frac{ \dim_{\log}(E)}{\beta-1/2}\wedge d\quad\text{ a.s.}\] This is due to the fact \(\frac{1}{\beta}\left(\dim_{\log}(E)+\frac{d}{2}\right)\geq\frac{\dim_{\log}( E)}{\beta-1/2}\wedge d\). Hence the upper bound nearly agrees with the lower bound near the upper (less irregular) end of the logarithmic scale, i.e. for large \(\beta\). Since the previous methods lead to different upper and lower bounds for Hausdorff dimensions of the image and the graph in the logarithmic scale, it is interesting to ask the following question: Are the random variables \(\dim_{\rho_{\delta}}\left(Gr_{E}(X)\right)\) and \(\dim_{\mathrm{euc}}X(E)\) constant almost surely in this logarithmic scale? The main goal of the remaining part of this section is to answer this question. The key probabilistic idea is to use the Karhunen-Loeve expansion of the process \(X\) so that we can show that the random variables \(\dim_{\rho_{\delta}}\left(Gr_{E}(X)\right)\) and \(\dim_{\mathrm{euc}}X(E)\) are measurable with respect to a tail sigma-field, and therefore by the zero-one law of Kolmogorov they should be almost surely constants. Let us first recall the Karhunen-Loeve expansion, which says that \(X\) has the following \(\mathcal{L}^{2}\)-representation, see for example [1, Theorem 3.7 p. 70 and (3.25) p. 76] : \[X(t)=\sum_{i=1}^{\infty}\lambda_{i}^{1/2}\,\xi_{i}\,\psi_{i}(t), \tag{3.21}\] where \((\xi_{i})_{i\geq 1}\) is an i.i.d. 
sequence of \(N(0,I_{d})\) standard Gaussian vectors, and \((\lambda_{i})_{i\geq 1}\) and \((\psi_{i})_{i\geq 1}\) are respectively eigenvalues and eigenvectors of the covariance operator of \(Q_{X}\), defined on \(L^{2}([0,1])\) by \[(Q_{X}\psi)(t)=\int_{0}^{1}Q(s,t)\psi(s)ds,\] where \(Q(s,t):=\mathbb{E}\left[X_{0}(s)X_{0}(t)\right]\) is the covariance function of each component of \(X\). It is easy to see from (3.21) that the canonical metric \(\delta\) has the following representation \[\delta(s,t)=\left(\sum_{i=1}^{\infty}\lambda_{i}(\psi_{i}(t)-\psi_{i}(s))^{2} \right)^{1/2}. \tag{3.22}\] In addition, this formula shows that every eigenfunction \(\psi_{i}\) is continuous, since all eigenfunctions share \(\delta\) as a modulus of continuity up to a multiplicative constant, i.e. \(|\psi_{i}(t)-\psi_{i}(s)|\leq\lambda_{i}^{-1/2}\delta(s,t)\). **Theorem 3.6**.: _Let \(\{X(t):t\in[0,1]\}\) be a \(d\)-dimensional continuous Gaussian process as defined in (2.2), satisfying the commensurability condition \((\mathbf{\Gamma})\), i.e. relations (2.1), such that_ \[\lim_{r\to 0}\gamma(r)\log^{1/2}(1/r)=0. \tag{3.23}\] _Then for all Borel set \(E\subset(0,1)\) there is a non-random constant \(\mathbf{C}(E)\in[0,+\infty]\) such that_ \[\dim_{\rho_{\delta}}\left(Gr_{E}(X)\right)=\mathbf{C}(E)\quad a.s. \tag{3.24}\] The following deterministic lemma is a key to prove Theorem 3.6. **Lemma 3.7**.: _Let \(f:\,[a,b]\to\mathbb{R}^{d}\) be a Borel measurable function and \(g:\,[a,b]\to\mathbb{R}^{d}\) be a Lipschitz in the metric \(\delta\), i.e._ \[\|g(t)-g(s)\|\leq\mathsf{C}_{g}\,\delta(t,s)\quad\text{ for all }s,t\in[a,b], \tag{3.25}\] _for some positive constant \(\mathsf{C}_{g}\). Then for all Borel set \(E\subseteq[a,b]\) we have_ \[\dim_{\rho_{\delta}}\left(Gr_{E}(f+g)\right)=\dim_{\rho_{\delta}}\left(Gr_{E }(f)\right). \tag{3.26}\] Proof.: Let \(\alpha:=\dim_{\rho_{\delta}}\left(Gr_{E}(f)\right).\) Then \(\;\mathcal{H}_{\rho_{\delta}}^{\alpha+\varepsilon}\left(Gr_{E}(f)\right)=0\) for all \(\varepsilon>0\). Therefore we fix \(\varepsilon>0\) and \(\eta>0\) to be arbitrary so that there exists a cover \(\left(B_{\delta}(t_{i},r_{i})\times B(x_{i},r_{i})\right)_{j\geq 1}\) of \(Gr_{E}(f)\) such that \[\sum_{j=1}^{\infty}r_{i}^{\alpha+\varepsilon}<\eta. \tag{3.27}\] From this last cover of \(Gr_{E}(f)\) we will construct another cover of \(Gr_{E}(f+g)\). Indeed, by using (3.25) if \(t\in B_{\delta}(t_{i},r_{i})\) for some \(i\geq 1\), then \[\|g(t)-g(t_{i})\|\leq\mathsf{C}_{g}\,r_{i}. \tag{3.28}\] Now let \(i\geq 1\) such that \(t\in B_{\delta}(t_{i},r_{i})\) and \(f(t)\in B(x_{i},r_{i})\), we then deduce from this and from (3.28) that \((f+g)(t)\in B\left(\widetilde{x}_{i},\widetilde{r}_{i}\right)\) where \(\widetilde{x}_{i}:=x_{i}+g(t_{i})\) and \(\widetilde{r}_{i}:=(1+\mathsf{C}_{g})r_{i}\). Therefore the collection of balls \(\left(B_{\delta}\left(t_{i},\widetilde{r}_{i}\right)\times B(\widetilde{x}_{ i},\widetilde{r}_{i})\right)\) is a cover of \(Gr_{E}(f+g)\) and we have \[\mathcal{H}_{\rho_{\delta},\infty}^{\alpha+\varepsilon}\left(Gr_{E}(f+g) \right)\leq(1+\mathsf{C}_{g})^{\alpha+\varepsilon}\sum_{j=1}^{\infty}r_{i}^{ \alpha+\varepsilon}\leq(1+\mathsf{C}_{g})^{\alpha+\varepsilon}\,\eta.\] Since \(\eta>0\) is arbitrary, this shows that \(\mathcal{H}_{\rho_{\delta},\infty}^{\alpha+\varepsilon}\left(Gr_{E}(f+g)\right)=0\) for all \(\varepsilon>0\). Hence (2.40) ensures that \[\dim_{\rho_{\delta}}\left(Gr_{E}(f+g)\right)\leq\alpha=\dim_{\rho_{\delta}} \left(Gr_{E}(f)\right). 
\tag{3.29}\] The other inequality follows from (3.29) with \(\widetilde{f}:=f+g\) and \(\widetilde{g}:=-g\). Proof of Theorem 3.6.: First let us note that (3.23) implies that \(X\) has a continuous version; then, by using [1, Theorem 3.8], the series in (3.21) converges uniformly on \([0,1]\) a.s., thus providing a concrete version of \(X\). Considering this version, we define for all \(n\geq 1\) the finite and infinite parts of \(X\), denoted by \(X_{1,n}\) and \(X_{n,\infty}\) as follows \[X_{1,n}(t):=\sum_{i=1}^{n}\lambda_{i}^{1/2}\,\xi_{i}\,\psi_{i}(t)\quad\text{ and }\quad X_{n,\infty}(t):=X(t)-X_{1,n}(t)\quad\text{for all }t\in[0,1].\] Then we have \[\|X_{1,n}(t)-X_{1,n}(s)\|\leq\left(\sum_{i=1}^{n}\lvert\xi_{i}\rvert\right)\,\sup_{1\leq i\leq n}\lambda_{i}^{1/2}\lvert\psi_{i}(t)-\psi_{i}(s)\rvert\leq\left(\sum_{i=1}^{n}\lvert\xi_{i}\rvert\right)\,\delta(t,s), \tag{3.30}\] for all \(s,t\in[0,1]\), almost surely, where we used (3.22) in the last inequality. We fix \(E\subset[0,1]\) to be a Borel set. By making use of (3.30), Lemma 3.7 applies for almost every \(\omega\); specifically, for fixed \(n\), this is the set of \(\omega\)'s such that \(\sum_{i=1}^{n}\lvert\xi_{i}\rvert\) is finite. Lemma 3.7 thus ensures that, by countable intersection, almost surely, \[\dim_{\rho_{\delta}}Gr_{E}(X)=\dim_{\rho_{\delta}}Gr_{E}(X_{n,\infty})\qquad\text{ for all }n\geq 1.\] This shows that the random variable \(\dim_{\rho_{\delta}}\left(Gr_{E}(X)\right)\) is measurable with respect to the tail \(\sigma\)-algebra \(\bigcap_{n=1}^{\infty}\sigma\left(\left\{\xi_{i},i\geq n+1\right\}\right)\). Hence Kolmogorov's 0-1 law ensures that this random variable is constant almost surely. **Remark 3.8**.: The proof of Theorem 3.6 relies on the fact that the dimension of the graph of the process \(X\) is in the tail sigma-algebra of a sequence of i.i.d. random variables. But the Karhunen-Loeve expansion of \(X\) may have only finitely many non-zero terms, making that tail sigma-algebra property arguably artificial. Still, the proof's argument carries through, though the result of the theorem can be obtained more directly. Indeed, suppose that \(\lambda_{i}=0\) for all \(i\) greater than some fixed \(n_{0}\), that the eigenfunctions \(\psi_{i}\) are differentiable for all \(i\leq n_{0}\), and that at least one of them satisfies \(\lvert\psi_{i}(t)-\psi_{i}(s)\rvert\geq\mathsf{c}_{i}\,\lvert t-s\rvert\) for \(s,t\in J\), for some \(i\leq n_{0}\) and some interval \(J\subset[0,1]\). Then the canonical metric of the process is commensurate with the Euclidean metric on \(J\), and Corollary 3.3 proves that, for all \(E\subset J\), the Hausdorff dimension of \(Gr_{E}(X)\) equals the usual Hausdorff dimension of \(E\). More generally, still assuming that \(\lambda_{i}=0\) for all \(i\) greater than some fixed \(n_{0}\), but without assuming that the eigenfunctions \(\psi_{i}\) are differentiable, by applying Lemma 3.7 with \(f=X_{n_{0},\infty}\equiv 0\) and \(g=X_{1,n_{0}}=X\), we get \(\dim_{\rho_{\delta}}(Gr_{E}(X))=\dim_{\rho_{\delta}}(Gr_{E}(0))=\dim_{\rho_{\delta}}(E\times 0)=\dim_{\delta}(E)\). Due to the complex structure of the image compared to the graph, the previous methodology, which is based on a covering argument and Hausdorff measure techniques to show that \(\dim_{\rho_{\delta}}Gr_{E}(X)\) is measurable with respect to a tail sigma-field, is difficult to apply to the image case, which pushes us to seek other methods.
To prove a similar result for \(\dim_{\rm euc}X(E)\) we will proceed differently, trying to use the Karhunen-Loeve expansion again combined with a potential theoretical approach to be able to prove that \(\dim_{\rm euc}X(E)\) is measurable with respect to the tail sigma-field associated with the sequence of Gaussian random variables appearing in the Karhunen-Loeve expansion. **Theorem 3.9**.: _Under the same conditions of Theorem 3.6 we have for all Borel set \(E\subset[0,1]\) there exists a non-random constant \({\bf c}(E)\in[0,d]\) such that_ \[\dim_{\rm euc}\left(X(E)\right)={\bf c}(E)\quad a.s. \tag{3.31}\] **Remark 3.10**.: Just as in Theorem 3.6, the proof of Theorem 3.9 also seems to use the Kolmogorov 0-1 law artificially when there is only a finite number of nonzero Karhunen-Loeve eigenvalues \(\lambda_{i}\). Yet the same arguments as in Remark 3.8 lead to a direct proof that the dimension of the image is non-random, and in fact, \(\dim_{\rm euc}(X(E))=\dim_{\delta}(E)\wedge d\). **Remark 3.11**.: We believe that the situation in the previous remark can never occur if condition \(({\bf C}_{0+})\) does not hold. We know of two classes of examples where no such situation can be constructed because all processes that violate condition \(({\bf C}_{0+})\) in those classes have infinitely many non-zero Karhunen-Loeve eigenvalues. Recall the Volterra processes in (1.1). Then we can prove that every eigenfunction \(\psi_{i}\) of such a process is \(\alpha\)-Holder-continuous on \([0,1]\) for any \(0<\alpha<1\). The details are left to the reader. For such a process, if its Karhunen-Loeve expansion had only finitely many non-zero terms, then the process would also be \(\alpha\)-Holder-continuous, almost surely, which would imply, using the lower-bound side of the commensurability condition \((\boldsymbol{\Gamma})\) in (2.1), that its standard deviation function \(\gamma\) has a positive lower index, and thus that condition \(({\bf C}_{0+})\) holds because of Lemma 3.1; again details are omitted. We also leave it to the reader to check that, in the case of processes with stationary increments, the same argument via Holder-continuity holds. Thus, for both Volterra processes and processes with stationary increments satisfying condition \((\boldsymbol{\Gamma})\), we can prove by contrapositive that if condition \(({\bf C}_{0+})\) is violated, then the Karhunen-Loeve expansion had infinitely many non-zero terms. In order to prove Theorem 3.9 we need some preliminaries. First we start by a classical result, whose proof is an application of Hahn-Banach theorem, see for example Theorem 1.20 p. 17 in [15]. **Lemma 3.12**.: _Let \((E,\rho)\) be a compact metric space and \(f\,:\,E\,\to\mathbb{R}^{d}\) be a continuous function. Then for any probability measure \(\mu\) on \(f(E)\) there exists a probability measure \(\nu\) on \(E\) such that \(\mu=\nu\circ f^{-1}\)._ Recall that the Karhunen-Loeve expansion provides a concrete continuous version of the Gaussian process \(X\), that all its eigenfunctions are continuous, and that using the notation \(X_{1,n}\) and \(X_{n,\infty}\) defined in the proof of Theorem 3.6, the function \(X_{1,n}\), as a finite (random) linear combination of eigenfunctions, is continuous, and therefore, \(X_{n,\infty}\) is continuous as a difference of two continuous processes. All these statements are to be understood almost surely. 
Let us denote by \(\mathbb{Q}_{1,n}\) and \(\mathbb{Q}_{n,\infty}\) their distributions on the space of continuous functions, and by \(\delta_{1,n}\) and \(\delta_{n,\infty}\) their associated canonical metrics respectively. The expression (3.22) then immediately implies \[\delta_{1,n}^{2}(s,t)=\sum_{i=1}^{n}\lambda_{i}\left(\psi_{i}(t)-\psi_{i}(s) \right)^{2}\quad\text{ and }\quad\delta_{n,\infty}^{2}(s,t)=\sum_{i=n+1}^{\infty} \lambda_{i}\left(\psi_{i}(t)-\psi_{i}(s)\right)^{2}, \tag{3.32}\] and these two processes are independent by construction. Therefore we have the equality in distribution \((X,\mathbb{P})\stackrel{{ d}}{{=}}(X_{1,n}+X_{n,\infty},\mathbb{Q}_{1,n }\otimes\mathbb{Q}_{n,\infty})\). For convenience, we denote by \(\Omega_{1,n}\) and \(\Omega_{n,\infty}\) two copies of the space of continuous functions; the measures \(\mathbb{Q}_{1,n}\) and \(\mathbb{Q}_{n,\infty}\) are defined on these two spaces. We may also choose to define the law of \(X\) on the set of continuous functions \(\Omega=\Omega_{1,n}\times\Omega_{n,\infty}\), and for \(\omega\in\Omega\) the paths \(X_{1,n}(\omega)\) and \(X_{n,\infty}(\omega)\) can be understood using the obvious projection. Proof of Theorem 3.9.: For all \(n\geq 1\) and all Borel set \(E\subset[0,1]\) we denote by \(K_{n}(\cdot)\) the following random kernel \[K_{n}(s,t,\omega):=\left(\delta_{1,n}\left(s,t\right)\vee\|X_{n,\infty}(s, \omega)-X_{n,\infty}(t,\omega)\|\right)^{-1}\quad\text{ for all }s,t\in[0,1]\text{ and }\omega\in\Omega. \tag{3.33}\] Let \(\nu\) be a probability measure on \(E\). Denote by \(\zeta_{n}\left(E,\cdot\right)\) the random variable defined as follow \[\zeta_{n}\left(E\right):=\sup\left\{\zeta>0\,:\,\inf_{\nu\in\mathcal{P}(E)} \int_{E}\int_{E}\left[K_{n}(s,t,\cdot)\right]^{\zeta}\nu(ds)\nu(dt)<\infty \right\}. \tag{3.34}\] We will show that for any fixed integer \(n\geq 1\) and for all Borel set \(E\subset[0,1]\) we have \[\dim_{\text{euc}}X(E)=\zeta_{n}(E)\wedge d\quad\text{ almost surely.} \tag{3.35}\] Since the integers are countable, (3.35) holds almost surely for all \(n\geq 1\) simultaneously. In particular, almost surely, \(\zeta_{n}(E)\wedge d\) does not depend on \(n\). Indeed, let \(n\geq 1\) be fixed and \(E\subseteq[0,1]\) be a Borel set, we will first prove that \(\dim_{\text{euc}}X(E)\leq\zeta_{n}(E)\wedge d\,\) a.s. Let \(\omega\in\Omega_{n}:=\{\,\max_{i\leq n}\|\xi_{i}\|<\infty\,\}\), and assume that \(\zeta_{n}(E)(\omega)<d\) otherwise there is nothing to prove. Then (3.34) implies that for all \(\zeta>\zeta_{n}(E)(\omega)\) we have \[\int_{E}\int_{E}\left[K_{n}(s,t,\omega)\right]^{\zeta}\nu(ds)\nu(dt)=\infty \quad\text{for all }\nu\in\mathcal{P}(E). \tag{3.36}\] On the other hand, we note that for all \(s,t\in[0,1]\) we have \[\begin{split}\|X(t,\omega)-X(s,\omega)\|&\leq\|X_{ 1,n}(t,\omega)-X_{1,n}(s,\omega)\|+\|X_{n,\infty}(t,\omega)-X_{n,\infty}(s, \omega)\|\\ &\leq\left(\max_{i\leq n}\|\xi_{i}(\omega)\|\,\,\right)\delta_{1, n}\left(t,s\right)+\|X_{n,\infty}(t,\omega)-X_{n,\infty}(s,\omega)\|\\ &\leq\left(\max_{i\leq n}\|\xi_{i}(\omega)\|+1\right)\,\left[ \delta_{1,n}\left(t,s\right)\vee\|X_{n,\infty}(t,\omega)-X_{n,\infty}(s, \omega)\|\right]\\ &=\left(\max_{i\leq n}\|\xi_{i}(\omega)\|+1\right)\,\left[K_{n}( s,t,\omega)\right]^{-1}.\end{split} \tag{3.37}\] Thus by (3.36) and (3.37) we infer that \[\int_{E}\int_{E}\frac{\nu(ds)\nu(dt)}{\|X(t,\omega)-X(s,\omega)\|^{\zeta}}= \infty\quad\text{ for all }\nu\in\mathcal{P}(E). 
\tag{3.38}\] Using Lemma 3.12, any probability measure \(\mu\) on \(X(E,\omega)\) may be written as \(\mu=\nu\circ X^{-1}(\cdot,\omega)\) for some \(\nu\in\mathcal{P}(E)\), so using this fact as well as (3.38) we obtain \(\mathcal{C}_{\text{euc}}^{\zeta}\)\((X(E,\omega))=0\) and then by (2.33) we have \(\dim_{\text{euc}}X(E,\omega)\leq\zeta\). Letting \(\zeta\downarrow\zeta_{n}(E)(\omega)\) we get \(\dim_{\text{euc}}X(E,\omega)\leq\zeta_{n}(E)(\omega)\). Since \(\mathbb{P}(\Omega_{n})=1\), the desired upper bound hold almost surely for fixed \(n\), and then as we mentioned, for all \(n\) simultaneously. We will now show that \(\dim X(E)\geq\zeta_{n}(E)\wedge d\,\) a.s. First, we remark that the random variable \(\zeta_{n}(E)\) is measurable with respect to \(\sigma(\{\xi_{i}\,:\,i\geq n+1\,\})\) and therefore it is independent from \(X_{1,n}\). Let \(n\in\mathbb{N}\) and \(\omega_{2}\in\Omega_{n,\infty}\) be fixed, we assume that \(\zeta_{n}(E)(\omega_{2})>0\) otherwise there is nothing to prove. Let \(0<\zeta<\zeta_{n}(E)(\omega_{2})\wedge d\) be arbitrary, then there exists a probability measure \(\nu_{\omega_{2}}\in\mathcal{P}(E)\) such that \[\int_{E}\int_{E}\left[K_{n}(s,t,\omega_{2})\right]^{\zeta}\nu_{ \omega_{2}}(ds)\nu_{\omega_{2}}(dt)<\infty. \tag{3.39}\] Now for any \(\omega_{1}\in\Omega_{1,n}\) we consider the random probability measure \(\mu_{\omega_{1},\omega_{2}}\) defined on \(X(E)\) via \[\mu_{\omega_{1},\omega_{2}}(F):=\nu_{\omega_{2}}\left(\{s\in E\, :\,X\left(t,(\omega_{1},\omega_{2})\right)\in\,F\}\right)\quad\text{ for all }F\subset X(E).\] Our aim is to show that \[\mathcal{E}_{euc,\zeta}\left(\mu_{\omega_{1},\omega_{2}}\right)< \infty\quad\text{ for }\mathbb{Q}_{1,n}\text{-almost all }\omega_{1}\in\Omega_{1,n}. \tag{3.40}\] In fact, for \(\omega_{2}\in\Omega_{n,\infty}\) being fixed, taking expectation with respect to \(\mathbb{Q}_{1,n}(d\omega_{1})\) and using a transfer theorem and Fubini's theorem we obtain that \[\mathbb{E}_{\mathbb{Q}_{1,n}}\left(\mathcal{E}_{euc,\zeta}\left( \mu_{\cdot,\omega_{2}}\right)\right)=\int_{E}\int_{E}\underbrace{\mathbb{E}_{ \mathbb{Q}_{1,n}}\left(\frac{1}{\|X_{1,n}(t)-X_{1,n}(s)+X_{n,\infty}(t,\omega_ {2})-X_{n,\infty}(s,\omega_{2})\|^{\zeta}}\right)}_{:=I_{n,\zeta}(s,t,\, \omega_{2})}\nu_{\omega_{2}}(ds)\nu_{\omega_{2}}(dt). \tag{3.41}\] In order to prove (3.40) we only need to show that \[I_{n,\zeta}(t,s,\omega_{2})\leq\mathsf{c}_{0}\left[K_{n}(s,t, \omega_{2})\right]^{\zeta}\quad\text{for all }s,t\in E, \tag{3.42}\] where \(\mathsf{c}_{0}\) is a positive constant. Let \(s,t\in E\), if \(K_{n}(s,t,\omega_{2})=\infty\) the above inequality is obvious. So we assume that \(K_{n}(s,t,\omega_{2})<\infty\). Then for simplicity we let \[\mathsf{u}:=\delta_{1,n}(s,t)\quad\text{and}\quad\mathsf{v}( \omega_{2}):=X_{n,\infty}(t,\omega_{2})-X_{n,\infty}(s,\omega_{2}).\] Then using the Gaussian scaling property and the independence between \(X_{1,n}\) and \(X_{n,\infty}\) we have \[I_{n,\zeta}(s,t,\omega_{2})=\mathbb{E}_{\mathbb{Q}_{1,n}}\left( \frac{1}{\|\mathsf{u}\,Z+\mathsf{v}(\omega_{2})\|^{\zeta}}\right)=\int_{ \mathbb{R}^{d}}\frac{1}{\|\mathsf{u}\,x+\mathsf{v}(\omega_{2})\|^{\zeta}} \frac{e^{-\frac{\|x\|^{2}}{2}}}{(2\pi)^{d/2}}dx, \tag{3.43}\] where \(Z\) is a standard Gaussian vector \(N(0,I_{d})\). There are four possible cases: (i) \(\mathsf{u}=0<\|\mathsf{v}(\omega_{2})\|\), (ii) \(\|\mathsf{v}(\omega_{2})\|=0<\mathsf{u}\), (iii) \(0<\|\mathsf{v}(\omega_{2})\|\leq\mathsf{u}\) and (iv) \(0<\mathsf{u}\leq\|\mathsf{v}(\omega_{2})\|\). 
Since \(\zeta<d\), the inequality (3.42) is trivial in the first two cases, so let us prove it only in cases (iii) and (iv). First, for \(\mathsf{w}:=\mathsf{v}(\omega_{2})/\mathsf{u}\,\) let \(J(\mathsf{w})\) be defined as \[J\left(\mathsf{w}\right):=\int_{\mathbb{R}^{d}}\frac{1}{\|\,x+\mathsf{w}\|^{\zeta}}\frac{e^{-\frac{\|x\|^{2}}{2}}}{(2\pi)^{d/2}}dx. \tag{3.44}\] One can remark that \(I_{n,\zeta}(s,t,\omega_{2})\,=\,\mathsf{u}^{-\zeta}\,J(\mathsf{w})\). When \(0<\|\mathsf{v}(\omega_{2})\|\leq\mathsf{u}\), using the fact that the functions \(x\mapsto e^{-\|x\|^{2}/2}\) and \(\,x\mapsto\|x\|^{-\zeta}\) have the same monotonicity as functions of \(\|x\|\), we have, for all \(\mathsf{w}\in\mathbb{R}^{d}\), \[\int_{\mathbb{R}^{d}}(e^{-\|x+\mathsf{w}\|^{2}/2}-e^{-\|x\|^{2}/2})(\|x+\mathsf{w}\|^{-\zeta}-\|x\|^{-\zeta})dx\geq 0. \tag{3.45}\] Hence using a change of variables we obtain \[J\left(\mathsf{w}\right)\leq 2\int_{\mathbb{R}^{d}}\frac{1}{\|\,x\|^{\zeta}}\frac{e^{-\frac{\|x\|^{2}}{2}}}{(2\pi)^{d/2}}dx=:\mathsf{c}_{1,\zeta}, \tag{3.46}\] where \(\mathsf{c}_{1}=\mathsf{c}_{1,\zeta}=2(2\pi)^{-d/2}\,\int_{0}^{\infty}r^{d-\zeta-1}\,e^{-r^{2}/2}dr<\infty\) since \(\zeta<d\). Then multiplying \(J(\mathsf{w})\) by \(\mathsf{u}^{-\zeta}\) and using the upper bound (3.46), we get \[I_{n,\zeta}(s,t,\omega_{2})\leq\mathsf{c}_{1}\,\mathsf{u}^{-\zeta}=\mathsf{c}_{1}\,[K_{n}(s,t,\omega_{2})]^{\zeta}. \tag{3.47}\] This gives the desired inequality in the case (iii). On the other hand, when \(0<\mathsf{u}<\|\mathsf{v}(\omega_{2})\|\) we upper bound the integral \(J(\mathsf{w})\) as follows \[J(\mathsf{w}) =(2\pi)^{-d/2}\,\left(\int_{\|x+\mathsf{w}\|\geq\|\mathsf{w}\|/2}\frac{1}{\|\,x+\mathsf{w}\|^{\zeta}}e^{-\frac{\|x\|^{2}}{2}}dx+\int_{\|x+\mathsf{w}\|<\|\mathsf{w}\|/2}\frac{1}{\|\,x+\mathsf{w}\|^{\zeta}}e^{-\frac{\|x\|^{2}}{2}}dx\right) \tag{3.48}\] \[\leq(2\pi)^{-d/2}\,\left(\|\mathsf{w}\|^{-\zeta}\int_{\mathbb{R}^{d}}e^{-\|x\|^{2}/2}dx+e^{-\|\mathsf{w}\|^{2}/8}\int_{\|x+\mathsf{w}\|<\|\mathsf{w}\|/2}\frac{dx}{\|x+\mathsf{w}\|^{\zeta}}\right)\] \[\leq\mathsf{c}_{2}\,\left(\|\mathsf{w}\|^{-\zeta}+e^{-\|\mathsf{w}\|^{2}/8}\,\|\mathsf{w}\|^{d-\zeta}\right)\] \[\leq\mathsf{c}_{3}\,\|\mathsf{w}\|^{-\zeta},\] where, in the first inequality, the bound of the second term follows from the fact that \(\|x\|\geq\|\mathsf{w}\|/2\), and the second and third inequalities follow from passing to polar coordinates and using the facts that \(\zeta<d\) and that \(\sup\limits_{r\geq 0}r^{d}\,e^{-r^{2}/2}<\infty\). Thus multiplying \(J(\mathsf{w})\) by \(\mathsf{u}^{-\zeta}\) and using the upper bound (3.48) we obtain \[I_{n,\zeta}(s,t,\omega_{2})\leq\mathsf{c}_{3}\,\|\mathsf{v}(\omega_{2})\|^{-\zeta}=\mathsf{c}_{3}\,[K_{n}(s,t,\omega_{2})]^{\zeta}, \tag{3.49}\] which finishes the proof in the case \((iv)\). Now using (3.39), (3.41) and (3.42) we obtain that \(\mathbb{E}_{\mathbb{Q}_{1,n}}\left(\mathcal{E}_{euc,\zeta}\left(\mu_{\cdot,\omega_{2}}\right)\right)<\infty\). Therefore \(\mathcal{E}_{euc,\zeta}\left(\mu_{\omega_{1},\omega_{2}}\right)<\infty\) for \(\mathbb{Q}_{1,n}\)-almost all \(\omega_{1}\in\Omega_{1,n}\), which implies that \(\dim_{\rm euc}X(E,(\omega_{1},\omega_{2}))\geq\zeta\) for \(\mathbb{Q}_{1,n}\)-almost all \(\omega_{1}\in\Omega_{1,n}\) and for all \(\zeta<d\wedge\zeta_{n}(E)(\omega_{2})\).
Hence by letting \(\zeta\uparrow d\wedge\zeta_{n}(E)(\omega_{2})\) we get that \[\dim_{\rm euc}X(E,(\omega_{1},\omega_{2}))\geq d\wedge\zeta_{n}(E)(\omega_{2}) \quad\text{for $\mathbb{Q}_{1,n}$-almost all $\omega_{1}\in\Omega_{1,n}$}. \tag{3.50}\] Accordingly, since \(\omega_{2}\in\Omega_{n,\infty}\) is arbitrarily chosen, then using Fubini's theorem and (3.50) we obtain that \[\begin{split}\mathbb{P}\left[\dim_{\rm euc}X(E)&\geq d \wedge\zeta_{n}(E)\right]\\ &=\mathbb{Q}_{1,n}\otimes\mathbb{Q}_{n,\infty}\left\{(\omega_{1}, \omega_{2})\,:\,\dim_{\rm euc}X(E,(\omega_{1},\omega_{2}))\geq d\wedge\zeta_{ n}(E)(\omega_{2})\right\}\\ &=\int_{\Omega_{n,\infty}}\mathbb{Q}_{1,n}\left[\omega_{1}\in \Omega_{1,n}\,:\,\dim_{\rm euc}X(E,(\omega_{1},\omega_{2}))\geq d\wedge\zeta_{ n}(E)(\omega_{2})\right]\mathbb{Q}_{n,\infty}(d\omega_{2})\\ &=1.\end{split} \tag{3.51}\] Hence the proof of (3.35) is complete. Now, since \(\zeta_{n}(E)\wedge d\) does not depend on \(n\), and since for all \(n\geq 1\) we have \(\zeta_{n}(E)\) measurable with respect to \(\sigma\left(\left\{\xi_{i}:i\geq n+1\right\}\right)\), then \(\dim_{\mathrm{euc}}X(E)\) is measurable with respect to the tail sigma-field of \((\xi_{i})_{i\geq 1}\) and hence by the 0-1 law of Kolmogorov, it is constant almost surely, which finishes the proof. **Remark 3.13**.: The previous theorems 3.6 and 3.9 only use condition (3.23) which is sufficient for the mere existence of a continuous modification for \(X\). Moreover, it was shown in Theorem 3.2 under Condition \((\mathbf{C_{0+}})\) that the constant \(\mathbf{c}(E)\) and \(\mathbf{C}(E)\) are nothing but \(\dim_{\delta}(E)\wedge d\) and \(\dim_{\delta}(E)\), respectively. But even if Condition \((\mathbf{C_{0+}})\) fails, Theorems 3.6 and 3.9 show that the Hausdorff dimension of the image and graph are almost surely constants, and this is valid for the entire class of continuous Gaussian processes, including logBm and other extremely irregular continuous processes. ## 4 Criteria on hitting probabilities In this section we develop criteria for hitting probabilities of a Gaussian process \(X\) where, as before, its canonical metric \(\delta\) satisfies the commensurability condition \((\mathbf{\Gamma})\). The concavity Hypothesis 2.2 for the standard deviation function \(\gamma\) will also be generically required. We also assume that \(\gamma\) satisfies Condition \((\mathbf{C_{0}})\), or merely \((\mathbf{C}_{\varepsilon})\). Under these mild conditions, we will establish lower bounds for the probability that \(X\) will hit a set \(F\) from a set \(E\), namely \(\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}\), in terms of capacities of \(E\times F\), and upper bounds on that hitting probability in terms of Hausdorff measures of \(E\times F\). Our conditions are general enough to apply to large classes of Gaussian processes within and beyond the Holder scale. In the first subsection below, we present the main results of this section, which provide estimates under both Conditions \((\mathbf{C_{0}})\) and \((\mathbf{C}_{\varepsilon})\) for fixed \(\varepsilon\in(0,1)\). These results suggest that a critical dimension can be identified under \((\mathbf{C_{0+}})\), i.e. for those processes which satisfy \((\mathbf{C}_{\varepsilon})\) for every \(\varepsilon\). This is the topic of the second subsection, wherein we show that in the critical dimension case, under \((\mathbf{C_{0+}})\), the hitting probability's positivity cannot be decided merely based on dimensions. 
In the third subsection, we investigate the so-called co-dimension of the image set \(X(E)\), and we show in particular that it has an explicit expression under a mild regularity condition on the set \(E\). ### General hitting probability estimates Recall the metric \(\rho_{\delta}\) on the product space, defined in (2.37). Our general result is the following. **Theorem 4.1**.: _Let \(X\) be a \(d\)-dimensional Gaussian process with i.i.d. components satisfying the commensurability condition \((\mathbf{\Gamma})\). Let \(0<a<b<\infty\) and \(M>0\), and let \(E\subset[a,b]\) and \(F\subset[-M,M]^{d}\) be two Borel sets. With the notation and conditions in Section 2, the following holds._ * _If Hypothesis_ 2.2 _is satisfied, then there exists a constant_ \(\mathsf{c}_{1}>0\) _depending only on_ \(a,b,M\) _and the law of_ \(X\)_, such that_ \[\mathsf{c}_{1}\,\mathcal{C}_{\rho_{\delta}}^{d}(E\times F)\leq\mathbb{P}\left\{ X(E)\cap F\neq\emptyset\right\}.\] (4.1) _._ * _If Condition_ \((\mathbf{C_{0}})\) _is satisfied, then there exists a constant_ \(\mathsf{c}_{2}>0\) _also depending only on_ \(a,b,M\)_, and the law of_ \(X\)_, such that_ \[\mathbb{P}\left\{X(E)\cap F\neq\emptyset\right\}\leq\mathsf{c}_{2}\,\mathcal{H }_{\rho_{\delta}}^{d}\left(E\times F\right).\] (4.2) * _If Condition_ \((\mathbf{C}_{\varepsilon})\) _is satisfied for some_ \(\varepsilon\in(0,1)\)_, then there exists a constant_ \(\mathsf{c}_{\varepsilon,3}>0\) _depending on_ \(a,b,M,\varepsilon\)_, and the law of_ \(X\)_, such that_ \[\mathbb{P}\left\{X(E)\cap F\neq\emptyset\right\}\leq\mathsf{c}_{3,\varepsilon} \,\mathcal{H}_{\rho_{\delta}}^{d(1-\varepsilon)}(E\times F).\] (4.3) Proof.: We begin by proving the lower bound in (4.1). Assume that \(\mathcal{C}_{\rho_{\delta}}^{d}(E\times F)>0\) otherwise there is nothing to prove. This implies the existence of a probability measure \(\mu\in\mathcal{P}(E\times F)\) such that \[\mathcal{E}_{\rho_{\delta},d}(\mu):=\int_{\mathbb{R}_{+}\times\mathbb{R}^{d}} \int_{\mathbb{R}_{+}\times\mathbb{R}^{d}}\frac{\mu(du)\mu(dv)}{(\rho_{\delta}( u,v))^{d}}\leq\frac{2}{\mathcal{C}_{\rho_{\delta}}^{d}(E\times F)}. \tag{4.4}\] Consider the sequence of random measures \((m_{n})_{n\geq 1}\) on \(E\times F\) defined as \[m_{n}(dtdx) =(2\pi n)^{d/2}\exp\left(-\frac{n\|X(t)-x\|^{2}}{2}\right)\mu(dtdx)\] \[=\int_{\mathbb{R}^{d}}\exp\left(-\frac{\|\xi\|^{2}}{2n}+i\langle \xi,X(t)-x\rangle\right)d\xi\,\mu(dtdx).\] Denote the total mass of \(m_{n}\) by \(\|m_{n}\|=m_{n}(E\times F)\). Let us first verify the following claim on the moments of \(\|m_{n}\|\): \[\mathbb{E}\left(\|m_{n}\|\right)\geq\mathsf{c}_{1},\quad\text{ and }\quad \mathbb{E}\left(\|m_{n}\|^{2}\right)\leq\mathsf{c}_{2}\mathcal{E}_{\rho_{ \delta},d}(\mu), \tag{4.5}\] where the constants \(\mathsf{c}_{1}\) and \(\mathsf{c}_{2}\) are independent of \(n\) and \(\mu\). First, we have \[\mathbb{E}\left(\|m_{n}\|\right) =\int_{E\times F}\int_{\mathbb{R}^{d}}\exp\left(-\frac{\|\xi\|^{ 2}}{2}\left(\frac{1}{n}+\gamma^{2}(t)\right)-i\langle\xi,x\rangle\right)d\xi \mu(dtdx)\] \[\geq\int_{E\times F}\frac{(2\pi)^{d/2}}{\left(1+\gamma^{2}(t) \right)^{d/2}}\exp\left(-\frac{\|x\|^{2}}{2\gamma^{2}(t)}\right)\mu(dtdx) \tag{4.6}\] \[\geq\frac{(2\pi)^{d/2}}{(1+\gamma^{2}(b)^{d/2}}\exp\left(-\frac{ dM^{2}}{2\gamma^{2}(a)}\right)\int_{E\times F}\mu(dtdx)=:\mathsf{c}_{1},\] This proves the first inequality in (4.5). 
We have also \[\mathbb{E}\left(\|m_{n}\|^{2}\right)=\int_{(E\times F)^{2}}\int_{\mathbb{R}^ {2d}}\!\!e^{-i\left(\langle\xi,x\rangle+\langle\eta,y\rangle\right)}\,\times \exp\left(-\frac{1}{2}(\xi,\eta)\Gamma_{n}(t,s)(\xi,\eta)^{T}\right)d\xi\,d \eta\,\mu(dtdx)\mu(dsdy), \tag{4.7}\] where \(\Gamma_{n}(t,s)=(n^{-1}I_{2d}+\operatorname{Cov}(X(s),X(t)))\), where \(I_{2d}\) denotes the \(2d\times 2d\) identity matrix, and where \(\operatorname{Cov}(X(s),X(t))\) is the \(2d\)-covariance matrix of \((X(s),X(t))\). Now let \(\varepsilon>0\) so that (2.4) is satisfied for all \(s,t\in[a,b]\) such that \(|t-s|<\varepsilon\). Using the same lines as Step 1 and Step 2 of the proof of Theorem 2.5 in [21] we obtain that \[\mathbb{E}\left(\|m_{n}\|^{2}\right)\leq J_{1}+J_{2},\] where \[J_{1} :=\int_{(E\times F)^{2}\cap D(\varepsilon)}\frac{(2\pi)^{d}}{ \left(\sqrt{\det\left(\Phi_{n}(s,t)\right)}\right)^{d}}\exp\left(-\frac{c_{2}} {2}\frac{\|x-y\|^{2}}{\det\left(\Phi_{n}(s,t)\right)}\right)\mu(dtdx)\mu(dsdy)\] \[J_{2} :=\int_{(E\times F)^{2}\setminus D(\varepsilon)}\frac{(2\pi)^{d} }{\left(\sqrt{\det\left(\Phi_{n}(s,t)\right)}\right)^{d}}\,\mu(dtdx)\mu(dsdy),\] where \(D(\varepsilon):=\{((t,x),(s,y)):|t-s|<\varepsilon\}\) and \(\Phi_{n}(s,t):=n^{-1}I_{2}+\mathrm{Cov}(X_{0}(s),X_{0}(t))\). First we bound \(J_{2}\). Observe that \[\det\left(\Phi_{n}(s,t)\right)\geq\mathbb{E}(X_{0}^{2}(s))\mathbb{E}(X_{0}^{ 2}(t))-\left(\mathbb{E}X_{0}(t)X_{0}(s)\right)^{2}=:h(s,t). \tag{4.8}\] By the Cauchy-Schwartz inequality, the function \((s,t)\mapsto h(s,t)\) is nonnegative, and since \(\gamma(r)=0\Leftrightarrow r=0\), this function is strictly positive and continuous away from the diagonal \(\{s=t\}\). Therefore, for all \(s,t\in[a,b]\) with \(|t-s|>\varepsilon\), \(\det\left(\Phi_{n}(s,t)\right)\geq\mathsf{c}_{3}\), where \(\mathsf{c}_{3}\) is a positive constant depending on \([a,b]\). Hence \[J_{2} \leq(2\pi/\mathsf{c}_{3}^{1/2})^{d}\,\int_{(E\times F)^{2} \setminus D(\varepsilon)}\mu(dtdx)\mu(dsdy)\] \[\leq(2\pi/\mathsf{c}_{3}^{1/2})^{d}\,\sup_{(u,v)\in(E\times F)^{ 2}}\left(\rho_{\delta}\left(u,v\right)\right)^{d}\int_{(E\times F)^{2}}\frac{ \mu(du)\mu(dv)}{\rho_{\delta}\left((t,x),(s,y)\right)^{d}}=\mathsf{c}_{4}\, \mathcal{E}_{\rho_{\delta},d}(\mu). \tag{4.9}\] Let us now bound \(J_{1}\). If \(((t,x),(s,y))\in D(\varepsilon)\) then (4.8) and Lemma 2.4 ensures that for some constant \(\mathsf{c}_{5}>0\) \[\det\left(\Phi_{n}(s,t)\right)\geq\mathsf{c}_{5}\,\gamma^{2}(a)\,\delta^{2}(s,t).\] Observe that if \(\det\left(\Phi_{n}(s,t)\right)<\|x-y\|^{2}\), using the fact that \(\sup_{x\in\mathbb{R}}x^{d/2}e^{-c\,x}<\infty\), then \[\frac{(2\pi)^{d}}{\left(\det\left(\Phi_{n}(s,t)\right)\right)^{d/2}}\exp\left( -\frac{\mathsf{c}_{3}}{2}\frac{\|x-y\|^{2}}{\det\left(\Phi_{n}(s,t)\right)} \right)\leq\frac{\mathsf{c}_{6}}{\|x-y\|^{d}}.\] On the other hand, when \(\det\left(\Phi_{n}(s,t)\right)\geq\|x-y\|^{2}\) we get \[\frac{(2\pi)^{d}}{\left(\det\left(\Phi_{n}(s,t)\right)\right)^{d/2}}\exp\left( -\frac{\mathsf{c}_{3}}{2}\frac{\|x-y\|^{2}}{\det\left(\Phi_{n}(s,t)\right)} \right)\leq\frac{(2\pi)^{d}}{\mathsf{c}_{5}^{d/2}\,\gamma^{d}(a)\delta(s,t)^{ d}}.\] Therefore we conclude that \[J_{1}\leq\mathsf{c}_{7}\,\int_{(E\times F)^{2}}\frac{\mu(dtdx)\mu(dsdy)}{ \left(\max\{\delta(s,t),\|x-y\|\}\right)^{d}}=\mathsf{c}_{7}\,\mathcal{E}_{ \rho_{\delta},d}(\mu), \tag{4.10}\] for some constant \(\mathsf{c}_{7}\). The proof of our moment estimates in claim (4.5) is complete. 
Now, using these moment estimates in (4.5) and the Paley-Zygmund inequality (cf. Kahane [12], p. 8), one can check that \(\{m_{n},n\geq 1\}\) has a subsequence that converges weakly to a finite random measure \(m_{\infty}\) supported on the set \(\{(s,x)\in E\times F:X(s)=x\}\), which is positive on an event of positive probability and also satisfies the moment estimates of (4.5). Therefore, using again the Paley-Zygmund inequality, we conclude that \[\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}\geq\mathbb{P}\left\{\|m_{\infty}\|>0\right\}\geq\frac{\mathbb{E}(\|m_{\infty}\|)^{2}}{\mathbb{E}\left(\|m_{\infty}\|^{2}\right)}\geq\frac{\mathsf{c}_{1}^{2}}{\mathsf{c}_{2}\,\mathcal{E}_{\rho_{\delta},d}(\mu)}.\] By definition of capacity, this finishes the proof of (4.1). For the upper bound in (4.2), we use a simple covering argument. We choose an arbitrary constant \(\zeta>\mathcal{H}_{\rho_{\delta}}^{d}(E\times F)\). Then there is a covering of \(E\times F\) by balls \(\{B_{\rho_{\delta}}((t_{i},x_{i}),r_{i}),i\geq 1\}\) in \(\left(\mathbb{R}_{+}\times\mathbb{R}^{d},\rho_{\delta}\right)\) with small radii \(r_{i}\), such that \[E\times F\subseteq\bigcup_{i=1}^{\infty}B_{\rho_{\delta}}((t_{i},x_{i}),r_{i})\quad\text{with}\quad\sum_{i=1}^{\infty}(2r_{i})^{d}\leq\zeta. \tag{4.11}\] It follows that \[\{X(E)\cap F\neq\emptyset\} =\bigcup_{i=1}^{\infty}\left\{\,X\left(B_{\delta}(t_{i},r_{i})\right)\cap B(x_{i},r_{i})\neq\varnothing\right\}\] \[\subseteq\bigcup_{i=1}^{\infty}\left\{\inf_{t\in B_{\delta}(t_{i},r_{i})}\|X(t)-x_{i}\|\leqslant r_{i}\right\}. \tag{4.12}\] Since Condition (2.14) is satisfied, using Corollary 2.6 and (4.12) we obtain \[\mathbb{P}\left\{X(E)\cap F\neq\emptyset\right\} \leq\sum_{i=1}^{\infty}\mathbb{P}\left\{\inf_{t\in B_{\delta}(t_{i},r_{i})}\|X(t)-x_{i}\|\leqslant r_{i}\right\}\] \[\leq\mathsf{c}_{8}\,\sum_{i=1}^{\infty}(2r_{i})^{d}\leq\mathsf{c}_{8}\,\zeta. \tag{4.13}\] Letting \(\zeta\downarrow\mathcal{H}_{\rho_{\delta}}^{d}(E\times F)\), the upper bound in (4.2) follows. For the upper bound in (4.3), first note that condition (2.25) ensures that \[\mathbb{P}\left\{\inf_{t\in B_{\delta}(t,r)}\|X(t)-x\|\leqslant r\right\}\leq\mathsf{c}_{9}\,r^{d(1-\varepsilon)}\quad\text{ for all }0<r<r_{0}\text{ and }x\in[-M,M]^{d} \tag{4.14}\] where \(r_{0}\) and \(\mathsf{c}_{9}\) are two positive constants. Hence the proof of (4.3) follows from the same argument as in (4.12), (4.11) and (4.13), and by using (4.14) instead of Corollary 2.6. The following corollary suggests that \(\dim_{\rho_{\delta}}(E\times F)=d\) is a critical dimension for computing hitting probabilities. **Corollary 4.2**.: _Let \(E,F\) be two bounded Borel sets in \(\mathbb{R}_{+}\) and \(\mathbb{R}^{d}\) respectively. Under Hypothesis 2.2 and Condition \((\mathbf{C_{0+}})\) we have_ \[\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}\left\{\begin{array}{ll}>0&\text{ if }\dim_{\rho_{\delta}}(E\times F)>d\\ =0&\text{ if }\dim_{\rho_{\delta}}(E\times F)<d\end{array}\right.. \tag{4.15}\] We explore this criticality in the next subsection, using general sets \(E\) and processes \(X\).
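For instance, in the classical Hölder-scale case where \(X\) is a \(d\)-dimensional fractional Brownian motion with Hurst index \(H\in(0,1)\), the function \(\gamma(r)=r^{H}\) is concave and satisfies \((\mathbf{C_{0+}})\), and an interval \(E=[a,b]\) is \(1/H\)-Ahlfors-David regular in the metric \(\delta\); Lemma 4.12 below then gives \(\dim_{\rho_{\delta}}([a,b]\times F)=1/H+\dim_{\rm euc}(F)\), so that Corollary 4.2 specializes to the well-known criterion \[\mathbb{P}\left\{X([a,b])\cap F\neq\varnothing\right\}\left\{\begin{array}{ll}>0&\text{ if }\dim_{\rm euc}(F)>d-1/H\\ =0&\text{ if }\dim_{\rm euc}(F)<d-1/H\end{array}\right..\]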
### Hitting probabilities: undecidability in the critical dimension case We now show that the critical dimension case, \(\dim_{\rho_{\delta}}(E\times F)=d\), is undecidable, for a large class of functions \(\gamma\) satisfying \((\mathbf{C}_{0+})\), in the following sense: there exist compact sets \(E_{1},E_{2}\subset[0,1]\) and \(F_{1},F_{2}\subset[-M,M]^{d}\) such that \(\dim_{\rho_{\delta}}(E_{1}\times F_{1})=\dim_{\rho_{\delta}}(E_{2}\times F_{2})=d\) and \[\mathbb{P}\left\{X(E_{1})\cap F_{1}\neq\varnothing\right\}>0\quad\text{ and }\quad\mathbb{P}\left\{X(E_{2})\cap F_{2}\neq\varnothing\right\}=0. \tag{4.16}\] We start by providing lower and upper bounds on \(\mathbb{P}\left\{X(E)\cap F\neq\emptyset\right\}\) when \(E\) is Ahlfors-David regular in the metric \(\delta\). This will be the key to prove (4.16). First, we recall the definition of an Ahlfors-David regular set. **Definition 4.3**.: Let \((X,\rho)\) be a bounded metric space, let \(\alpha>0\), and let \(G\subset X\). We say that \(G\) is \(\alpha\)-Ahlfors-David regular if there exists a Borel probability measure \(\mu\) on \(G\) and a positive constant \(\mathsf{c}_{0}\) such that \[\mathsf{c}_{0}^{-1}\,r^{\alpha}\leq\mu\left(B_{\rho}\left(a,r\right)\right)\leq\mathsf{c}_{0}\,r^{\alpha}\ \text{ for all }a\in G,\text{ and all }\ \ 0<r\leq 1. \tag{4.17}\] To best represent the delicate size of our hitting probabilities of interest, we find it necessary to introduce a finer concept of regularity for our standard deviation function \(\gamma\), using slowly-varying modulation. Let \(\ell:(0,\infty)\to\mathbb{R}_{+}\) be a slowly varying function at \(0\), such that \(\lim_{y\to 0}\ell(y)=c\in(0,+\infty]\). We denote by \((\mathbf{C}_{\ell})\) the following condition. \((\mathbf{C}_{\ell})\): There exist two constants \(\mathsf{c}_{1}>0\) and \(x_{0}\in(0,1)\) such that \[\int_{0}^{1/2}\gamma(xy)\frac{dy}{y\sqrt{\log(1/y)}}\leq\mathsf{c}_{1}\,\gamma(x)\ell\left(\gamma(x)\right)\quad\text{ for all }x\in[0,x_{0}]. \tag{4.18}\] **Remark 4.4**.: * This condition \((\mathbf{C}_{\ell})\) is slightly stronger than \((\mathbf{C}_{0+})\), and weaker than \((\mathbf{C}_{0})\) when \(\lim_{y\to 0}\ell(y)=+\infty\). Moreover it is satisfied by a large class of functions \(\gamma\) of zero index that are of interest to us, including the example \(\gamma(x)=\exp\left(-\log^{q}(1/x)\right)\) with \(q\in(0,1)\). * When \(\lim_{y\to 0}\ell(y)<+\infty\), the conditions \((\mathbf{C}_{0})\) and \((\mathbf{C}_{\ell})\) are equivalent. * The case of \(\lim_{y\to 0}\ell(y)=0\) does not occur. Indeed, one can show that, up to a multiplicative constant, \(\gamma(x)\) is a lower bound of the integral in Condition \((\mathbf{C}_{\ell})\). This modulated condition \((\mathbf{C}_{\ell})\) is naturally accompanied by the more general notion of Hausdorff measure with a gauge function other than the power function, which we will also need. For a metric space \((X,\rho)\), a function \(\varphi:\mathbb{R}_{+}\to\mathbb{R}_{+}\) that is right-continuous and increasing near zero with \(\lim_{0+}\varphi=0\), and a Borel set \(G\subseteq X\), the \(\varphi\)-Hausdorff measure of \(G\) in the metric \(\rho\) is defined by \[\mathcal{H}_{\rho}^{\varphi}(G)=\lim_{\eta\to 0}\inf\left\{\sum_{n=1}^{\infty}\varphi\left(2r_{n}\right):G\subseteq\bigcup_{n=1}^{\infty}B_{\rho}\left(r_{n}\right),\ r_{n}\leqslant\eta\right\}.
\tag{4.19}\] The same reasoning as in the proof of Theorem 4.1 leads to an upper bound more accurate than (4.3), under the condition \((\mathbf{C}_{\ell})\). The proof of the following theorem is thus left to the interested reader. **Theorem 4.5**.: _Let \(0<a<b<\infty\) and \(M>0\), and let \(E\subset[a,b]\) and \(F\subset[-M,M]^{d}\) be two Borel sets. If \(\gamma\) satisfies the hypothesis 2.2 and the condition \((\mathbf{C}_{\ell})\), then_ \[\mathsf{c}_{2}^{-1}\,\mathcal{C}_{\rho_{\delta}}^{d}(E\times F)\leq\mathbb{P} \left\{X(E)\cap F\neq\emptyset\right\}\leq\mathsf{c}_{2}\,\mathcal{H}_{\rho_{ \delta}}^{\varphi_{d}}(E\times F), \tag{4.20}\] _where \(\varphi_{d}(x):=x^{d}\,\ell^{d}(x)\)._ If \(E\) is an \(\alpha\)-Ahlfors-David regular set in the metric \(\delta\), the hitting probability estimates (4.20) take a more specific form. Namely the lower and upper bounds are given, respectively, in terms of the Bessel-Riesz capacity of \(F\) and the Hausdorff measure of \(F\) in the Euclidean metric, the latter still being relative to the \(\ell\)-modulated power function. However, when \(\alpha\) reaches the critical dimension \(d\), the capacity lower bound requires the use of a logarithmic metric. To be specific, we have the following proposition, whose proof, based on the previous theorem, requires a bit of care, and is therefore included below. **Proposition 4.6**.: _Let \(X\) be a \(d\)-dimensional Gaussian process such that its standard deviation function \(\gamma\) satisfies Condition \((\mathbf{\Gamma})\), Hypothesis 2.2 and Condition \((\mathbf{C}_{\ell})\). Let \(0<a<b<\infty\) and \(M>0\). Also let \(E\subset[a,b]\) be a \(\alpha\)-Ahlfors-David regular set in the metric \(\delta\) for some \(0<\alpha\leq d\). Then for all \(0<M\leq 1\) and \(F\subset[-M,M]^{d}\) the following two alternatives hold, depending on whether \(\alpha\) equals the critical dimension \(d\)._ 1. _If_ \(\alpha<d\) _and_ \(\gamma\) _satisfies Condition_ \((\mathbf{C_{0}})\) _then_ \[\mathsf{c}_{3}^{-1}\,\mathcal{C}_{\mathrm{euc}}^{d-\alpha}\,(F)\leq\mathbb{P} \left\{X(E)\cap F\neq\emptyset\right\}\leq\,\mathsf{c}_{3}\,\mathcal{H}_{ \mathrm{euc}}^{d-\alpha}(F),\] (4.21) 2. _If_ \(\alpha<d\) _and_ \(\gamma\) _satisfies Condition_ \((\mathbf{C}_{\ell})\) _for some_ \(\ell\) _given such that_ \(\lim_{y\to 0}\ell(y)=+\infty\)_, then we have_ \[\mathsf{c}_{3}^{-1}\,\mathcal{C}_{\mathrm{euc}}^{d-\alpha}\,(F)\leq\mathbb{P} \left\{X(E)\cap F\neq\emptyset\right\}\leq\,\mathsf{c}_{3}\,\mathcal{H}_{ \mathrm{euc}}^{\varphi_{d-\alpha}}(F),\] (4.22) _where_ \(\varphi_{d-\alpha}(x):=x^{d-\alpha}\,\ell^{d}(x)\) _and_ \(\mathsf{c}_{3}\) _is a positive constant depends on_ \(a,\,b,\,M\) _and_ \(\alpha\) _only._ 3. _If_ \(\alpha=d\) _then_ \[\mathsf{c}_{4}\,\mathcal{C}_{\delta_{\log}}^{1}\,(F)\leq\mathbb{P}\left\{X(E) \cap F\neq\emptyset\right\}\] (4.23) _where the metric_ \(\delta_{\log}(\cdot)\) _is defined on_ \([-M,M]^{d}\) _by_ \(\,\delta_{\log}(x,y):=-\log^{-1}(\|x-y\|)\)_._ **Remark 4.7**.: In the case \(\alpha=d\), the upper bound in terms of the Hausdorff measure, under either Condition \((\mathbf{C}_{0})\) or Condition \((\mathbf{C}_{\ell})\) with \(\lim_{y\to 0}\ell(y)=+\infty\), is not informative. Indeed, under \((\mathbf{C}_{0})\) the Hausdorff measure is a discrete measure, implying that the upper bound is typically too large to be informative, and under Condition \((\mathbf{C}_{\ell})\) the Hausdorff measure is infinite for any nonempty set \(F\). 
Proof.: Using the bounds in (4.20), to prove (i) it will be sufficient to show that \[\mathsf{c}_{5}^{-1}\mathcal{C}_{\mathrm{euc}}^{d-\alpha}(F)\leq\mathcal{C}_{ \rho_{\delta}}^{d}(E\times F)\quad\text{ and }\quad\mathcal{H}_{\rho_{\delta}}^{\varphi_{d}}(E\times F)\leq\mathsf{c}_{5} \,\mathcal{H}_{\mathrm{euc}}^{\varphi_{d-\alpha}}(F), \tag{4.24}\] respectively. Indeed for the capacities inequality, since \(E\) is \(\alpha\)-Ahlfors-David regular in the metric \(\delta\), then by using [6, Proposition 2.5], with \(G_{1}=E\), \(G_{2}=F\), \(\rho_{1}=\delta\), \(\rho_{2}=\|\cdot\|\) and \(\rho_{3}=\rho_{\delta}\), we get the desired inequality. On the other hand, for the Hausdorff measures inequality, we follow the same reasoning of [6, Proposition 2.1]. Assume that \(\mathcal{H}_{\mathrm{euc}}^{\varphi_{d-\alpha}}(F)<\infty\) otherwise there is nothing to prove. Let \(\zeta>\mathcal{H}^{\varphi_{d-\alpha}}_{\mathrm{euc}}(F)\) be arbitrary. Then there is a covering of \(F\) by open balls \(B_{\mathrm{euc}}(x_{n},r_{n})\) such that \[F\subset\bigcup_{n=1}^{\infty}B_{\mathrm{euc}}(x_{n},r_{n})\quad\text{ and }\quad\sum_{n=1}^{\infty}(2r_{n})^{d-\alpha}\,\ell^{d}(2r_{n})\leq\zeta. \tag{4.25}\] Let \(\mathrm{N}_{\delta}(E,r)\) be the smallest number of balls in the metric \(\delta\) of radius \(r\) by which we can cover \(E\). For all \(n\geq 1\), let \(B_{\delta}(t_{n,j},r_{n})\), \(j=1,...,\mathrm{N}_{\delta}(E,r_{n})\) be the family of balls covering \(E\). It follows that the family \(B_{\delta}(t_{n,j},r_{n})\times B_{\mathrm{euc}}(x_{n},r_{n})\), \(j=1,...,\mathrm{N}_{\delta}(E,r_{n})\), \(n\geq 1\) covers \(E\times F\). Let \(P_{\delta}(E,r)\) be the greatest number of disjoint balls \(B_{\delta}(x_{j},r)\) of radius \(r>0\) and centers \(x_{j}\in F\). The left inequality of (4.17) ensures that \[\mathsf{c}_{0}^{-1}\,P_{\delta}(E,r)\,r^{\alpha}\leq\sum_{j=1}^{P_{\delta}(E, r)}\,\mu\,(B_{\delta}(t_{j},r))=\mu(G_{1})\leq 1\quad\text{for all }r\in(0,1]. \tag{4.26}\] Using the well known fact that \[\mathrm{N}_{\delta}(E,2\,r)\leq P_{\delta}(E,r), \tag{4.27}\] we obtain that \(\mathrm{N}_{\delta}(E,r)\leq 2^{\alpha}\mathsf{c}_{0}\,r^{-\alpha}\) for all \(r\in(0,1]\). Hence combining this with (4.25) we obtain that \[\mathcal{H}^{\varphi_{d}}_{\rho_{\delta}}(E\times F)\leq\sum_{n=1}^{\infty} \sum_{j=1}^{\mathrm{N}_{\delta}(E,r_{n})}(2r_{n})^{d}\,\,\ell^{d}(2r_{n})\leq 2 ^{2\alpha}\,\mathsf{c}_{0}\,\sum_{n=1}^{\infty}(2r_{n})^{d-\alpha}\,\ell^{d}(2r _{n})\leq 2^{2\alpha}\,\mathsf{c}_{0}\,\zeta. \tag{4.28}\] Letting \(\zeta\downarrow\mathcal{H}^{\varphi_{d-\alpha}}_{\mathrm{euc}}(F)\), the desired inequality follows immediately. The result in (ii) is a consequence of [6, Proposition 2.5], we only need to mention that the capacity term \(\mathcal{C}^{1}_{\delta_{\log}}(\cdot)\) considered in (4.23) is equivalent to the capacity term \(\mathcal{C}^{0}_{\mathrm{euc}}(\cdot)\) considered in [6]. Hence the proof is complete. The next proposition states our undecidability claim with precise assumptions. In particular, any \(\alpha\)-Ahlfors-David-regular compact set \(E\) in \(X\)'s metric leads to the construction of sets in the target space where one cannot decide whether they are reachable from \(E\) based solely on their dimensions. **Proposition 4.8**.: _Let \(X\), \(a\), \(b\) and \(M\) be as in Proposition 4.6. Let \(E\subset[0,1]\) be a \(\alpha\)-Ahlfors-David regular compact set in the metric \(\delta\) with \(\alpha\in(0,d)\). 
Then there exist two compact sets \(F_{1},F_{2}\subset[-M,M]^{d}\) such that \(\dim_{\rho_{\delta}}(E\times F_{1})=\dim_{\rho_{\delta}}(E\times F_{2})=d\) and that_ \[\mathbb{P}\left\{X(E)\cap F_{1}\neq\varnothing\right\}=0\quad\text{ and }\quad\mathbb{P}\left\{X(E)\cap F_{2}\neq\varnothing\right\}>0. \tag{4.29}\] **Remark 4.9**.: The previous proposition shows that we can construct image sets leading to undecidability for any compact \(\alpha\)-Ahlfors-regular set in the domain of \(X\) (relative to \(\delta\)), when \(\alpha\in(0,d)\). But we are also able to construct examples of undecidable image sets with \(\alpha=d\). Indeed, assume \(X\) is a fractional Brownian motion (fBm) with Hurst parameter \(H\), and assume \(Hd=1\). We show here that in the particular case where \(E:=I\) is an interval, the critical case \(\dim_{\delta}(E)=\frac{1}{H}=d\) is also undecidable. First note that it was proved in [5] for a fractional Gaussian random field \(X\) restricted on \(I_{1}\times\ldots\times I_{k}\), for some intervals \(I_{1},\ldots,I_{k}\), with Hurst parameter \((H_{1},...,H_{k})\), that \(X\) does not visit points in the critical dimension case \(d=Q\) where \(Q=H_{1}^{-1}+...+H_{k}^{-1}\). Since our domains are one-dimensional, we apply this to the case of fBm itself, i.e. \(k=1\). Let then \(X=B^{H}\) be a \(d\)-dimensional fBm with \(Hd=1\). Let \(F_{1}=\{x\}\) for some fixed point \(x\in\mathbb{R}^{d}\); then evidently \(\dim_{\text{euc}}(F_{1})=0\) and the aforementioned result [5] implies \(\mathbb{P}\left\{X(E)\cap F_{1}\neq\varnothing\right\}=0\). On the other hand, one can easily construct a Borel set \(F_{2}\subset[-1,1]^{d}\) such that its Euclidean dimension \(\dim_{\text{euc}}(F_{2})=0\) though it has positive logarithmic capacity \(\mathcal{C}^{1}_{\delta_{\text{ho}}}(F_{2})>0\). Then, by using (4.23), we obtain \(\mathbb{P}\left\{X(E)\cap F_{2}\neq\varnothing\right\}>0\). Moreover, the intervals are known to be \(1/H\)-Ahlfors-David regular in the metric \(\delta\); therefore Lemma 4.12 ensures that \(\dim_{\rho_{\delta}}(E\times F_{1})=\dim_{\rho_{\delta}}(E\times F_{2})=1/H=d\). This prove the aforementioned undecidability. Proof of Proposition 4.8.: First, it turns out that since \(E\) is \(\alpha\)-Ahlfors-David regular in the metric \(\delta\), we have the following convenient expression for the \(\rho_{\delta}\)-dimension of \(E\times F\): \[\dim_{\rho_{\delta}}(E\times F)=\dim_{\delta}(E)+\dim_{\text{euc}}(F).\] This formula is established in Lemma 4.12, which is stated and proved in the next subsection, though this analysis lemma's proof is self-contained and its result can thus be used here. Therefore, by Proposition 4.6, recalling the notation \(\varphi_{d-\alpha}\) introduced in Item (i) therein, to obtain (4.29), it is sufficient to find \(F_{1},F_{2}\subset[-M,M]^{d}\) such that \(\dim_{\text{euc}}(F_{1})=\dim_{\text{euc}}(F_{2})=d-\alpha\) and that \[\mathcal{H}^{\varphi_{d-\alpha}}_{\text{euc}}(F_{1})=0\quad\text{ and }\quad\mathcal{C}^{d-\alpha}_{\text{euc}}(F_{2})>0. \tag{4.30}\] To prove this, we claim that it is sufficient to show the following, which is established in the independent Lemma 4.10 immediately following the proof of this proposition. Let \(\theta>1\) be fixed. 
There exist two probability measures \(\mu_{1}\) and \(\mu_{2}\) supported by two different compact subsets \(F_{1}\) and \(F_{2}\) of \([-M,M]^{d}\), such that for some positive constants \(\mathsf{c}_{5}\) and \(\mathsf{c}_{6}\) we have \[\mathsf{c}_{5}^{-1}\,\varphi_{d-\alpha}(r)\,\log^{\theta}(e/r)\leq\mu_{1}\,( B_{\text{euc}}(x,r))\leq\mathsf{c}_{5}\,\varphi_{d-\alpha}(r)\,\log^{\theta}(e/r) \quad\text{for all $r\in(0,1)$, $x\in F_{1}$,} \tag{4.31}\] and \[\mathsf{c}_{6}^{-1}\,r^{d-\alpha}\,\log^{-\theta}(e/r)\leq\mu_{2}\,(B_{\text{ euc}}(x,r))\leq\mathsf{c}_{6}\,r^{d-\alpha}\,\log^{-\theta}(e/r)\quad\text{for all $r\in(0,1)$, $x\in F_{2}$.} \tag{4.32}\] We begin by proving our claim (4.30) for the compacts \(F_{1}\) and \(F_{2}\) mentioned above. For all \(r\in(0,1)\) and \(F\subseteq[-M,M]^{d}\) let \(\mathrm{N}_{\text{euc}}(F,r)\) be the minimal number of balls \(B_{\text{euc}}(x_{j},r)\) of radius \(r\) required to cover \(F\). By using the lower estimate in (4.31) and the same argument used in (4.26) and (4.27), in the Euclidean metric this time, we deduce that \[\mathrm{N}_{\text{euc}}(F_{1},r)\leq\mathsf{c}_{7}\,\left(\varphi_{d-\alpha}( r)\right)^{-1}\,\log^{-\theta}(e/r)\quad\text{ for all $r\in(0,1)$.} \tag{4.33}\] Furthermore, using the definition of the \(\varphi_{d-\alpha}\)-Hausdorff measure as well as (4.33) we infer that \[\mathcal{H}^{\varphi_{d-\alpha}}_{\text{euc}}(F_{1})\leq\mathsf{c}_{8}\,\limsup _{r\to 0}\varphi_{d-\alpha}(r)\,\,\mathrm{N}_{\text{euc}}(F_{1},r)=\limsup_{r \to 0}\,\log^{-\theta}(e/r)=0, \tag{4.34}\] where \(\mathsf{c}_{8}\) is a positive constant. This gives the first outcome of (4.30). Now, we show that \(\mathcal{C}_{d-\alpha}(F_{2})>0\), where by definition it is sufficient to prove that \(\mathcal{E}_{\text{euc},d-\alpha}(\mu_{2})<\infty\), with \(\mu_{2}\) being the measure identified in (4.32). First notice that the upper bound in (4.32) ensures that \(\mu_{2}\) has no atom. Then for all \(x\in F_{2}\) we have \[\int_{F_{2}}\frac{\mu_{2}(dy)}{\|x-y\|^{d-\alpha}}=\sum_{j=0}^{ \infty}\int_{\{y\,\colon\,\|x-y\|\in(\kappa 2^{-(j+1)},\kappa 2^{-j}]\}}\,\frac{\mu_{2}(dy)}{\|x-y\|^{d-\alpha}} \leq\sum_{j=0}^{\infty}\kappa^{-(d-\alpha)}2^{(d-\alpha)\,(j+1)} \mu_{2}\left(B_{\rm euc}(t,\kappa\,2^{-j})\right)\] \[\leq 2^{d-\alpha}\,{\sf c}_{9}\,\sum_{j=0}^{\infty}\log^{-\theta}( 2^{j}\,e/\kappa), \tag{4.35}\] where \(\kappa:={\rm diam}_{\rm euc}(F_{2})\) and \({\sf c}_{9}\) depends only on \(\theta\), \(\alpha\), \(\kappa\) and \(d\). The last sum is finite since \(\theta>1\), and does not depend on \(x\) so by integrating with respect to the probability measure \(\mu_{2}(dx)\) we get that \(\mathcal{E}_{\text{euc},d-\alpha}(\mu_{2})<\infty\), which proves the second outcome of (4.30). In remains to show that \({\rm dim}_{\rm euc}(F_{1})={\rm dim}_{\rm euc}(F_{2})=d-\alpha\). First, notice that the same reasoning as in (4.33) and (4.34) will ensures that \[\mathcal{H}^{\varphi_{1}}_{\rm euc}(F_{1})<\infty\quad\text{ and }\quad \mathcal{H}^{\varphi_{2}}_{\rm euc}(F_{2})<\infty, \tag{4.36}\] where \(\varphi_{1}(r):=\varphi_{d-\alpha}(r)\log^{\theta}(1/r)\) and \(\varphi_{2}(r):=r^{d-\alpha}\log^{-\theta}(1/r)\). Since \(\ell^{d}(\cdot)\,\log^{\theta}(1/\cdot)\) and \(\log^{-\theta}(1/\cdot)\) are slowly varying functions and \(\lim_{r\to 0}\ell(r)\in(0,+\infty]\), then \[r^{d-\alpha}=o(\varphi_{1}(r))\quad\text{ and }\quad r^{d-\alpha+\varepsilon}=o( \varphi_{2}(r))\quad\text{ as }r\to 0,\] for all \(\varepsilon>0\). 
This fact combined together with (4.36) imply that \(\mathcal{H}^{d-\alpha}_{\rm euc}(F_{1})=\mathcal{H}^{d-\alpha+\varepsilon}_{ \rm euc}(F_{2})=0\) for all \(\varepsilon>0\). On the other hand, (4.35) ensures that \(\mathcal{E}_{\text{euc},d-\alpha}(\mu_{2})<\infty\) and then \(\mathcal{C}^{d-\alpha}_{\rm euc}(F_{2})>0\). Moreover, repeating the same argument as (4.35), we obtain that \(\mathcal{E}_{\text{euc},d-\alpha-\varepsilon}(\mu_{1})<\infty\) and then \(\mathcal{C}^{d-\alpha-\varepsilon}_{\rm euc}(F_{1})>0\) for all \(\varepsilon>0\) small enough. Hence, combining all the previous facts we infer than \[d-\alpha-\varepsilon\leq{\rm dim}_{\rm euc}(F_{1})\leq d-\alpha\leq{\rm dim}_ {\rm euc}(F_{2})\leq d-\alpha+\varepsilon\quad\text{for all}\quad\varepsilon>0.\] Since \(\varepsilon>0\) is arbitrary, we deduce that \({\rm dim}_{\rm euc}(F_{1})={\rm dim}_{\rm euc}(F_{2})=d-\alpha\), which finishes the proof. The next lemma, whose proof establishes the existence of measures \(\mu_{1}\) and \(\mu_{2}\) satisfying conditions (4.31) and (4.32), is enough to conclude the proof of the proposition. **Lemma 4.10**.: _Let \(\alpha\in(0,d)\) and \(\theta>1\), then there exist two compact subsets \(F_{1}\) and \(F_{2}\) of \([-M,M]^{d}\) which respectively, support two probability measures \(\mu_{1}\) and \(\mu_{2}\) satisfying (4.31) and (4.32)._ Before proving this lemma, we give the following key result for constructing the measures in (4.31) and (4.32), which appears as Proposition 7.4 in [6]. The proof of that proposition comes from the procedure for constructing the classical Cantor set and its associated singular continuous distribution function, which is then adapted to a scale that might involve a regularly/slowly varying function in general rather than a power function. **Proposition 4.11**.: _(Appendix B Proposition 7.4 in [6]) Let \(\psi\) be a function satisfying_ \[\psi(0)=0\quad\text{and}\quad\psi(2x)<2\psi(x)\quad\text{for all }x\in(0,x_{0}), \tag{4.37}\] _for some \(x_{0}\in(0,1)\). Then there exists a Borel set \(G\subset[0,1]\) which support a probability measure \(\nu\) such that_ \[{\bf c}_{0}^{-1}\,\psi(r)\leq\nu([a-r,a+r])\leq{\bf c}_{0}\,\psi(r)\quad\text{ for all }r\in[0,x_{0}]\text{ and }\,a\in G. \tag{4.38}\] Proof of Lemma 4.10.: First, let us define the functions \(\psi_{1}(r):=r^{1-\alpha/d}\ell^{1/d}(r)\log^{\theta/d}(e/r)\) and \(\psi_{2}(r):=r^{1-\alpha/d}\log^{-\theta/d}(e/r)\). Since \(\alpha<d\), one readily checks that \(\psi_{i}\) for \(i=1,2\) are continuous increasing functions on \((0,1)\) such that (4.37) is satisfied. Therefore, using Proposition 4.11, there exist two Borel probability measures \(\mu_{0,1}\) and \(\mu_{0,2}\) supported by two compact subsets \(F_{0,1}\) and \(F_{0,2}\) of \([0,1]\), respectively, and two positive constants \(\mathbf{c}_{1},\mathbf{c}_{2}\) such that for \(i=1,2\) we have \[\mathbf{c}_{i}^{-1}\,\psi_{i}(r)\leq\mu_{0,i}([x-r,x+r])\leq\mathbf{c}_{i}\, \psi_{i}(r)\quad\text{for all }r\in(0,r_{0})\text{ and }\,x\in F_{0,i}, \tag{4.39}\] for some \(r_{0}\in(0,1)\). Now, let \(\mu_{i}:=\underset{j=1}{\overset{d}{\otimes}}\mu_{0,i}\) and \(F_{i}:=\underset{j=1}{\overset{d}{\times}}F_{0,i}\) for \(i=1,2\). Then using (4.39) and the definition of the measure \(\mu_{i}\), we obtain that \[\mathbf{c}_{i}^{-d}\,\psi_{i}^{d}(r)\leq\mu_{i}\left(\prod_{j=1}^{d}[x_{j}-r, x_{j}+r]\right)\leq\mathbf{c}_{i}^{d}\,\psi_{i}^{d}(r)\quad\text{for all }r\in(0,1)\text{ and }\,(x_{1},\ldots,x_{d})\in F_{i}\, \tag{4.40}\] for \(i=1,2\). 
The fact that the Euclidean norm \(\|.\|_{2}\) and the maximum norm \(\|.\|_{\infty}\) are equivalent ensures that (4.31) and (4.32) follow with \(\mathbf{c}_{5}\) depending on the constants \(\mathbf{c}_{1}\), \(\theta\), \(\alpha\), \(d\) and \(\ell\), and the constant \(\mathbf{c}_{6}\) depending on the constants \(\mathbf{c}_{2}\), \(\theta\), \(\alpha\) and \(d\). Hence the proof is complete. ### Co-dimension of the image set \(X(E)\) In this final subsection we consider the so-called stochastic codimension of our image sets. For a random Borel set \(K\subset\mathbb{R}^{d}\) the upper and lower stochastic codimensions of \(K\) are defined as follows: \[\underline{\text{codim}}(K):=\sup\left\{\beta\leq d\,:\ \text{ for all }F\subset\mathbb{R}^{d}\text{ s.t. }\ \dim_{euc}(F)<\beta\text{ we have }\mathbb{P}\{K\cap F\neq\varnothing\}=0\right\}, \tag{4.41}\] and \[\overline{\text{codim}}(K):=\inf\left\{\beta\leq d\,:\ \text{ for all }F\subset\mathbb{R}^{d}\text{ s.t. }\ \dim_{euc}(F)>\beta\text{ we have }\mathbb{P}\{K\cap F\neq\varnothing\}>0\right\}. \tag{4.42}\] The above definitions can be found in [13]. Moreover, [13, Lemma 4.7.1 p. 435] provides the following summary \[\mathbb{P}(K\cap F\neq\varnothing)\begin{cases}>0,&\text{ whenever }\dim_{euc}(F)>\overline{\text{codim}}(K)\\ =0,&\text{ whenever }\dim_{euc}(F)<\underline{\text{codim}}(K)\end{cases}. \tag{4.43}\] It is worth noting that the upper and lower stochastic codimensions of \(K\) are not random, even if \(K\) is a random set. Notice that \(\underline{\text{codim}}(K)\leq\overline{\text{codim}}(K)\) for all \(K\). Moreover, in the case when \(\underline{\text{codim}}(K)=\overline{\text{codim}}(K)\), we write \(\text{codim}(K)\) for the common value and call it the stochastic codimension of \(K\). Let \((Y,\rho)\) be a metric space. We recall that the upper Minkowski dimension of a Borel set \(G\subset Y\), in the metric \(\rho\), is defined as \[\overline{\dim}_{\rho,M}(G)=\inf\{\alpha:\exists\ \mathbf{c}(\alpha)>0\,\text{ such that }N_{\rho}(G,r)\leqslant\mathbf{c}(\alpha)\,r^{-\alpha}\text{ for all }r>0\}, \tag{4.44}\] where \(N_{\rho}(G,r)\) is the smallest number of balls of radius \(r\) in the metric \(\rho\) needed to cover \(G\). The following lemma, which shows how the Minkowski dimension can be helpful in estimating Hausdorff dimensions, will be useful for the rest of this section, particularly in establishing our formula for the dimension of the Cartesian product of two Borel sets, where at least one is Ahlfors-David regular. **Lemma 4.12**.: _Let \(E\subset[a,b]\) and \(F\subset[-M,M]^{d}\) be two bounded Borel sets. Then we have_ \[\begin{split}\dim_{\delta}(E)+\dim_{\mathrm{euc}}(F)&\leq\dim_{\rho_{\delta}}(E\times F)\\ &\leq\left(\overline{\dim}_{\delta,M}(E)+\dim_{\mathrm{euc}}(F)\right)\wedge\left(\dim_{\delta}(E)+\overline{\dim}_{\mathrm{euc},M}(F)\right).\end{split} \tag{4.45}\] _Moreover, if \(E\) (resp. \(F\)) is Ahlfors-David regular, in the metric \(\delta\) (resp. the Euclidean metric), then_ \[\dim_{\delta}(E)=\overline{\dim}_{\delta,M}(E)\quad(\text{resp.}\quad\dim_{\mathrm{euc}}(F)=\overline{\dim}_{\mathrm{euc},M}(F)). \tag{4.46}\] _In that case, i.e. when one of \(E\) or \(F\) is Ahlfors-David regular in its associated metric, we have_ \[\dim_{\rho_{\delta}}(E\times F)=\dim_{\delta}(E)+\dim_{\mathrm{euc}}(F). \tag{4.47}\] Proof.: We start by proving the lower bound in (4.45).
Let us assume that \(\dim_{\delta}(E)>0\) and \(\dim_{\mathrm{euc}}(F)>0\) otherwise when one of these dimensions is equal to zero, the result can be readily deduced from the property that the Hausdorff dimension does not increase under projection. Let \(\alpha\in(0,\dim_{\delta}(E))\) and \(\beta\in(0,\dim_{\mathrm{euc}}(F))\); then \(\mathcal{C}^{\beta}_{\mathrm{euc}}(E)>0\) and by Frostman's theorem there is a probability measures \(\nu\) supported on \(E\) such that \[\nu(B_{\delta}(t,r))\leq\mathsf{c}_{5}\,r^{\alpha}\,\text{ for all }t\in[a,b] \text{ and }r\in(0,1).\] Now, using [6, Proposition 2.1-i)] we have \(\mathcal{C}^{\alpha+\beta}_{\rho_{\delta}}(E\times F)\geq\mathsf{c}_{6}\, \mathcal{C}^{\beta}_{\mathrm{euc}}(F)>0\). Hence \(\dim_{\rho_{\delta}}(E\times F)\geq\alpha+\beta\). Letting \(\alpha\uparrow\dim_{\delta}(E)\) and \(\beta\uparrow\dim_{\mathrm{euc}}(F)\), the lower inequality in (4.45) follows. For the upper bound, let \(\alpha>\overline{\dim}_{\delta,M}(E)\) and \(\beta>\dim_{\mathrm{euc}}(F)\), then \(\mathcal{H}^{\beta}_{\mathrm{euc}}(F)=0\) and \[\mathrm{N}_{\delta}(E,r)\leq\mathsf{c}_{7}\,r^{-\alpha}\quad\text{ for all }r>0. \tag{4.48}\] By using [6, Proposition 2.1-ii)] we obtain \(\mathcal{H}^{\alpha+\beta}_{\rho_{\delta}}(E\times F)\leq\mathsf{c}_{8}\, \mathcal{H}^{\beta}_{\mathrm{euc}}(F)=0\). Hence \(\dim_{\rho_{\delta}}(E\times F)\leq\alpha+\beta\). Letting \(\alpha\downarrow\overline{\dim}_{\delta,M}(E)\) and \(\beta\downarrow\dim_{\mathrm{euc}}(F)\), the first term of the upper inequality in (4.45) follows. The second term follows in the same way. For the statement (4.46) it suffices to go through the same lines of the proof of the Euclidean case, which is shown in [15, Theorem 5.7 p. 80]. The last statement of the lemma follows immediately from its first two statements. We are ready to state and easily prove a formula for the stochastic codimension of our processes' image sets. **Corollary 4.13**.: _Let \(X\) be a \(d\)-dimensional Gaussian process verifying the commensurability condition \((\mathbf{\Gamma})\) such that its standard deviation function \(\gamma\) satisfies the concavity Hypothesis (2.2) and the mild regularity condition \((\mathbf{C_{0+}})\). Let \(E\subset[0,1]\) be a Borel set such that \(\dim_{\delta}(E)=\overline{\dim}_{\delta,M}(E)\). Then we have_ \[\operatorname{codim}\left(X(E)\right)=(d-\dim_{\delta}(E))\lor 0. \tag{4.49}\] Proof.: First, using Corollary 4.2 and Lemma 4.12 we obtain that \[\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}\left\{\begin{array}{ll}>0& \mbox{ if }\dim_{\rm euc}(F)>d-\dim_{\delta}(E)\\ =0&\mbox{ if }\dim_{\rm euc}(F)<d-\dim_{\delta}(E)\end{array}\right.. \tag{4.50}\] If \(0<\dim_{\delta}(E)<d\,\) then (4.50) ensures immediately that \({\rm codim}(X(E))=d-\dim_{\delta}(E)\). On the other hand, if \(\dim_{\delta}(E)\geq d\) then (4.50) implies that \(\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}>0\) for all \(F\subset[-M,M]^{d}\) with \(\dim_{\rm euc}(F)>0\), which means that \(\overline{{\rm codim}}(X(E))=0\). Remains the case when \(\dim_{\delta}(E)=\overline{\dim_{\delta,M}}(E)=0\), for which (4.50) provides that \(\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}=0\) for all \(F\subset[-M,M]^{d}\) with \(\dim_{\rm euc}(F)<d\). This implies that \(\underline{{\rm codim}}(X(E))=d\). Hence the proof of (4.49) is then complete. We finish our paper with a discussion and a conjecture of what may happen when the mild regularity condition \(({\bf C_{0+}})\) fails to hold. 
The method of Theorem 4.1 leads to a lack of information on hitting probabilities estimates when that condition fails. For instance, in the logBm scale, i.e. when \(\delta(t,s)\asymp\log^{-\beta}(1/|t-s|)\) for some \(\beta>1/2\), the method Subsection 4.1 leads to a lower bound of \(\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}\) in terms of the \(\rho_{\delta}\)-capacity of \(E\times F\) with order \(d\), and to an upper bound in terms of the \(\rho_{\delta}\)-Hausdorff measure of \(E\times F\) with order \(d(1-1/2\beta)\). Namely we have \[c_{1}^{-1}{\cal C}_{\rho_{\delta}}^{d}(E\times F)\,\leq\mathbb{P}\left\{X(E) \cap F\neq\varnothing\right\}\leq c_{1}\,{\cal H}_{\rho_{\delta}}^{d(1-1/2 \beta)}(E\times F), \tag{4.51}\] which implies that \[\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}\left\{\begin{array}{ll}> 0&\mbox{ if }\dim_{\rho_{\delta}}(E\times F)>d\\ =0&\mbox{ if }\dim_{\rho_{\delta}}(E\times F)<d(1-1/2\beta)\end{array}\right.. \tag{4.52}\] If \(E\) is Ahlfors-David regular, by Lemma 4.12 we have \(\dim_{\rho_{\delta}}(E\times F)=\dim_{\delta}(E)+\dim_{\rm euc}(F)\). Therefore (4.52) takes the following form \[\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}\left\{\begin{array}{ll}> 0&\mbox{ if }\dim_{\rm euc}(F)>d-\dim_{\delta}(E)\\ =0&\mbox{ if }\dim_{\rm euc}(F)<d-\dim_{\delta}(E)-d/2\beta\end{array}\right.. \tag{4.53}\] When combining (4.41), (4.42) and (4.53) we get that \[\overline{{\rm codim}}\left(X(E)\right)\leq d-\dim_{\delta}(E)\quad\mbox{ and }\quad\underline{{\rm codim}}\left(X(E)\right)\geq d-\dim_{\delta}(E)-d/2\beta.\] On the other hand, it follows from (3.2) and (3.16), when \(\dim_{\delta}(E)\leq d\), that \(\dim_{\delta}(E)\) and \(\dim_{\delta}(E)+d/2\beta\) are lower and upper bounds for \(\dim_{\rm euc}X(E)\), respectively. Moreover Theorem 3.9, which holds without any regularity assumptions on the standard deviation function \(\gamma\), ensures that \(\dim_{\rm euc}X(E)={\bf c}(E)\) a.s., where \({\bf c}(E)\) is a non-random constant depending only on \(E\) and on the law of \(X\). Thus in the case of logBm, the constant \({\bf c}(E)\) lives in the interval \([\dim_{\delta}(E)\,,\,\dim_{\delta}(E)+d/2\beta]\), which becomes an increasingly precise estimate as one approaches the regularity realm of Condition \(({\bf C_{0+}})\). However, we conjecture that the constant \({\bf c}(E)\), whose value is unknown for highly irregular processes beyond that realm, is nonetheless directly connected to the image's stochastic codimension. In other words we conjecture the following. **Conjecture 4.1**.: _Let \(X\) be as in Theorem 3.9. Let \(E\subset[0,1]\) be a Borel set such that \(\dim_{\delta}(E)=\overline{\dim_{\delta,M}}(E)\leq{\bf c}(E)\leq d\), where \({\bf c}(E)\) was defined in that theorem as the almost sure value of \(\dim_{\rm euc}X(E)\). Then_ \[\mathbb{P}\left\{X(E)\cap F\neq\varnothing\right\}\left\{\begin{array}{ll}> 0&\mbox{ if }\dim_{\rm euc}(F)>d-{\bf c}(E)\,,\\ =0&\mbox{ if }\dim_{\rm euc}(F)<d-{\bf c}(E)\,.\end{array}\right. \tag{4.54}\] _In other words, we have the following formula for the stochastic codimension of \(X(E)\):_ \[{\rm codim}\left(X(E)\right)=d-{\bf c}(E).\]
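For a concrete sense of the gap left by (4.52), take for instance \(\beta=1\) and \(d=2\) in the logBm scale: the two thresholds become \[d=2\qquad\text{and}\qquad d\Big(1-\tfrac{1}{2\beta}\Big)=1,\] so the dichotomy in (4.52) is silent whenever \(\dim_{\rho_{\delta}}(E\times F)\in[1,2]\), and correspondingly \({\bf c}(E)\) is only localized to an interval of length \(d/2\beta=1\). Conjecture 4.1 asserts that, despite this gap, the hitting dichotomy is governed exactly by \({\bf c}(E)\).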
2307.16892
Origin of correlated isolated flat bands in copper-substituted lead phosphate apatite
A recent report of room temperature superconductivity at ambient pressure in Cu-substituted apatite ('LK99') has invigorated interest in the understanding of what materials and mechanisms can allow for high-temperature superconductivity. Here I perform density functional theory calculations on Cu-substituted lead phosphate apatite, identifying correlated isolated flat bands at the Fermi level, a common signature of high transition temperatures in already established families of superconductors. I elucidate the origins of these isolated bands as arising from a structural distortion induced by the Cu ions and a chiral charge density wave from the Pb lone pairs. These results suggest that a minimal two-band model can encompass much of the low-energy physics in this system. Finally, I discuss the implications of my results on possible superconductivity in Cu-doped apatite.
Sinéad M. Griffin
2023-07-31T17:58:17Z
http://arxiv.org/abs/2307.16892v2
# Origin of correlated isolated flat bands in copper-substituted lead phosphate apatite ###### Abstract A recent report of room temperature superconductivity at ambient pressure in Cu-substituted apatite ('LK99') has invigorated interest in the understanding of what materials and mechanisms can allow for high-temperature superconductivity. Here I perform density functional theory calculations on Cu-substituted lead phosphate apatite, identifying correlated isolated flat bands at the Fermi level, a common signature of high transition temperatures in already-established families of superconductors. I elucidate the origins of these isolated bands as arising from a structural distortion induced by the Cu ions and a chiral charge density wave from the Pb lone pairs. These results suggest that a minimal two-band model can encompass much of the low-energy physics in this system. Finally, I discuss the implications of my results on possible superconductivity in Cu-doped apatite. ## I Introduction High-T\({}_{C}\) superconductors are arguably the holy grail of condensed matter physics with huge potential applications for an energy-efficient future. The first class of superconductors that were considered to be high-T\({}_{C}\) were the cuprates which were discovered by Bednorz and Muller in 1987 [1]. The cuprates have been subsequently followed by several new classes including the Fe-pnictides in 2008 [2] and the nickelates [3]. While significant strides have been made in the discovery and understanding of high-T\({}_{C}\) superconductors, and we continue to unearth novel examples within established classes [4], a definitive roadmap to achieving room-temperature T\({}_{C}\) under ambient pressures has remained elusive. Common to many of these high-T\({}_{C}\) superconducting families are strongly correlated bands which can give rise to unconventional mechanisms for Cooper pair formation [5; 6], and proximity to multiple competing interactions such as antiferromagnetism, charge density waves and spin-density waves. These phases can compete or coexist with superconductivity where fluctuations between these states are believed to play a significant role for achieving high-T\({}_{C}\). Searching for these features in new materials systems is therefore a promising route for finding new classes of high-T\({}_{C}\) superconductors. For instance, the nickelate superconductors were originally predicted in theory by Anisimov, Bukhvalov and Rice as an analogy to the cuprate superconductors [7]. Similar approaches have also been proposed to selectively design a material with the sought-after isolated d-manifold that is associated with strong correlations [8], and have inspired high-throughput searches for good candidate materials, further expanding the horizons of high-Tc superconductivity [9]. The recent report of possible room temperature superconductivity at ambient pressures in Cu-substituted apatite (also known as 'LK99')[10; 11] motivates the need for a thorough understanding of the structure-property relationships in these compounds to begin to unravel their potential correlated physics. In this Letter, I use _ab initio_ calculations to elucidate the key competing interactions in Cu-doped apatite at the mean-field density functional level. ## II Methods I used the Vienna Ab initio Simulation Package (VASP) [12; 13; 14; 15] for all density functional theory (DFT) calculations with full calculation details given in the SI. 
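A minimal sketch of how such a spin-polarized DFT+U relaxation could be configured through ASE's VASP interface is given below; the structure file name, plane-wave cutoff, k-point mesh, and U value are illustrative placeholders rather than the exact settings, which are reported in the SI.

```python
# Illustrative sketch only: parameters are placeholders, not the settings from the SI.
from ase.io import read
from ase.calculators.vasp import Vasp

atoms = read("cu_substituted_apatite.cif")  # hypothetical structure file

atoms.calc = Vasp(
    xc="pbe",                                # PBE exchange-correlation functional
    encut=520,                               # plane-wave cutoff in eV (placeholder)
    kpts=(2, 2, 3),                          # k-point mesh (placeholder)
    ispin=2,                                 # spin-polarized calculation
    ldau_luj={"Cu": {"L": 2, "U": 4.0, "J": 0.0}},  # Hubbard U on the Cu d states
    ibrion=2, isif=3, nsw=100,               # relax ions and cell shape/volume
)
print(atoms.get_potential_energy())          # runs VASP and returns the final energy
```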
I applied a Hubbard-U correction to account for the underlocalization of the Cu-\(d\) states. I tested values of U between 2 eV and 6 eV, finding my results were similar for all values calculated. The results in the main text are for U = 4 eV which gives lattice parameters within 1% of experiment [10; 16]. ## III Results ### Structural Properties Apatites are materials with the general formula A\({}_{10}\)(TO\({}_{4}\))\({}_{6}\)X\({}_{2\pm x}\), where A = alkaline or rare earth metal; T = Ge, Si, or P; and X = halide, O, or OH. The name 'apatite' derives from the Greek _apat\(\bar{e}\)_ meaning 'deceit' as a result of the diverse range of forms it can take [17]. Here I consider the lead-phosphate apatite Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). Taking its structure reported from X-ray diffraction in Ref. [16] as the starting point, its structure following a full optimization is depicted in Fig. 1. It adopts the typical crystal structure of various apatite chemistries, namely it forms a network comprising PbO\({}_{6}\) prisms that are corner shared with PO\({}_{4}\) tetrahedra. I refer to these Pb as Pb(1), in keeping with the convention in literature [18]. This framework is filled with Pb\({}_{6}\)(OH)\({}_{2}\) where the (OH)\({}_{2}\) forms a chain in the center of a hexagonal
2309.11489
Text2Reward: Reward Shaping with Language Models for Reinforcement Learning
Designing reward functions is a longstanding challenge in reinforcement learning (RL); it requires specialized knowledge or domain data, leading to high costs for development. To address this, we introduce Text2Reward, a data-free framework that automates the generation and shaping of dense reward functions based on large language models (LLMs). Given a goal described in natural language, Text2Reward generates shaped dense reward functions as an executable program grounded in a compact representation of the environment. Unlike inverse RL and recent work that uses LLMs to write sparse reward codes or unshaped dense rewards with a constant function across timesteps, Text2Reward produces interpretable, free-form dense reward codes that cover a wide range of tasks, utilize existing packages, and allow iterative refinement with human feedback. We evaluate Text2Reward on two robotic manipulation benchmarks (ManiSkill2, MetaWorld) and two locomotion environments of MuJoCo. On 13 of the 17 manipulation tasks, policies trained with generated reward codes achieve similar or better task success rates and convergence speed than expert-written reward codes. For locomotion tasks, our method learns six novel locomotion behaviors with a success rate exceeding 94%. Furthermore, we show that the policies trained in the simulator with our method can be deployed in the real world. Finally, Text2Reward further improves the policies by refining their reward functions with human feedback. Video results are available at https://text-to-reward.github.io/ .
Tianbao Xie, Siheng Zhao, Chen Henry Wu, Yitao Liu, Qian Luo, Victor Zhong, Yanchao Yang, Tao Yu
2023-09-20T17:39:13Z
http://arxiv.org/abs/2309.11489v3
# Text2Reward: Automated Dense Reward Function Generation for Reinforcement Learning ###### Abstract Designing reward functions is a longstanding challenge in reinforcement learning (RL); it requires specialized knowledge or domain data, leading to high costs for development. To address this, we introduce Text2Reward, a data-free framework that automates the generation of dense reward functions based on large language models (LLMs). Given a goal described in natural language, Text2Reward generates dense reward functions as an executable program grounded in a compact representation of the environment. Unlike inverse RL and recent work that uses LLMs to write sparse reward codes, Text2Reward produces interpretable, free-form dense reward codes that cover a wide range of tasks, utilize existing packages, and allow iterative refinement with human feedback. We evaluate Text2Reward on two robotic manipulation benchmarks (ManiSkill2, MetaWorld) and two locomotion environments of MuJoCo. On 13 of the 17 manipulation tasks, policies trained with generated reward codes achieve similar or better task success rates and convergence speed than expert-written reward codes. For locomotion tasks, our method learns six novel locomotion behaviors with a success rate exceeding 94%. Furthermore, we show that the policies trained in the simulator with our method can be deployed in the real world. Finally, Text2Reward further improves the policies by refining their reward functions with human feedback. Video results are available at [https://text-to-reward.github.io](https://text-to-reward.github.io). ## 1 Introduction Reward shaping (Ng et al., 1999) remains a long-standing challenge in reinforcement learning (RL); it aims to design reward functions that guide an agent towards desired behaviors more efficiently. Traditionally, reward shaping is often done by manually designing rewards based on expert intuition and heuristics, while it is a time-consuming process that demands expertise and can be sub-optimal. Inverse reinforcement learning (IRL) (Ziebart et al., 2008; Wulfmeier et al., 2016; Finn et al., 2016) and preference learning (Christiano et al., 2017; Ibarz et al., 2018; Lee et al., 2021; Park et al., 2022) have emerged as potential solutions to reward shaping. A reward model is learned from human demonstrations or preference-based feedback. However, both strategies still require considerable human effort or data collection; also, the neural network-based reward models are not interpretable and cannot be generalized out of the domains of the training data. This paper introduces a novel framework, Text2Reward, to write dense reward code based on goal descriptions. Given an RL goal (e.g., "push the chair to the marked position"), Text2Reward generates dense reward code (Figure 1 middle) based on large language models (LLMs), grounded on a compact, Python representation of the environment (Figure 1 left). The dense reward code is then used by an RL algorithm such as PPO (Schulman et al., 2017) and SAC (Haampoi et al., 2018) to train a policy (Figure 1 right). Different from inverse RL, Text2Reward is data-free and generates symbolic reward with high interpretability. Different from recent work (Yu et al., 2023) that used LLMs to write sparse reward code (i.e., the reward is non-zero only when the episode ends) with hand-designed APIs, our free-form dense reward code has a wider coverage of tasks and can utilize established coding packages (e.g. NumPy operations over point clouds and agent positions). 
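As an illustration of the kind of free-form dense reward code we have in mind, the sketch below shows a staged reward for an instruction such as "push the chair to the marked position"; the class layout, attribute names, and coefficients are illustrative stand-ins, not our actual environment abstraction or a verbatim generated output.

```python
# Illustrative stand-in: a shaped dense reward built only from robot/object state
# and NumPy operations; names and coefficients are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class Chair:
    pcd: np.ndarray        # (N, 3) point cloud sampled on the chair
    velocity: np.ndarray   # (3,) linear velocity of the chair base

@dataclass
class Robot:
    base_position: np.ndarray  # (3,) position of the mobile base
    qvel: np.ndarray           # joint velocities

def compute_dense_reward(robot: Robot, chair: Chair, target_xy: np.ndarray) -> float:
    chair_xy = chair.pcd[:, :2].mean(axis=0)  # chair location estimated from its point cloud
    # Stage 1: approach -- drive the base next to the chair.
    approach = 1.0 - np.tanh(np.linalg.norm(robot.base_position[:2] - chair_xy))
    # Stage 2: push -- move the chair toward the marked target position.
    progress = 1.0 - np.tanh(np.linalg.norm(chair_xy - target_xy))
    # Regularization: smooth joint motion and a chair that is not sent flying.
    penalty = (0.01 * float(np.square(robot.qvel).sum())
               + 0.05 * float(np.linalg.norm(chair.velocity)))
    return float(approach + progress - penalty)
```

Such a function returns a scalar at every timestep, so progress through each stage is credited continuously rather than only at episode termination.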
Finally, given the sensitivity of RL training and the ambiguity of language, the RL policy may fail to achieve the goal or achieve it in unintended ways. Text2Reward addresses this problem by executing the learned policy in the environment, requesting human feedback, and refining the reward accordingly. We conduct systematic experiments on two robotics manipulation benchmarks (ManiSkill2(Gu et al., 2023), MetaWorld(Yu et al., 2020)) and two locomotion environments of MuJoCo(Brockman et al., 2016), as cases. On 13 out of 17 manipulation tasks, policies trained with our generated reward code achieve comparable or better success rates and convergence speed than the ground truth reward code carefully tuned by human experts. For locomotion, Text2Reward learns 6 novel locomotion behaviors with over 94% success rate. We also demonstrate that the policy trained in the simulator can be deployed on a real Franka Panda robot. With human feedback of less than 3 iterations, our method can iteratively improve the success rate of learned policy from 0 to almost 100%, as well as resolve task ambiguity. In summary, the experimental results demonstrated that Text2Reward can generate generalizable and interpretable dense reward code, enabling a wide coverage of RL tasks and a human-in-the-loop pipeline. We hope that the results can inspire further explorations in the intersection of reinforcement learning and code generation. ## 2 Approach We propose the Text2Reward framework for reward shaping. Text2Reward takes as input a goal instruction and a description of the environment and generates a dense reward function to be used by any RL algorithm. In this section, we introduce the background and details of Text2Reward. ### Background Reward codeReinforcement learning (RL) aims to learn a policy that maximizes the expected reward in an episode. To train a policy to achieve a goal, the key is to design a reward function that specifies the goal. The reward function can take various forms such as a neural network or a piece of reward code. In this paper, we focus on the reward code given its interpretability. In this case, the observation and the action are represented as variables, such that the reward does not need to handle perception - it only reasons about abstract variables and APIs in code. Reward shapingReinforcement learning from task completion rewards is difficult because the reward signals are sparse and delayed Sutton & Barto (2005). A dense reward function is useful since it encourages key intermediate steps and regularization that help achieve the goal. In the form of code, the dense reward function returns a scalar value at each timestep, instead of only at the final timestep. Figure 1: An overview of Text2Reward of three stages: _Expert Abstraction_ provides an abstraction of the environment as a hierarchy of Pythonic classes. _User Instruction_ describes the goal to be achieved in natural language. _User Feedback_ allows users to summarize the failure mode or their preferences, which are used to improve the reward code. ### Zero-Shot and Few-Shot Dense Reward Generation In this part, we describe the core of Text2Reward for zero-shot and few-shot dense reward generation. Detailed prompt examples can be found in the Appendix C. Interactive generation is described in the next subsection. InstructionThe instruction is a natural language sentence that describes what we want the agent to achieve (e.g. "push the chair to the marked position"). 
It can be provided by the user, or it can be one of the subgoals for a long-horizon task, planned by the LLM. Environment abstractionTo ground reward generation in an environment, it is necessary for the model to know how object states are represented in the environment, such as the configuration of robots and objects, and what functions can be called. We adopt a compact representation in Pythonic style as shown in Figure 1, which utilizes Python class, typing, and comment. Compared to listing all environment-specific information in the list or table format, Pythonic representation has a higher level of abstraction and allows us to write general, reusable prompts across different environments. Moreover, this Pythonic representation is prevalent in LLMs pre-training data, making it easier for the LLM to understand the environment. Background knowledgeGenerating dense reward codes can be challenging for LLMs due to the scarcity of data in these domains. Recent works have shown the benefits of providing relevant function information and usage examples to facilitate code generation (Shi et al., 2022; Zhou et al., 2022). Inspired by them, we provide functions that can be helpful in this environment as background knowledge (e.g., NumPy/SciPy functions for pairwise distance and quaternion computation, specified by their input and output types and natural language explanations). Few-shot examplesProviding relevant examples as input has been shown to be useful in helping LLMs solve tasks. We assume access to a pool of pairs of instructions and verified reward codes. The library can be initialized by experts and then continually extended by our generated dense reward code. We utilize the sentence embedding model from Su et al. (2022) to encode each instruction. Given a new instruction, we use the embedding to retrieve the top-\(k\) similar instructions and concatenate the instruction-code pairs as few-shot examples. Reducing error with code executionOnce the reward code is generated, we execute the code in the code interpreter. This step may give us valuable feedback, e.g., syntax errors and runtime errors (e.g., shape mismatch between matrices). In line with previous works (Le et al., 2022; Olausson et al., 2023), we utilize the feedback from code execution as a tool for ongoing refinement within the LLM. This iterative process fosters the systematic rectification of errors and continues until the code is devoid of errors. Our experiments show that this step decreases error rates from 10% to near zero. ### Improving Reward Code from Human Feedback Humans seldom specify precise intent in a single interaction. In an optimistic scenario, the initial generated reward functions may be semantically correct but practically sub-optimal. For instance, users instructing a robot to open a cabinet may not specify whether to pull the handle or the edge of the door. While both methods open the cabinet, the former is preferable because it is less likely to damage the furniture and the robot. In a pessimistic scenario, the initially generated reward function may be too difficult to accomplish. For instance, telling a robot to "clean up the desk" results in a more difficult learning process than telling the robot to "pick up items on the desk and then put them in the drawer below". While both descriptions specify the same intent, the latter provides intermediate objectives that simplify the learning problem. 
To address the problem of under-specified instructions resulting in sub-optimal reward functions, Text2Reward actively requests human feedback from users to improve the generated reward functions. After every RL training cycle, the users are provided with rollout videos of task execution by the current policy. Users then offer critical insights and feedback based on the video, identifying areas of improvement or errors. This feedback is integrated into subsequent prompts to generate more refined and efficient reward functions. In the first example of opening a cabinet, the user may say "use the door handles" to discourage the robot from damaging itself and the furniture by opening using the door edges. In the second example of cleaning a desk, the user may say "pick up the items and store them in the drawer" to encourage the robot to solve sub-tasks. It is noteworthy that this setup encourages the participation of general users, devoid of expertise in programming or RL, enabling a democratized approach to optimizing system functionality through natural language instructions, thus eliminating the necessity for expert intervention. ## 3 Experiment Setup We evaluate Text2Reward on manipulation and locomotion tasks across three environments: MetaWorld, ManiSkill2, and Gym MuJoCo. We use GPT-41 as the LLM. We choose the RL algorithm (PPO or SAC) and set default hyper-parameters according to the performance of human-written reward, and fix that in all experiments on this task to do RL training. Experiment hyperparameters are listed in Appendix A. Footnote 1: [https://platform.openai.com/docs/guides/gpt](https://platform.openai.com/docs/guides/gpt). This work uses gpt-4-0314. ### Manipulation Tasks We demonstrate manipulation on MetaWorld, a commonly used benchmark for Multi-task Robotics Learning and Preference-based Reinforcement Learning (Nair et al., 2022; Lee et al., 2021; Hejna III & Sadigh, 2023), and ManiSkill2, a platform showcasing a diverse range of object manipulation tasks executed within environments with realistic physical simulations. We evaluate a diverse set of manipulation tasks including pick-and-place, assembly, articulated object manipulation with revolute or sliding joint, and mobile manipulation. For all tasks, compare Text2Reward with _oracle_ reward functions tuned by human experts (provided in the original codebases). For RL training, we tune the hyperparameters such that the oracle reward functions have the best results, and then keep them fixed when running Text2Reward. The full list of tasks, corresponding input instructions, and details of simulated environments are found in Appendix B. ### Locomotion Tasks For locomotion tasks, we demonstrate our method using Gym MuJoCo. Due to the lack of expert-written reward functions for locomotion tasks, we follow previous work (Christiano et al., 2017; Lee et al., 2021) to evaluate the policy based on human judgment of the rollout video. We develop six novel tasks in total for two different locomotion agents, Hopper (a 2D unipedal robot) and Ant (a 3D quadruped robot). The tasks include Move Forward, Front Flip and Back Flip for Hopper, as well as Move Forward, Lie Down, and Wave Leg for Ant. 
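Across the manipulation and locomotion setups above, the generated reward simply replaces the environment reward inside a standard training loop. The sketch below illustrates this wiring, assuming gymnasium and stable-baselines3; the environment id and the stub reward are placeholders rather than the benchmark tasks or an actual generated function.

```python
# Sketch under stated assumptions (gymnasium + stable-baselines3); the env id and the
# stub generated_reward are placeholders, not the benchmark tasks or LLM output.
import gymnasium as gym
from stable_baselines3 import SAC

def generated_reward(obs, action, info) -> float:
    # Stand-in for LLM-generated dense reward code: keep the pendulum upright
    # (obs[0] = cos(theta)) while penalizing large torques.
    return float(obs[0]) - 0.001 * float(action[0] ** 2)

class GeneratedRewardWrapper(gym.Wrapper):
    """Overrides the environment reward with the generated dense reward at every step."""
    def __init__(self, env, reward_fn):
        super().__init__(env)
        self.reward_fn = reward_fn

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        return obs, self.reward_fn(obs, action, info), terminated, truncated, info

env = GeneratedRewardWrapper(gym.make("Pendulum-v1"), generated_reward)
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=100_000)
```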
### Real Robot Manipulation Unlike model-based methods such as model predictive control (MPC) (Howell et al., 2022), which require further parameter adjustment, our RL agents, trained in a simulator, can be directly deployed in the real world, necessitating only minor calibration and the introduction of random noise for sim-to-real transfer. To demonstrate this benefit, as well as verify the generalization ability of the RL policy trained with our generated reward, we conducted a real robot manipulation experiment with the Franka Panda robot arm. We verify our approach on two manipulation tasks: Pick Cube and Stack Cube. To obtain the object state required by our RL policy, we use the Segment Anything Model (SAM) (Kirillov et al., 2023) and a depth camera to get the estimated pose of objects. Specifically, we query SAM to segment each object in the scene. The segmentation map and the depth map together give us an incomplete point cloud. We then estimate the pose of the object based on this point cloud. ### Interactive Generation with Human Feedback We conduct a human feedback study on Stack Cube, a task that is challenging for single-round reward code generation, to investigate whether human feedback can improve or fix the reward code and enable RL algorithms to successfully train policies in the given environment. This task involves reaching the cube, grasping the cube, placing the cube on top of another cube, and releasing the cube while making it static. We sample 3 generated codes from the zero-shot and few-shot methods and perform this task with two rounds of feedback. In addition, we also conduct experiments on one locomotion task, Ant Lie Down, where the initial training results do not satisfy the user's preference. The general user who provides the feedback can only see the rollout video and learning curve, without any code. The authors provide feedback as per the described setup. ## 4 Results and Analysis ### Main Results This section shows the results of Text2Reward for robotics manipulation and locomotion. Generated reward function samples can be found in Appendix D. Figure 3: Learning curves on MetaWorld under the zero-shot reward generation setting, measured by success rate. Notations follow Figure 2. Additional results can be found in Appendix E. Figure 2: Learning curves on **ManiSkill2** under zero-shot and few-shot reward generation settings, measured by task success rate. _Oracle_ means the expert-written reward function provided by the environment; _zero-shot_ and _few-shot_ stand for reward functions generated by Text2Reward without and with retrieving examples from expert-written reward functions for prompting. The solid line represents the mean success rate, while the shaded regions correspond to the standard deviation, both calculated across five different random seeds. **Text2Reward \(\simeq\) expert-designed rewards on manipulation tasks.** Quantitative results from the ManiSkill2 and MetaWorld environments are shown in Figures 2 and 3. In the figures, _Oracle_ means the expert-written dense reward function provided by the environment; _zero-shot_ and _few-shot_ stand for the dense reward functions generated by Text2Reward without human feedback under the zero-shot and few-shot prompting paradigms, respectively. On 13 of the 17 tasks, the final performance (i.e., success rate after convergence and convergence speed) of Text2Reward achieves comparable results to the human oracle.
Surprisingly, on 4 of the 17 tasks, zero-shot and few-shot Text2Reward can even outperform the human oracle, in terms of either the convergence speed (e.g., Open Cabinet Door in ManiSkill2, Handle Press in MetaWorld) or the success rate (e.g., Pick Cube in ManiSkill2, Drawer Open in MetaWorld). This suggests that LLMs have the potential to draft high-quality dense reward functions without any human intervention. Furthermore, as illustrated in Figure 2, in 2 of the 6 tasks that are not fully solvable, the few-shot paradigm markedly outperforms the zero-shot approach. This underscores the benefits of utilizing few-shot examples from our skills library in enhancing the efficacy of the generated reward functions for RL training. **Text2Reward can learn novel locomotion behaviors.** Table 1 shows the success rate of all six tasks trained with the reward generated under the zero-shot setting, evaluated by humans watching the rollout videos. The results suggest that our method can generate dense reward functions that generalize to novel locomotion tasks. Image samples from the Gym MuJoCo environment of three selected tasks are shown in Figure 4. Corresponding full video results are available here. **Demonstrating Text2Reward on a real robot.** Figure 5 shows the key frames of real robot manipulation on two tasks: Pick Cube and Stack Cube. Here, we use the same 7 DoF Franka Panda robot arm as in the ManiSkill2 simulation environment. Results suggest that the RL policy trained in the simulator using the dense reward function generated by Text2Reward can be successfully deployed to the real world. Full videos of robot execution are on our project page. \begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{2}{c}{Hopper} & \multicolumn{2}{c}{Ant} \\ \hline Task & Success Rate & Task & Success Rate \\ \hline Move Forward & 100\% & Move Forward & 94\% \\ Front Flip & 99\% & Lie Down & 98\% \\ Back Flip & 100\% & Wave Leg & 95\% \\ \hline \hline \end{tabular} \end{table} Table 1: Success rate of locomotion tasks in Gym MuJoCo trained on reward functions generated in the zero-shot setting. Each task is tested on 100 rollouts, and the task success is decided by a human annotator after watching the rollout video. Figure 4: Novel locomotion behaviors acquired through Text2Reward under the zero-shot reward generation setting. These images are sampled from policy rollouts in Gym MuJoCo. **Text2Reward can resolve ambiguity from human feedback.** To demonstrate the ability of Text2Reward to address this problem, we show one case in which "control the Ant to lie down" itself has ambiguity in terms of the orientation of the Ant, as shown in Figure 6. After observing the training result of this instruction, the user can give feedback in natural language, e.g., "the Ant's torso should be top down, not bottom up". Then Text2Reward will regenerate the reward code and train a new policy, which successfully caters to the user's intent. **Text2Reward can improve RL training from human feedback.** Given the sensitivity of RL training, sometimes single-turn generation cannot produce reward functions good enough to finish the task. In these cases, Text2Reward asks for human feedback on the failure mode and tries to improve the dense reward. In Figure 7, we demonstrate this on the Stack Cube task, where zero-shot and few-shot generation in a single turn fail to solve the task stably.
For few-shot generation, we observed that interactive code generation with human feedback can improve the success rate from zero to one, as well as speed up the convergence of training. However, this improvement depends on the quality of the reward function generated initially (i.e., _iter0_). For relatively low-quality reward functions (e.g., zero-shot generated codes), the improvement in success rate after iterations of feedback is not as pronounced as for few-shot generated codes. This problem may be addressed by a sparse-to-dense reward generation paradigm, which first generates the stage rewards and then interactively generates the dense reward terms. We leave this paradigm for future work. ### Error Analysis on Generated Function We conduct an error analysis on the generated reward functions. We manually go over 100 reward function examples for each of the zero-shot and few-shot prompting settings on 10 different tasks of ManiSkill2, with 10 different reward codes per task. These reward functions are generated specifically for our error analysis and without execution feedback to the LLM. We classify them into 4 error types: Class attribute misuse; Attribute hallucination (referring to attributes that do not exist); Syntax/shape error; Wrong package. Figure 5: Sampled images for real robot manipulation on Pick Cube (i.e., pick a cube and move it to a predefined position) and Stack Cube (i.e., stack a cube onto another one). Figure 6: Interactive reward generation from human feedback. The original instruction is _control the Ant to lie down_, which is ambiguous in the orientation of the Ant. Our interactive framework allows the user to provide feedback based on the rollout observation. Table 2 shows that the overall error rate is around 10%. Within these error samples, 30% of the errors are caused by code syntax or shape mismatches; the rest are introduced during grounding, when the model fails to select the correct existing function or attribute from the background knowledge context. This indicates there is still room to improve how the right functions and attributes are chosen, both for Text2Reward and for the code generation community more broadly. ## 5 Related Work **Reward Shaping.** Reward shaping remains a persistent challenge in the domain of reinforcement learning (RL). Traditionally, handcrafted reward functions are employed, yet crafting precise reward functions is a time-consuming process that demands expertise. Inverse reinforcement learning (IRL) emerges as a potential solution, where a non-linear reward model is recovered from expert trajectories to facilitate RL training (Ziebart et al., 2008; Wulfmeier et al., 2016; Finn et al., 2016). However, this technique necessitates a large amount of high-quality trajectory data, which can be elusive for complex and rare tasks. An alternative approach is preference learning, which develops a reward model based on human preferences (Christiano et al., 2017; Ibarz et al., 2018; Lee et al., 2021; Park et al., 2022; Zhu et al., 2023). In this method, humans distinguish preferences between pairs of actions, upon which a reward model is constructed utilizing the preference data. Nonetheless, this strategy still requires some human-annotated preference data, which is expensive or even hard to collect in some cases. Both of these prevalent approaches to reward shaping demand extensive high-quality data, resulting in compromised generalizability and low efficiency.
In contrast, Text2Reward excels with limited (or even zero) data input and can be easily generalized to new tasks in the environment. **Language Models in Reinforcement Learning.** Large Language Models (LLMs) have exhibited remarkable reasoning and planning capabilities (Wei et al., 2022; Huang et al., 2022a). Recent works have shown that the knowledge in LLMs can be helpful for RL and can transform the data-driven policy network acquisition paradigm (Carta et al., 2023; Wu et al., 2023). \begin{table} \begin{tabular}{l l c c} \hline \hline **Type of Error** & **Description of Error** & **Zero-shot** & **Few-shot** \\ \hline Class attribute misuse & Use other classes' attribute wrongly & 6\% & 4\% \\ Attribute hallucination & Invent nonexistent attribute & 3\% & 2\% \\ Syntax/shape error & Incorrect program grammar or shape mismatch & 3\% & 3\% \\ Wrong package & Import incorrect package function & 1\% & 1\% \\ Correct & Execute correctly without error & 87\% & 90\% \\ \hline \hline \end{tabular} \end{table} Table 2: Error distribution across 100 generated reward codes on ManiSkill2. Figure 7: Training iteration vs. success rate on Stack Cube with interactive generation. _Oracle_ is the reward code manually tuned by experts; _iter0_ is generated by Text2Reward without feedback; _iter1_ and _iter2_ are generated after 1 or 2 feedback iterations; _zero-shot_ and _few-shot_ stand for how the _iter0_ code is generated. The solid lines represent the mean success rate, and the shaded regions correspond to the standard deviation, computed over three samples. This trend sees LLM-powered autonomous agents coming to the fore, with a growing body of approaches that use LLMs as policy networks during the RL process, indicating a promising trajectory in this field (Yao et al., 2022; Shinn et al., 2023; Lin et al., 2023; Wang et al., 2023; Xu et al., 2023; Yao et al., 2023; Hao et al., 2023). Instead of directly using LLMs as the policy model or the reward model, Text2Reward generates dense reward code to train RL policies, which has an advantage in terms of the flexibility of agent model type and inference efficiency. **Language Models for Robotics.** Utilizing LLMs for embodied applications has emerged as a popular research trend, and typical directions include planning and reasoning through language model generation (Ahn et al., 2022; Zeng et al., 2022; Liang et al., 2022; Huang et al., 2022; Singh et al., 2023; Song et al., 2022). Recent works have harnessed the capabilities of LLMs to assist in the learning of primitive tasks (Brohan et al., 2022; Huang et al., 2023; Brohan et al., 2023; Mu et al., 2023) by finetuning LLMs on robotic trajectories to predict primitive actions. Different from them, Text2Reward generates reward code to learn smaller policy networks. A recent work, Yu et al. (2023), combines sparse reward code and Model Predictive Control (MPC) to synthesize robotic actions. In contrast, our work trains model-free RL agents that exhibit greater adaptability. ## 6 Conclusion We proposed Text2Reward, an interactive reward code generation framework that uses LLMs to automate dense reward code generation for reinforcement learning. Our experiments showcased the effectiveness of our approach, as the RL policies trained with our generated reward codes were able to match or even surpass the performance of those trained with expert-designed codes in the majority of tasks. We also showcased real-world applicability by deploying a policy trained in a simulator on a real robot.
By incorporating human feedback, our approach iteratively refines the generated reward codes, addressing the challenge of language ambiguity and improving the success rates of learned policies. This interactive learning process allows for better alignment with human needs and preferences, leading to more robust and efficient reinforcement learning solutions. In conclusion, Text2Reward demonstrates the effectiveness of using natural language to transform human intentions and LLM knowledge into reward function code, which is then used to train policies. We hope that our work may serve as an inspiration for researchers across various disciplines, including but not limited to reinforcement learning and code generation, to further investigate this promising intersection of fields and contribute to the ongoing advancement of research in these areas. ## 7 Limitations and Future Work Our work demonstrates the effectiveness of generating dense reward functions for RL. We focus on the code-based reward format, which gives us high interpretability. However, the symbolic space may not cover all aspects of the reward. Furthermore, our method assumes that perception is already handled by other off-the-shelf components. Future work may consider combining code-based and neural-network-based reward design to leverage both symbolic reasoning and perception. Utilizing the knowledge derived from LLMs in creating such models shows promising prospects and could be advantageous in several scenarios (Wulfmeier et al., 2016; Finn et al., 2016; Christiano et al., 2017; Lee et al., 2021). We utilize GPT-4 as the LLM because of its strong performance in reasoning and coding (OpenAI, 2023). It remains important to test our framework with other LLMs and scaffolding to see how well they perform (Li et al., 2023; Roziere et al., 2023; OpenLemur, 2023). Although our method is simple yet effective, there is still room for improvement by designing methods that generate better reward functions, possibly leading to higher success rates and the ability to tackle more complex tasks. At present, our test cases primarily concentrate on robotics tasks, specifically manipulation and locomotion, to illustrate this approach. In the future, this research may find broader applications in various reinforcement learning related domains, including gaming (Brockman et al., 2016; Schrittwieser et al., 2020; Zhong et al., 2021; Fan et al., 2022), web navigation (Shi et al., 2017; Zhou et al., 2023), and household management (Puig et al., 2018; Shridhar et al., 2020a;b).
2309.15189
Detection of dominant large-scale coherent structures in turbulent pipe flow
Large-scale coherent structures are identified in turbulent pipe flow at $Re_\tau=181$ by having long lifetimes, living on large scales and travelling with a certain group velocity. A Characteristic Dynamic Mode Decomposition (CDMD) is used to detect events which meet these criteria. To this end, a temporal sequence of state vectors from Direct Numerical Simulations are rotated in space-time such that persistent dynamical modes on a hyper-surface are found travelling along its normal in space-time, which serves as the new time-like coordinate. Reconstruction of the candidate modes in physical space gives the low rank model of the flow. The modes within this subspace are highly aligned, but are separated from the remaining modes by larger angles. We are able to capture the essential features of the flow like the spectral energy distribution and Reynolds stresses with a subspace consisting of about 10 modes. The remaining modes are collected in two further subspaces, which distinguish themselves by their axial length scale and degree of isotropy.
Amir Shahirpour, Christoph Egbers, Jörn Sesterhenn
2023-09-26T18:49:35Z
http://arxiv.org/abs/2309.15189v1
###### Abstract Large-scale coherent structures are detected in turbulent pipe flow at \(Re_{\tau}=181\) by having long lifetimes, living on large scales and travelling with a certain group velocity. A Characteristic Dynamic Mode Decomposition (CDMD) is used to detect events which meet these criteria. To this end, a temporal sequence of state vectors from direct numerical simulations is rotated in space-time such that persistent dynamical modes on a hypersurface are found travelling along its normal in space-time, which serves as the new time-like coordinate. Reconstruction of the candidate modes in physical space gives the low rank model of the flow. The modes within this subspace are highly aligned, but are separated from the remaining modes by larger angles. We are able to capture the essential features of the flow like the spectral energy distribution and Reynolds stresses with a subspace consisting of about 10 modes. The remaining modes are collected in two further subspaces, which distinguish themselves by their axial length scale and degree of isotropy. Amir Shahirpour\({}^{1}\), Christoph Egbers\({}^{2}\) and Jörn Sesterhenn\({}^{1}\) Footnote †: Email address for correspondence: [email protected] \({}^{1}\)Lehrstuhl für Technische Mechanik und Strömungsmechanik, Universität Bayreuth, 95440 Bayreuth, Germany \({}^{2}\)Department of Aerodynamics and Fluid Mechanics, Brandenburg University of Technology Cottbus-Senftenberg ## 1 Introduction Large-scale energetic coherent structures detected in turbulent flows have become an inseparable part of turbulence studies. Proof of their existence is promising, as it implies that taking advantage of the notions of coherence and organization can shed light on high-dimensional turbulent flows with complex flow patterns. Coherence in space and time is commonly understood to be caused by flow properties which are maintained by the flow in space and within a certain frame of time, so that the maintained property, for instance a certain type of motion, can be perceived as the underlying basis for coherence. These structures contribute prominently to the turbulent kinetic energy while diffusing mass and momentum and carrying large desirable or undesirable effects such as enhanced mixing or increased drag (Marusic _et al._, 2010). In spite of the large number of studies in the last decade to understand their physical properties, and the ease with which they are spotted by the naked eye, there is still limited consensus in the scientific community on how to define these structures, what they physically look like, how long they live and how their length scales depend on Reynolds numbers. It is not fully understood what they feed on, how their regeneration mechanism works and how they interact with each other or with near-wall turbulence. Three groups of structures are well distinguished in the literature by their length scales and the wall-normal locations where they are found. Near-wall streaks are known as manifestations of the wall cycle of turbulence and have span-wise spacing of \(\lambda^{+}=100\) (Kline _et al._, 1967). Their regeneration mechanism has been observed in many studies where their self-sustainability has been shown.
An example would be the study by Jimenez & Moin (1991) where a minimal flow unit is simulated as the smallest channel flow that can maintain turbulence. Large scale motions (LSMs) are described as motions whose coherence is maintained as a result of eddies travelling at the same group velocity (Kim & Adrian, 1999). Measurements of Bailey & Smits (2010) show evidence for existence of such eddies in the outer layer being detached from the wall with small correlation with the near wall flow, whereas in the logarithmic region they are more likely to be attached to the wall. This suggests existence of attached LSMs in the near-wall region and detached ones in the outer layer. They are known to have stream-wise scale of 2-3 pipe radii and span-wise length scale of 1-1.5 radii (Guala _et al._, 2006). Very Large-Scale Motions in pipe and channel flow (referred to as VLSMs by Adrian and coworkers) or superstructures in boundary layer flows (named by Marusic and coworkers), appear to be longer and have streamwise length scale of 8-20 pipe radii (Vallikivi _et al._, 2015). While they are mostly seen in the logarithmic layer in boundary layer flow, they appear in the outer layer of internal flows (Monty _et al._, 2009). Kim & Adrian (1999) interpret VLSM as a result of stream-wise alignment of LSMs which exist in the outer layer, whereas del Alamo & Jimenez (2006) argue that their formation is the result of linear and nonlinear processes. Toh & Itano (2005) consider large-scale structures as part of the turbulence and argue that they feed on their interactions with the near-wall small-scale structures. Del Alamo & Jimenez (2006) on the other hand interpret them as self-sustained structures. Apart from their regeneration mechanism, many key questions concerning LSM and VLSM are still unanswered including a uniform scaling law for their identification as well as a clear understanding of their origin and evolution. Differing views on the origin and nature of low wave number VLSM question their dependence on geometry and outer layer variables. Spectral analysis has been one of the key approaches commonly used to learn about the properties of such structures. Their foot prints can be followed by observing the premultiplied velocity spectra which represent the energy distribution in the wave number space. At sufficiently large Reynolds numbers two peaks appear in contour plots of spectra which are associated with VLSM and LSM (Rosenberg _et al._, 2013). The signature of large-scale energetic structures are hereby followed and their length scales and energy content at different wall normal positions are determined. Taking advantage of this signature, Bauer _et al._ (2019) apply a two dimensional Fourier cut-off filter to separate the structures based on the their known length-scales to investigate which length scales are responsible for feeding the largest scales and which ones feed from them. Besides differing views on the nature and origin of turbulent structures, the suitable approach for their analysis is also still under debate. Following the spectral peaks helps to follow foot prints of structures, but cannot provide insight to their evolution and interactions. One of the major difficulties arising while studying the physical properties of large-scale coherent structures, is that many of the findings can be biased by influences of smaller-scale structures and instabilities. 
This has led to an increasing interest in extracting the structures from the turbulent flows and to study their properties in absence of small-scale structures. The latter, together with recent availability of large numerical and experimental datasets has led to increasing popularity of data driven methods. After introduction of Proper Orthogonal Decomposition (POD) to fluid dynamics by Lumley (1967), numerous variations of this method were proposed building on the main idea which was to extract spatial and temporal flow structures from numerical and experimental data by decomposing the flow to spatially uncorrelated modes. This was particularly desirable as the largest amount of energy could be captured with the fewest number of modes, but also required the flow to be projected to orthogonal basis, hence removing the possibility for the modes to linearly interact. An example would be the study by Hellstrom & Smits (2014) who applied snapshots POD (Sirovich, 1987) to cross-sectional PIV measurements, and found that the first 10 snapshots POD modes contribute 43% to average Reynolds shear stress and 15% to the kinetic energy. In a different approach, Dynamic Mode Decomposition (DMD) was introduced by Schmid & Sesterhenn (2008) decomposing the flow to correlated spatial modes possessing certain temporal frequencies and decay rates. Majority of these methods decompose the flow on a stationary frame of reference leading to the need for large number of modes to describe the convecting features in the transport-dominated flows. This issue is addressed by several studies (Rowley & Marsden (2000) and Reiss _et al._ (2018)) introducing a spatial transformation in form of a shift. Sesterhenn & Shahirpour (2019) proposed a different approach by applying a spatio-temporal transformation in form of a rotation in space and time on a moving frame of reference along the characteristics of the flow. Hereby, they observed a faster drop of singular values compared to the shifted reference frame. In what follows we apply a CDMD to DNS data of turbulent pipe flow. ## 2 Numerical methods and computational details The data used for the study is generated using an open-source, hybrid parallel DNS code (Lopez _et al._, 2020). Hereby, Navier-Stokes equations are solved in cylindrical coordinates for an incompressible pipe flow fulfilling mass and momentum conservations given by \[\nabla\cdot\mathbf{u}=0\,,\quad\partial_{t}\mathbf{u}+\mathbf{u}\cdot\nabla \mathbf{u}=-\nabla p+\frac{1}{Re_{b}}\nabla^{2}\mathbf{u}, \tag{1}\] where \(\mathbf{u}(\mathbf{x},t)\) and \(p(\mathbf{x},t)\) represent velocity field \((u,v,w)\) in cylindrical coordinates \(\mathbf{x}=(x,r,\theta)\) and dynamic pressure respectively. The governing equations are solved for velocity and pressure, being discretised with a combined Fourier-Galerkin / Finite Difference method in space and using a semi-implicit fractional-step of Hugues & Randriamampianina (1998), using second-order-accurate backwards differences and second order linear extrapolation for nonlinear term. More details on the numerical scheme can be found in the study by Shi _et al._ (2015). Simulations are carried out at bulk Reynolds number of \(Re_{b}=2RU_{b}/\nu=5300\) for pipe length of \(L=50R\) with \(R\), \(U_{b}\) and \(\nu\) being respectively the pipe radius, bulk velocity and kinematic viscosity. 
After the final grid refinement, calculations have been advanced for 400 convective time steps \(t_{c}=R/U_{b}\), during which \(\text{CFL}_{max}=0.5\) was maintained, leading to a simulation time step of \(dt=4.93\times 10^{-4}\,t_{c}\). The grid spacing measured in wall units is chosen so that there are 5 and 20 points below \(y^{\star}=1\) and \(y^{\star}=10\) respectively, with the first point in the vicinity of the wall at \(y^{\star}=0.026\). The + superscript denotes normalisation by inner scaling using the viscous length-scale \(\nu/u_{\tau}\) and friction velocity \(u_{\tau}=\sqrt{\tau_{w}/\rho}\), where \(\tau_{w}\) and \(\rho\) are the wall shear stress and density respectively. Further details on the simulation and grid spacing are given in table 1. Results are validated by comparing the statistical flow properties with benchmark DNS data in the next chapters. ## 3 Methodology ### Characteristic DMD Investigating transport-dominated phenomena on a stationary frame of reference adversely influences the observations. To remedy this issue, Sesterhenn & Shahirpour (2019) proposed a Characteristic DMD. The essence of a characteristic decomposition of the flow is to seek coherence as a persistent behaviour observed in space and time coupled together on a moving frame of reference, as opposed to spatial or temporal coherence individually. They introduced a transformation \(\mathcal{T}\) in the form of a rotation in space and time \(\mathcal{T}(\mathbf{u}(x,r,\theta,t))=\mathbf{u}(\xi,r,\theta,\tau)\) and used the drop of singular values as a measure of how well the convected phenomena can be described on each frame of reference. Two major advantages were presented for the spatio-temporal transformation. The first one is that convected phenomena can be described on the rotated frame with far fewer modes compared to a stationary frame. In addition, it was shown that singular values drop faster along the characteristics compared to those taken on a shifted moving frame which is obtained by a purely spatial transformation. The second advantage is that, as expected, the dynamics of the detected structures are captured more accurately. Having chosen the frame of reference, the decomposition method is selected based on the fact that the goal of this study is to analyse the interactions between the modes. We intend to present a framework in which the origins of structures, their regeneration mechanism, their sustainability and finally their decay process can be investigated. Therefore, the obtained eigenmodes should be found such that they can give energy to other modes or feed from them, and, as a result, should not be forced to be normal to each other. To this end, the standard dynamic mode decomposition (Schmid, 2010) has been taken as the main basis for decomposing the flow field. Three subsets of the modes are detected, reconstructed in spatio-temporal space and transformed back to physical space, where their contributions to the Reynolds stress tensor and their anisotropy invariant maps are studied. Further details of the method can be found in the relevant manuscript. A reference is needed to validate the identity of the captured structures. What many studies have in common in their definition of coherent structures is the footprint they leave behind in Fourier space, in the premultiplied energy spectra. Therefore, we validate the detected structures by how well they represent the spectral peak, and we use the velocity field as the state vector in our analysis.
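To illustrate the procedure numerically, a minimal sketch is given below for a single two-dimensional space-time slice \(u(x,t)\) stored as a NumPy array: candidate rotation angles are scanned, the drop of singular values is evaluated on the rotated data, and a standard DMD is carried out along the selected characteristic direction. The function names, the use of scipy.ndimage.rotate with linear interpolation, and the ratio \(\sigma_{1}/\sum_{i}\sigma_{i}\) as a proxy for the drop of singular values are simplifying assumptions of this sketch; the actual CDMD operates on the full three-component velocity field.

```python
import numpy as np
from scipy import ndimage

def singular_value_drop(u_xt, angle_rad):
    """Rotate a 2D space-time field by `angle_rad` and return sigma_1 / sum(sigma),
    a crude proxy for how compactly the rotated data can be represented."""
    rotated = ndimage.rotate(u_xt, np.degrees(angle_rad), reshape=True,
                             order=1, mode='nearest')
    s = np.linalg.svd(rotated, compute_uv=False)
    return s[0] / np.sum(s)

def find_characteristic_angle(u_xt, angles):
    """Scan candidate rotation angles and return the one maximising the
    singular value drop, i.e. the dominant characteristic direction."""
    drops = [singular_value_drop(u_xt, a) for a in angles]
    return angles[int(np.argmax(drops))]

def dmd(snapshots, rank=None):
    """Standard (exact) DMD of a snapshot matrix whose columns are states in
    the rotated frame. Returns the dynamic modes and the eigenvalues of the
    best-fit linear operator advancing one step along the new time-like axis."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return modes, eigvals
```

In this simplified form, the rotation angle plays the role of the group velocity: once the optimal angle is found, the columns of the rotated data matrix serve as snapshots for the DMD along the characteristic direction.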
## 4 Results and discussions ### Direction search The main goal in the first step is to find the direction of characteristics along which the large-scale features of the flow can be described with fewest modes possible. The slope of the characteristics represents the group velocity \(u_{g}\) at which the large-scale features are being convected and is defined as the axial length-scale travelled per unit convective time defined as \(t_{c}=R/U_{b}\). In figure 1, space-time diagram is shown for three velocity components at wall-normal location \(y/R=0.5\) for one azimuthal location. The colourmap represents the corresponding velocity component normalised by bulk velocity. Although several group velocities can be observed for each component, one dominant group velocity can be perceived which corresponds to the energetic large-scale events. The dominant group velocity will get essentially smaller by moving closer to the wall and into the wall layer, and it will be larger in the outer layer and close to the pipe axis. A second observation is that the main group velocity appears to stay relatively constant for 50 \(t_{c}\) which is the time required to travel through the pipe once. The objective is to decompose the flow into modes which describe the complete velocity field. Therefore, the direction along which the decomposition is applied, should be chosen optimally for all velocity components. Optimality here is defined by detection of large-scale features using a minimal number of modes and is quantified by the drop of singular values along the characteristics. For each time-step the entire velocity field is stacked in one column vector which forms one of the columns of matrix \(M_{(N_{ph}\times N_{t})}\) with \(N_{ph}\) and \(N_{t}\) corresponding to the number of spatial points in physical space and time-steps respectively. The spatio-temporal rotation is then applied to \(M\) for a range of angles spaced 0.1 radian from each other. After each rotation, a singular value decomposition is carried out and the drop of singular values are recorded as shown in figure 2a. A piecewise cubic interpolation is then used to fit a curve to all the points and to find the maximum drop which is shown with a red marker for rotation angle of \(\theta_{g}=1.311\) corresponding to group velocity of \(u_{g}=1.06\,U_{b}=15.5\,u_{\tau}\). This group velocity is equal to the mean radial velocity found at wall-normal location \(y^{+}=1-(r/R)^{+}=44\). In figure 2b, \(u_{g}\) is annotated along with the mean radial velocity profile compared with the benchmark data by El Khoury _et al._ (2013). By rotating the matrix \(M\) by \(\theta_{g}\), the data will be transformed to a moving frame of reference with the direction of characteristics serving as the new time coordinate. We search for coherent structures in planes normal to the characteristics as they travel in space and time and undergo minimal changes while maintaining their coherence. Figure 1: Space-time diagrams for \(u\) (a), \(v\) (b) and \(w\) (c) normalised by bulk velocity \(U_{b}\) at wall-normal location \(y/R=0.5\). ### Decomposition and subspace detection Having detected the optimal group velocity, matrix \(M\) is formed using 500 timesteps with spatial resolution of \((900\times 60\times 143)\) in \((x,r,\theta)\) directions. To ensure that the dynamics of the modes are captured correctly, timestep of \(dt_{\text{CDMD}}=0.2\,t_{c}\) is chosen between the columns of \(M\). 
Therefore, each event moving at \(U_{b}\) propagates two times through the entire pipe. Transforming the data to spatio-temporal space and choosing the largest \(\xi-\tau\) window in the rotated frame of reference results in the snapshots matrix in spatio-temporal space \(X_{st}=\mathcal{T}(M)\), with \(N_{\xi}=290\) and \(N_{\tau}=843\) points along \(\xi\) and \(\tau\) respectively. A standard DMD is carried out to decompose \(X_{st}\) into the dynamic modes \(\mathbf{\phi}_{i}\) and their corresponding coefficients \(c_{i}(\tau)\) such that \(X_{st}=\Phi C\), where \(\Phi\) and \(C\) are matrices of dynamic modes and their coefficients for all timesteps. Continuous-time eigenvalues are transformed back to physical space with their real and imaginary parts representing decay rates and frequencies of the modes respectively in physical time. In figure 3, time-averaged mode coefficients normalised by their \(\mathcal{L}_{2}\) norm, dimensionless decay rates \(\hat{d}=d/(U_{b}/R)\) and frequencies \(\hat{f}=f/(U_{b}/R)\) are plotted with the modes being sorted by their decay rates. All the frequencies in spatio-temporal space are within the range of \(0\leqslant\hat{f}_{st}<2.5\) which corresponds to \(0\leqslant\hat{f}<10\) after transformation to physical space. Next, a subset of modes is to be selected constituting a subspace (subspace I) fulfilling certain criteria, which are chosen with the knowledge that for turbulent pipe flow at this Reynolds number there exists only one peak in the premultiplied energy spectra. The first criterion is that subspace I should accommodate energetic structures with large spatio-temporal length-scales. Therefore, it is expected to have a large contribution to the spectral peak, which is known as the footprint of large-scale structures in premultiplied energy spectra. Given the nature of coherent structures, the second criterion dictates that the modes in this subspace should not possess large decay rates. This is to ensure that energetic modes with short lifetimes will not be members of this subspace. Similarly, the candidate modes are expected to have small frequencies and not undergo strong oscillations. We hypothesize that the modes fulfilling the mentioned criteria are expected to possess another significant property. Due to the spatio-temporal coherence of the flow captured by these modes, they are expected to have major interactions with each other, but smaller interactions with the rest of the modes. We define this interaction in terms of the angle between the modes as well as the energy which is gained or lost by the flow as a result of the presence of any two modes in a subspace. This implies that the modes in subspace I are expected to form small angles with one another and larger angles with the remaining modes. Figure 2: Drop of singular values for a range of rotation angles (a) and mean radial velocity profile compared against the benchmark data (b). To calculate the energy of a subspace, we first consider subspace \(S\) comprised of two modes \(S=\{\mathbf{\phi}_{1},\mathbf{\phi}_{2}\}\) and coefficients matrix \(C_{S}\) with rows defined as \(c_{1}(\tau)\) and \(c_{2}(\tau)\) that can be used to obtain \(X_{S}=SC_{S}\). Columns of \(X_{S}\) and \(C_{S}\) can be used to write for each timestep \(\chi_{S}(\tau)=S\,\mathbf{c}_{S}(\tau)=\mathbf{\phi}_{1}c_{1}(\tau)+\mathbf{\phi}_{2}c_{2}(\tau)\).
The total energy of \(S\) integrated along \(\tau\) is then defined by \[E_{S}=\sum_{\tau=1}^{N_{\tau}}\mathbf{c}_{S}^{*}(\tau)\,S^{*}S\,\mathbf{c}_{S}(\tau), \tag{4.1}\] and the energy of \(S\) can be written for each time step \(\tau\) as \[\begin{split} E_{S}(\tau)&=\chi_{S}^{*}(\tau)\chi_{S}(\tau)=\mathbf{c}_{S}^{*}(\tau)\,S^{*}S\,\mathbf{c}_{S}(\tau)=\Big{(}c_{1}^{*}(\tau)\mathbf{\phi}_{1}^{*}+c_{2}^{*}(\tau)\mathbf{\phi}_{2}^{*}\Big{)}\Big{(}\mathbf{\phi}_{1}c_{1}(\tau)+\mathbf{\phi}_{2}c_{2}(\tau)\Big{)}\\ &=\underbrace{c_{1}^{*}(\tau)\mathbf{\phi}_{1}^{*}\mathbf{\phi}_{1}c_{1}(\tau)}_{E_{1}(\tau)}+\underbrace{c_{1}^{*}(\tau)\mathbf{\phi}_{1}^{*}\mathbf{\phi}_{2}c_{2}(\tau)+c_{2}^{*}(\tau)\mathbf{\phi}_{2}^{*}\mathbf{\phi}_{1}c_{1}(\tau)}_{E_{1|2}(\tau)}+\underbrace{c_{2}^{*}(\tau)\mathbf{\phi}_{2}^{*}\mathbf{\phi}_{2}c_{2}(\tau)}_{E_{2}(\tau)}.\end{split} \tag{4.2}\] The terms \(E_{1}(\tau)\) and \(E_{2}(\tau)\) in equation 4.2 correspond to the energy of modes \(\mathbf{\phi}_{1}\) and \(\mathbf{\phi}_{2}\) respectively at one timestep, and the term \(E_{1|2}(\tau)\) represents the energy added to or taken from \(X_{s}\) as a result of the interaction between \(\mathbf{\phi}_{1}\) and \(\mathbf{\phi}_{2}\). For modes that are orthogonal to each other, the term \(E_{1|2}\) vanishes, and for mode pairs with small angles, \(E_{1|2}\) can have large positive or negative values. Equation 4.2 can be generalised to the case where \(\mathbf{\phi}_{1}\) and \(\mathbf{\phi}_{2}\) are each separate subspaces. Figure 3: Dynamic mode amplitudes, decay rates and frequencies. To detect a subspace \(\text{I}_{n}\) with the \(n\) most energetic modes that represents the full-field energy with the fewest number of modes, and to observe how the subspace energy changes as the next energetic mode is added to it, the cumulative energy is calculated for the first \(n\) dominant modes, integrated along \(\tau\) and normalised by the total energy as \[\gamma_{I_{n}}=E_{I_{n}}/E_{\Phi}. \tag{4.3}\] and plotted in figure 4a for the first 50 modes. \(E_{I_{n}}\) represents the energy of subspace I possessing \(n\) modes integrated over time. A fast drop is observed for the first few modes added, where two minima are observed for 4 and 6 modes resulting in subspace energy close to 1 (\(\gamma_{I_{4}}=1.1\) and \(\gamma_{I_{6}}=0.9\)). Adding more modes increases the energy, but finally by having 11 modes, the subspace energy will drop again to \(\gamma_{I_{11}}=0.9\). It is clear that adding the next modes makes only minimal changes in the subspace energy. Next, the relative error is calculated for the reconstruction using subspace I\({}_{n}\) with \(n\) modes and the corresponding coefficients matrix \(C_{I_{n}}\) (equation 4.4). Three matrix norms have been used with \(p=\{1,\infty,F\}\) for the one-norm, infinity norm and Frobenius norm respectively. \[\epsilon_{n}=\|X-I_{n}C_{I_{n}}\|_{p}/\|X\|_{p}. \tag{4.4}\] As shown in figure 4b, all relative errors reach two minima for 4 and 6 modes, increase for 7 modes, and then drop strongly for 11 modes while changing minimally beyond that point. As depicted in figure 4c, these 11 modes have very small frequencies compared to the rest of the modes. The 11 candidate modes are highlighted with red bars in figure 3. They have a small mean decay rate of \(\hat{d}_{I}=0.022\) and undergo minimal oscillations with average frequency of \(\hat{f}_{I}=0.086\) in the range of \(0\leqslant\hat{f}\leqslant 0.2\).
All the remaining modes oscillate with larger frequencies \(0.22\leqslant\hat{f}\leqslant 9.86\), with the exception of mode 420, which has a large decay rate and small amplitude and therefore does not meet the criteria to be part of this subspace. Eight of the candidate modes have large amplitudes \(\tilde{c}/\|\tilde{C}\|\,\geqslant 0.1\) and the rest, in spite of having smaller amplitudes \(0.05\leqslant\tilde{c}/\|\tilde{C}\|\,\leqslant 0.07\), still possess much smaller frequencies compared to the rest of the modes. Therefore, based on the cumulative energy of subspace I, its relative error, mode amplitudes, their decay rates and frequencies, the first 11 dominant modes are chosen as members of subspace I. Having detected a subset of energetic modes matching the mentioned criteria, we verify the orthogonality of each member of this subset to modes residing inside and outside the subset. Mode-pair angles \(m_{33}\angle m_{i}\) are plotted as an example in figure 5a with blue markers showing the angles that mode 33 (one of the members of subspace I) makes with all the other modes. Filled blue markers correspond to subspace I members. It is readily seen that the majority of the modes outside subspace I are almost orthogonal to mode 33, as they accumulate close to \(\alpha=90^{\circ}\). On the other hand, the smallest angles are made with members of subspace I (\(m_{393}\) and \(m_{409}\)), indicating interaction of \(m_{33}\), which has a small decay rate, with two modes with rather larger decay rates. Figure 4: Cumulative energy (a), relative error (b) and frequencies (c) of the first n dominant modes. In figure 5b, mode-pair angles \(m_{237}\angle m_{i}\) and \(m_{280}\angle m_{i}\) are plotted. Here it is also observed that modes outside subspace I are mostly orthogonal to \(m_{237}\) and \(m_{280}\). These two modes appear to make small angles with one another and rather larger angles with the rest of the modes in subspace I. A similar behaviour exists for all the modes in subspace I. A small angle between two modes provides the potential for a large energy interaction. But as inferred from equation 4.2, the term \(E_{1|2}\) also depends on the mode coefficients besides the inner product of the two modes. Therefore, in the next step, integrated energy interactions \(\hat{E}_{i|k}\) are calculated between each mode (\(k\)) in subspace I and all the other modes (\(i\)), normalised by the total energy of the flow (with \(\hat{\cdot}\) denoting the normalisation). The results are plotted for \(\hat{E}_{i|237}\) in figure 6a with circles and diamond markers corresponding to positive and negative values respectively. Filled markers represent modes in subspace I, which clearly show the largest interactions with \(m_{237}\), some with positive and some with negative values. Apart from the contributions of the modes in subspace I (filled markers), two distinct regions also appear in this plot. A smaller number of modes can be seen at \(\hat{E}_{i|237}\geqslant 10^{-3}\) and the majority of them seem to be accumulated below this limit. This implies that there are certain modes outside subspace I which are interacting more than the rest with \(m_{237}\). These two regions appear for all the modes in subspace I, indicating the emergence of a second subspace, whose members are chosen based on how much energy they bring or take from the flow while interacting with subspace I.
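A compact sketch of how the mode-pair angles and the interaction energy of equation 4.2 can be evaluated is given below; the variable names and the random stand-in data are purely illustrative and do not correspond to the actual DNS modes.

```python
import numpy as np

def mode_angle(phi_i, phi_j):
    """Angle (degrees) between two complex dynamic modes given as column vectors."""
    c = np.abs(np.vdot(phi_i, phi_j)) / (np.linalg.norm(phi_i) * np.linalg.norm(phi_j))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

def interaction_energy(phi_i, c_i, phi_j, c_j):
    """Cross term E_{i|j} of equation 4.2 summed over all timesteps:
    sum_tau  c_i*(tau) phi_i^* phi_j c_j(tau) + c_j*(tau) phi_j^* phi_i c_i(tau),
    where c_i, c_j are the mode coefficient time series."""
    inner_ij = np.vdot(phi_i, phi_j)                       # phi_i^* phi_j
    cross = np.conj(c_i) * inner_ij * c_j + np.conj(c_j) * np.conj(inner_ij) * c_i
    return float(np.real(np.sum(cross)))

# Example with random stand-in data: two modes of length 1000, 50 timesteps.
rng = np.random.default_rng(0)
phi1, phi2 = rng.standard_normal((2, 1000)) + 1j * rng.standard_normal((2, 1000))
c1, c2 = rng.standard_normal((2, 50)) + 1j * rng.standard_normal((2, 50))
print(mode_angle(phi1, phi2), interaction_energy(phi1, c1, phi2, c2))
```

Normalising the summed cross term by the total snapshot energy yields the quantity \(\hat{E}_{i|k}\) discussed above.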
To detect the modes fitting in the new subspace, the term \(\hat{E}_{i|I}\) should be calculated for all the members of subspace I as \[\hat{E}_{i|I}=\frac{\sum\limits_{\tau=1}^{N_{\tau}}\sum\limits_{k=1}^{N_{I}} \left(c_{i}^{*}(\tau)\,\mathbf{\phi}_{i}^{*}\mathbf{\phi}_{k}\,c_{k}(\tau)+c_{k}^{*}(\tau)\,\mathbf{\phi}_{k}^{*}\mathbf{\phi}_{i}\,c_{i}(\tau)\right)}{\sum\limits_{\tau=1}^{N_{\tau}}\mathbf{c}^{*}(\tau)\,\mathbf{\Phi}^{*}\mathbf{\Phi}\,\mathbf{c}(\tau)}, \tag{4.5}\] with \(N_{I}\) being the number of modes in subspace \(I\), and \(\tau_{i}\) and \(\tau_{e}\) being the initial and last time step along \(\tau\) respectively. The vector calculated using equation 4.5 is sorted in descending order and modes in subspace I are excluded from the set in order to detect the largest contributions to subspace I. The cumulative energy interaction is then given for the first \(p\) dominant contributions by \[\gamma_{P,I}=\frac{\sum\limits_{j=1}^{P}\hat{E}_{j|I}}{N-N_{I}}, \tag{4.6}\] with \(N\) being the total number of modes, and is plotted in figure 6b. The cumulative energy contribution rises rapidly with the first 10 modes and reaches a saturation point after 60 modes, beyond which the energy does not change much by adding the remaining modes. Taking 67 modes, where a red dashed line is plotted, captures 98% of the total contribution (grey solid line). Figure 6: Normalised energy interactions between mode \(m_{237}\) and all the other modes (a), and cumulative energy interaction between subspace I and the rest of the modes (b). Having detected a second subspace, the remaining modes are grouped together to form the third subspace. Subspaces I, II and III, with 11, 67 and 346 modes respectively, amount to 3%, 15% and 82% of the total number of modes. Their total kinetic energy is calculated using equation 4.6, being equal to 97%, 15% and 2% of the snapshots energy for the first, second and third subspaces respectively. Subspace interactions I \(|\) II and II \(|\) III lead to 12% and 2% energy loss, whereas interactions between the first and third subspaces do not cause any overall energy gain or loss. Each subspace is then reconstructed along \(\tau\). To have a visual comparison between the full-field and the subspaces, space-time diagrams are plotted for the axial velocity component in figure 7 at radial location \(y=0.34R\) (\(y^{+}=61.5\)) for one azimuthal location. Comparing the full-field with subspace I in figures 7a and 7b, it can be seen that the large-scale flow patterns are reproduced very well, in spite of the fact that only 3% of the modes exist in this subspace. Magnitudes of negative and positive perturbations agree well with those of the full-field. Small-scale patterns are clearly missing from the reconstruction, as expected. Dominant structures in this subspace appear to remain stationary along the direction of \(\tau\). Subspace II in figure 7c accommodates small-scale patterns with perturbations which are considerably less energetic than those captured in subspace I. The emerging oblique patterns show that structures here have different group velocities compared to the dominant one. Some appear to move backwards relative to the moving frame of reference, indicating a slower convection velocity, whereas others move forward at higher velocity. The absence of strong vertical patterns in this figure shows that no energetic structure moving with the dominant group velocity is present in subspace II. Subspace III with 82% of the modes bears traces of some small-scale patterns similar to
those in subspace II, but is mainly populated with very small-scale structures. No dominant group velocity is observable in this subspace. Figure 7: Space-time diagrams in spatio-temporal space for the full-field (a) and subspaces I (b), II (c) and III (d), at wall-normal location \(y=0.34R\) and one azimuthal point. ### Subspaces in physical space Each reconstructed subspace is transformed back to physical space. In figure 8, iso-surfaces of the streamwise velocity component are shown for the full-field (figure 8a) and for each subspace. As in the space-time diagrams, the structures in subspace I (figure 8b) appear to be very similar to those in the full-field. This resemblance is observed in terms of where high and low momentum regions are located and also in terms of the amplitudes of perturbations (in both subfigures, iso-levels \(u=\pm 0.1\,U_{b}\) are plotted). Axial length-scales perceived from the large-scale structures in both figures agree well, and they will be examined in the next chapters in the premultiplied spectra. Subspace II in figure 8c, on the other hand, accommodates only smaller-scale structures with lower perturbation magnitudes (with iso-levels \(u=\pm 0.04\,U_{b}\)). The modes in this subspace were chosen based on the level of their interactions with subspace I causing large energy gains or losses. On the other hand, it was shown in figure 4 that the total energy of the flow will not change drastically beyond 11 modes. This implies that although the two subspaces have large energy interactions, the overall energy of subspace I remains relatively constant. Subspace III is plotted with iso-levels \(u=\pm 0.005\,U_{b}\) with two major length-scales being present in the flow, both of which are smaller than those present in the other subspaces. ### Limitations and constraints For two reasons, the statistical turbulence properties of the full-field diverge from those of the snapshots matrix in physical space \(X_{ph}=\mathcal{T}^{-1}(X_{st})\). The first reason is that, in order for the second order statistics to converge, 4000 data realisations recorded for 400 convective timesteps have been used, whereas taking the same number of timesteps for DMD was not possible due to memory limitations. The second reason is the linear interpolation used for the spatio-temporal transformations. The first limitation could only be partially removed using a streaming DMD (Hemati _et al._, 2014), and at the expense of truncating the singular values. As in this study it was intended to keep all non-zero singular values, a streaming DMD was not used. Employing higher-order interpolation schemes and using a larger number of timesteps substantially increases the computation time, especially at higher Reynolds numbers. It was also observed that the present setup does not bias the conclusions. Therefore, turbulence properties of the subspaces in subchapters 4.5 and 4.7 are presented using three references. The first two references are the full-field and the DNS data by El Khoury _et al._ (2013), which are compared against each other to validate the simulation results. \(X_{ph}\) serves as the third reference against which the subspaces are compared. In subchapter 4.6, the difference between the length scales in the snapshots and the full-field is compensated by applying the same correction to the snapshots and all subspaces. Figure 8: Iso-surfaces of axial velocity component of the full-field compared against each subspace.
Iso-levels for the full-field (a) and subspace I (b) are identical with yellow and blue corresponding to \(u=\pm 0.1\,U_{b}\). In subfigures (c) and (d) iso-levels of \(u=\pm 0.04\,U_{b}\) and \(\pm 0.005\,U_{b}\) are plotted respectively for subspaces II and III. ### Contribution to Reynolds stress tensor Contributions of each subspace to components of Reynolds stress tensor are calculated in physical space and are compared against the snapshots \(X_{ph}\) which is plotted in black solid lines in figure 9. This helps to verify whether the differences between subspace statistics and the full-field, are a result of the constraints mentioned in chapter 4.4, or a property of the flow represented by the corresponding subspace. The invariants of Reynolds stress tensor are also calculated for each subspace to provide a measure of how the entire tensor compares with that of the snapshots. The first invariant being equal to the turbulent kinetic energy is already reported in the previous chapter for each subspace. The remaining two are presented in this chapter. To ensure the accuracy and reliability of the simulated data, Reynolds stress components of the full-field are plotted in grey solid lines and are compared against the benchmark DNS data by El Khoury _et al._ (2013) plotted in red dashed lines in figure 9. Stress tensor components of the full-field agree very well with those of the benchmark with the peaks being located at \(y^{+}=[15,56,36,32]\) for \(\langle u^{2}\rangle\), \(\langle v^{2}\rangle\), \(\langle w^{2}\rangle\) and \(\langle uv\rangle\) respectively. Subspace I (sI), plotted in solid blue lines, shows substantial contributions to the stress components of the snapshots with average contribution of 98%. The wall-normal locations of the peaks coincide with those of the snapshots at \(y^{+}=[14,50,28,28]\) for \(\langle u^{2}\rangle\), \(\langle v^{2}\rangle\), \(\langle w^{2}\rangle\) and \(\langle uv\rangle\). The second and third stress tensor invariants of this subspace amount to 97% and 96% of those of the snapshots. Subspace II which appears with axial length-scales smaller than sI and larger than sIII (figure 8) is plotted in green solid lines. It contributes most to the radial stress component (22%) and least to the axial-radial one (6%) reaching the peak values at wall-normal locations \(y^{+}=28\) and \(y^{+}=14\) respectively. The peaks of axial and azimuthal components occur at \(y^{+}=14\) and \(y^{+}=16\) with 14% and 21% contributions to the corresponding components of snapshots. Except for the axial component, all the peaks in this subspace have moved clearly closer to the wall compared to the snapshots. The second and third invariants of the stress tensor of this subspace are equal to 2% and 0.4% of those of the snapshots respectively. Subspace III accommodating very small scale structures and represented by 82% of the modes has 3.3% average contribution to the diagonal Reynolds stress components and 0.1% to \(\langle uv\rangle\) reaching their maxima at \(y^{+}=[12,92,74,31]\) respectively. This subspace contributes less than 0.02% to the second and third invariants of stress tensor. ### Energy spectra The energy content of each length-scale is analysed for each subspace using premultiplied streamwise energy spectra of velocity auto correlations (\(\varphi_{uu}\), \(\varphi_{vv}\), \(\varphi_{ww}\)) and cross correlation (\(\varphi_{uv}\)) plotted in figure 10 for the snapshots in coloured contours and black contour lines. 
Blue, green and orange dashed contour levels represent subspaces I, II and III respectively, each normalised by the maximum of the snapshots spectra. Blue dashed lines and black solid contour lines correspond to the same levels annotated in black. Black circle and plus markers indicate spectral peaks of the snapshots and subspace I. Coloured plus markers point to the peak locations of the corresponding subspace and coloured labels indicate the respective contour levels. The horizontal dotted and dashed lines are plotted as a reference for the commonly accepted axial length-scales of LSMs at \(\lambda=2R\) and \(3R\) respectively. In the spectra of the axial velocity in figure 10a, all large-scale structures are captured in subspace I and maximum energy is found for wave-length \(\lambda^{+}=1006\) at \(y^{+}=13.8\). Smaller structures down to a length-scale of \(\lambda^{+}=300\) are also present in this subspace. The energy of wave-lengths \(\lambda^{+}\leqslant 300\) drops compared to the snapshots at \(3\leqslant y^{+}\leqslant 30\), where the solid black levels diverge from the dashed blue ones. Subspaces II and III appear with spectral peaks having smaller axial length-scales of \(\lambda^{+}=304\) and 97 at \(y^{+}=11.6\) and 9.5, with normalised peak energy of 0.37 and 0.02 respectively. The radial velocity component has the shortest axial wave-length compared to the other two components, as seen in figure 10b, with the main peak occurring at \(y^{+}=57\) for \(\lambda^{+}=201\) for the snapshots and sI. The peaks of subspaces II and III emerge with smaller wave-lengths of \(\lambda^{+}=134\) and 25 at \(y^{+}=24\) and 87, with normalised peak energy of 0.3 and 0.084 respectively. What can be observed in all subplots of figure 10 is that the spectral peaks found for subspace I coincide with those in the snapshots, in terms of their wall-normal locations and axial wave-lengths, with their energy content peaking on average at 99% of the snapshots spectral peaks. Subspace I has captured the large-scale energetic structures, and where its energy diverges from the snapshots, the next subspaces emerge with a peak. Spectral peaks in subspace II show a strong shift to the vicinity of the wall, although the shift is smaller for \(k_{x}\varphi_{uu}\). Subspace III appears in all spectral maps with two low-energy peaks, one below the main peak closer to the wall and one above it, with length-scales \(\lambda^{+}\leqslant 100\). The more energetic peak belongs to the one with the larger wave-length for \(\varphi_{uu}\) and \(\varphi_{uv}\), whereas for \(\varphi_{vv}\) and \(\varphi_{ww}\) it represents the smaller wave-length. Figure 9: Reynolds stress components of the full-field compared against those of the benchmark data, each subspace and the snapshots matrix. ### Anisotropy invariant map of the subspaces We study the structure of the turbulent flow in each subspace by investigating the invariants of the anisotropic Reynolds stress tensor \[a_{ij}=\frac{\tau_{ij}}{\tau_{kk}}-\frac{\delta_{ij}}{3}. \tag{4.7}\]
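For reference, the sketch below computes the anisotropy tensor of equation 4.7 and the barycentric map coordinates defined in equations 4.8-4.9 that follow, starting from a given 3x3 Reynolds stress tensor. The function name and the example tensor are illustrative only and do not reproduce the averaging applied to the DNS data.

```python
import numpy as np

def barycentric_coordinates(tau):
    """Anisotropy tensor a_ij = tau_ij / tau_kk - delta_ij / 3 (equation 4.7)
    and its barycentric map coordinates (x_B, y_B) following the construction
    of Banerjee et al. (2007). `tau` is a symmetric 3x3 Reynolds stress tensor."""
    a = tau / np.trace(tau) - np.eye(3) / 3.0
    lam = np.sort(np.linalg.eigvalsh(a))[::-1]   # lambda_1 >= lambda_2 >= lambda_3
    c1c = lam[0] - lam[1]                        # one-component weight
    c2c = 2.0 * (lam[1] - lam[2])                # two-component weight
    c3c = 3.0 * lam[2] + 1.0                     # isotropic weight
    # Vertices: x_1c = (1, 0), x_2c = (0, 0), x_3c = (1/2, sqrt(3)/2)
    x_b = c1c * 1.0 + c3c * 0.5
    y_b = c3c * np.sqrt(3.0) / 2.0
    return x_b, y_b

# Example with an illustrative, mildly anisotropic stress tensor.
tau = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.0],
                [0.0, 0.0, 0.8]])
print(barycentric_coordinates(tau))
```

Since the three weights sum to one by construction, every realisable stress state maps to a point inside the barycentric triangle spanned by the one-component, two-component and isotropic vertices.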
Moving away from the isotropic vertex on the blue edge corresponds to axi-symmetric contraction which ends up at the disc-like anisotropy at \(2c\) vertex. Alternatively, moving on the black edge towards the \(1c\) vertex corresponds to axi-symmetric expansion leading to needle-like anisotropy. The red edge connecting \(1c\) and \(2c\) vertices depicts the two-component limit. This map is defined using a linear combination of positive scalar metrics. These metrics are functions of eigen values of \(a_{ij}\) being sorted as \(\lambda_{1}\geqslant\lambda_{2}\geqslant\lambda_{3}\) and are used to defined the coordinate system \((x_{B},y_{B})\) given by \[x_{B}=C_{1c}x_{1c}+C_{2c}x_{2c}+C_{3c}x_{3c}=C_{1c}+C_{3c}\frac{1}{2}\,, \tag{4.8a}\] \[y_{B}=C_{1c}y_{1c}+C_{2c}y_{2c}+C_{3c}y_{3c}=C_{3c}\frac{\sqrt{3}}{2}\,, \tag{4.8b}\] where Figure 10: Premultiplied energy spectra of velocity auto correlations \(\varphi_{uu}\) (a), \(\varphi_{vv}\) (b), \(\varphi_{ww}\) (c) and cross correlation \(\varphi_{uv}\) (d). \[C_{1c}=\lambda_{1}-\lambda_{2}\,,\ \ C_{2c}=2(\lambda_{2}-\lambda_{3})\,,\ \ C_{3c}=3 \lambda_{3}+1\,. \tag{4.9}\] To have a measure of the total anisotropy, \(a_{ij}\) is calculated using temporal averaging and weighted spatial averaging over all radial points and the results are plotted in annotated black markers in figure 10(a). Subspaces I, II and III (shown with rectangle, diamond and triangle markers respectively) appear to be aligned on a line moving towards the isotropic state, with subspace III clearly being the most isotropic one. Snapshots averaged isotropy is plotted with a red circular marker being away from the full-field due to the fewer number of timesteps available in the snapshots. Subspace I is almost exactly on top of the red marker showing a similar anisotropy to the snapshots. The invariant map is plotted for the full-field and for each wall-normal location in figure 11. To clearly inspect the isotropy state at each wall layer, a different colourmap has been chosen to distinguish four wall layers. Grey colourmap is set for the viscous sublayer, blue for the buffer layer between the viscous and logarithmic layer (\(5\leqslant y^{+}\leqslant 30\)), green for the logarithmic layer and a heat colourmap is chosen for the overlap and outer layer (\(50\leqslant y^{+}\leqslant 181\)). The map starts for the full-field at the wall at the two-component limit in figure 10(a) and moves towards the one-component vertex. At \(y^{+}=10\) in the buffer layer a sharp bend is observed after which the trajectory moves towards the centre of the map where a second bend is reached at \(y^{+}\approx 83\) followed by a straight path towards the isotropic vertex at the centre of the pipe. The map for subspace I is plotted in figure 10(b) for each wall-normal location. The first bend takes place at the same location as the full-field whereas the second bend appears earlier at \(y^{+}=65\) followed by an S shaped movement towards the isotropic state. Similarly, subspace II starts on the wall on the two-component limit but closer to disc-like isotropy moving more Figure 11: Isotropy invariant map of the full-field (a) subspace I (b), subspace II (c) and subspace III (d). rapidly towards the \(1c\) vertex, reaching a softer bend at \(y^{+}=10\). After that, the trajectory follows a straight line approaching the axi-symmetric expansion limit where the second bend takes place moving away from the black edge at \(y^{+}\approx 83\). 
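(For reference, the barycentric coordinates used throughout this discussion can be evaluated as in the following short sketch. This is our own illustration of how (4.7)-(4.9) are applied, not the authors' implementation, and the function and variable names are ours.)

```python
import numpy as np

def barycentric_coordinates(tau):
    """Map a symmetric 3x3 Reynolds stress tensor to the barycentric
    anisotropy map of Banerjee et al. (2007), following eqs. (4.7)-(4.9)."""
    tau = np.asarray(tau, dtype=float)
    a = tau / np.trace(tau) - np.eye(3) / 3.0        # anisotropy tensor, eq. (4.7)
    lam = np.sort(np.linalg.eigvalsh(a))[::-1]       # eigenvalues, lam1 >= lam2 >= lam3
    c_1c = lam[0] - lam[1]                           # eq. (4.9)
    c_2c = 2.0 * (lam[1] - lam[2])
    c_3c = 3.0 * lam[2] + 1.0
    # Vertices: x_1c = (1, 0), x_2c = (0, 0), x_3c = (1/2, sqrt(3)/2)
    x_b = c_1c + 0.5 * c_3c                          # eq. (4.8a)
    y_b = 0.5 * np.sqrt(3.0) * c_3c                  # eq. (4.8b)
    return x_b, y_b

# Limiting states as sanity checks:
# barycentric_coordinates(np.eye(3))          -> (0.5, 0.866...)  (3c, isotropic vertex)
# barycentric_coordinates(np.diag([1, 0, 0])) -> (1.0, 0.0)       (1c vertex)
# barycentric_coordinates(np.diag([1, 1, 0])) -> (0.0, 0.0)       (2c vertex)
```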
A third bend is reached at \(y^{+}\approx 120\) after which the flow approaches the isotropic state close to the pipe axis. Subspace III shows a very different behaviour starting on the two-component limit and moving towards the disc-like anisotropy where it almost reaches the \(2c\) vertex in the viscous sublayer at \(y^{+}=1.3\). A very soft bend takes place at \(7.5\leqslant y^{+}\leqslant 14\) followed by a path towards the isotropic state. In the overlap layer at \(57\leqslant y^{+}\leqslant 65\) the trajectory gets closest to the isotropic state after which it departs towards the axi-symmetric contraction limit at the pipe axis. ## 5 Summary and conclusions ### Summary A characteristic DMD is carried out on DNS data of turbulent pipe flow at \(Re_{b}=5300\) decomposing three velocity components along the characteristics of the flow corresponding to the group velocity of \(u_{g}=1.1\,U_{b}\). Three subspaces are extracted and their contributions to stream wise energy spectra and components of Reynolds stress tensor are investigated along with the anisotropy invariant maps to compare the structure of turbulence in each subspace. Subspace I being the most energetic one is comprised of 11 modes (3% of the total modes) and is detected based on three main criteria: the mode amplitudes, cumulative energy of the constituent modes and the relative error with respect to the snapshots matrix. This subspace undergoes minimal oscillations in space and time having a very small average frequency of \(\hat{f}_{I}=0.086\) and it decays slowly with the mean decay rate of \(\hat{d}_{I}=0.022\). The modes in this subspace form small angles with one another and larger ones with the rest of the modes, indicating large energy interactions inside the subspace and small interactions with the rest of the modes while maintaining the total kinetic energy of the subspace. The axial wave lengths and wall normal locations of the spectral peaks in this subspace coincide accurately with those of the snapshots. Subspace I contributes 97% to the turbulent kinetic energy and 99% to the \(\langle uv\rangle\) component of Reynolds stress tensor. Subspace II is detected with 67 modes (15% of the total modes) based on cumulative energy interactions of its members to subspace I. This subspace oscillates almost 25 times faster than subspace I with average frequency of \(\hat{f}_{II}=2.1\) and decays almost twice faster with average decay rate of \(\hat{d}_{II}=0.043\). The spectral peaks of sII in all axial energy spectra appear closer to the wall in the buffer layer with the peak value amounting to \(30\%-40\%\) of the snapshots peak. Only small scale flow features are present in this subspace with spectral peaks corresponding to maximum wave length of \(\lambda^{+}=304\) for the stream wise component and minimum of \(\lambda^{+}=134\) for the radial one. The peaks of Reynolds stress profiles emerge closer to the wall for all the components having total contribution of 6% to \(\langle uv\rangle\) component of Reynolds stress tensor. This subspace contributes 15% to kinetic energy while its interactions with subspaces I and III causes 12% and 2% of energy loss respectively. There are length scales which are present in sI and sII implying that all structures with the same length scales do not necessarily have the same contributions to Reynolds stress tensor or kinetic energy. The remaining 346 modes (82%) constitute subspace III having only minimal energy contributions to the first two subspaces. 
It oscillates on average more than twice as fast as subspace II, with a frequency of \(\hat{f}_{III}=5.61\), and decays with an average decay rate of \(\hat{d}_{III}=0.016\). Only very small-scale flow features with low energy levels are observed here; they oscillate fast but are persistent in space and time. This subspace contributes 2% to the kinetic energy and has close to no overall interaction with subspace I. It contributes 0.1% of \(\langle uv\rangle\) and shows the most isotropic behaviour among all the subspaces, especially in the overlap layer, together with a distinct disc-like anisotropy in the viscous sublayer.

### Conclusions

The wave-like definition of coherent structures in a characteristic frame of reference in transport-dominated turbulent flows proves to be very efficient for the two following reasons. First, the main features of pipe flow at \(Re_{b}=5300\) are captured accurately with only 3% of the modes, which form an almost orthogonal subspace to the rest of the modes. This subspace reproduces 97% of the turbulent kinetic energy of the full flow and more than 96% of the invariants of the Reynolds stress tensor. Its spectral signature matches the snapshots in terms of wall-normal locations and wavelengths of the premultiplied energy spectra. The second reason is that the remaining modes can be further divided into two subspaces based on their cumulative energy contributions to subspace I. The third subspace accommodates very small scales with short turn-over times and persists as a turbulent background motion. The second subspace lives in between subspaces I and III, having faster decay rates than the other two, with its spectral peak length-scales being substantially smaller than those of the first and substantially larger than those of the third subspace. We speculate that at higher Reynolds numbers more modes would be needed to build subspace I and that the scale separation between the subspaces would increase. We base this speculation on the fact that the flow becomes more complex and a wider range of group velocities would be present in the flow.

## Funding

This joint study was part of the Priority Programme SPP 1881 Turbulent Superstructures of the Deutsche Forschungsgemeinschaft and funded by grant no. SE 824/33-1 (J.S. and A.Sh.) and grant no. EG100/24-2 (Ch. E.).

**Acknowledgements.** All simulations in this study have been carried out on the Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen (HLRN) with project id bbi00011, using the code of our project partners within the SPP 1881 (grant no. AV120/3-2).

**Declaration of interests.** The authors report no conflict of interest.
2309.16480
A Liouville theorem of VT-harmonic map heat flow
We prove a Liouville theorem for the backward VT-harmonic map heat flow from evolving manifolds into a generalized regular ball. Among other results, we also prove a Liouville theorem for the VT-harmonic map heat flow from complete manifolds into a generalized regular ball.
Xiangzhi Cao
2023-09-28T14:44:39Z
http://arxiv.org/abs/2309.16480v1
# A Liouville theorem of \(VT\)-harmonic map heat flow

###### Abstract

We prove a Liouville theorem for the backward \(VT\)-harmonic map heat flow from evolving manifolds into a generalized regular ball. Among other results, we also prove a Liouville theorem for the \(VT\)-harmonic map heat flow from complete manifolds into a generalized regular ball.

_Keywords and phrases_: Dirichlet problem, Heat flow, \(VT\)-harmonic map.

_MSC 2010_: 58E15, 58E20, 53C27

## 1 Introduction

It is well known that harmonic maps have a long history. The harmonic map problem is a classical variational problem and a central topic in nonlinear geometric analysis. Harmonic maps admit many generalizations, such as V-harmonic maps [4], Hermitian harmonic maps ([10]), which are a particular case of V-harmonic maps, affine harmonic maps ([9]), Dirac-harmonic maps ([3]), etc. In this paper, we consider another kind of generalized map, introduced by Chen et al. in 2020:

**Definition 1** (VT-harmonic map, cf. [5]).: Let \((M,g)\) be a compact manifold with boundary and \((N,h)\) a compact Riemannian manifold. A map \(u:(M,g)\rightarrow(N,h)\) is called a \(VT\)-harmonic map iff \(u\) satisfies

\[\tau_{V}u+Tr_{g}T(du,du)=0, \tag{1.1}\]

where \(\tau_{V}u=\tau(u)+du(V)\), \(\tau(u)=Tr_{g}(Ddu)\), \(V\in\Gamma(TM)\), \(T\in\Gamma(\otimes^{1,2}TN)\).

**Remark 1**.: It is obvious that if \(T\equiv 0\), then \(u\) is just a \(V\)-harmonic map ([4]); thus \(V\)-harmonic maps are a special case of \(VT\)-harmonic maps, and Hermitian harmonic maps are in turn a particular case of \(V\)-harmonic maps. For special choices of \(V\) and \(T\), the \(VT\)-harmonic map equation has wide application to other geometric problems.

**Remark 2**.: Liouville theorems for \(VT\)-harmonic maps are rare. In [2], we obtained a Liouville type theorem for \(VT\)-harmonic maps into a horoball. The method in [5] cannot be used directly to obtain a Liouville theorem. Our main contribution in this paper is therefore to derive a Liouville theorem for \(VT\)-harmonic maps into a generalized regular ball (see Definition 3).

Jost-Yau [10] investigated the existence of Hermitian harmonic maps from Hermitian manifolds into compact nonpositively curved Riemannian manifolds by using the heat flow method. The study of such maps is more difficult than that of harmonic maps, since in general no variational structure is available. The absence of a variational structure often leads to the lack of a monotonicity inequality, which is a challenge for studying blow-up behaviour (e.g. the energy identity) and for proving existence using heat flows.

In the first part of this paper, we deal with evolving manifolds. We recall some definitions related to the backward Ricci flow. A smooth manifold \((M,g(t))\), \(t\in I\), with a time-dependent Riemannian metric is called a Ricci flow when

\[\partial_{t}g=-2\,\mathrm{Ric} \tag{1.2}\]

which was introduced by Hamilton. A supersolution to (1.2) is called a super Ricci flow. Namely, \((M,g(t))\), \(t\in I\), is called a super Ricci flow if

\[\partial_{t}g\geq-2\,\mathrm{Ric},\]

a notion introduced by McCann-Topping [22] from the viewpoint of optimal transport theory. Examples of super Ricci flows include the harmonic Ricci flow ([23]), List's flow ([21]), mean curvature flow for spacelike hypersurfaces in Lorentzian manifolds of non-negative sectional curvature, and the (scaled) twisted Kähler-Ricci flow. One can refer to [1, 6, 7, 11, 12, 15, 20, 24, 25] for further studies on super Ricci flows. When \(t\leq 0\), a super Ricci flow is termed an ancient super Ricci flow.
For a function \(u:(M,g)\times(-\infty,0]\to\mathbf{R}\), the equation

\[\frac{\partial}{\partial t}u=\Delta_{g}u\]

is called the ancient heat equation. The evolving manifold \((M,g(\tau))\), \(\tau\geq 0\), is called a backward super Ricci flow if

\[\partial_{\tau}g\leq 2\,\mathrm{Ric}.\]

For a function \(u:(M,g)\times[0,\infty)\to\mathbf{R}\), the equation

\[\frac{\partial}{\partial\tau}u+\Delta_{g}u=0\]

is called the backward heat equation. An ancient super Ricci flow and the ancient heat equation are transformed into a backward super Ricci flow and the backward heat equation by reversing the time parameter. Wang [26] studied a Liouville theorem for the ancient heat equation. The backward harmonic map heat flow is a generalization of the backward heat equation; however, there are few studies on it. Guo et al. [8] considered a Liouville theorem for the backward harmonic map heat flow along the backward super Ricci flow. Kunikawa et al. [13, 14] used a truncation function constructed from the reduced distance to obtain gradient estimates for backward harmonic maps along the backward super Ricci flow, and thereby obtained a Liouville theorem; the use of this truncation function in the proof of the gradient estimates is quite novel. In this paper, we continue to use the techniques of these two papers [13, 14] and extend the corresponding results from harmonic maps to \(VT\)-harmonic maps. However, unlike Kunikawa et al. [13, 14], we require the image of the map to lie in a generalized regular ball of the target manifold. We generalize condition (B) of [18, 17, 16, 19] as follows.

**Definition 2** (Condition C).: Let \(N\) be a complete Riemannian manifold and let \(\Omega\) be a bounded open subset of \(N\). We say \(\Omega\) satisfies condition \((C)\) if there exists a positive function \(f\in C^{2}(\Omega)\) satisfying the conditions

\[\begin{cases}-\nabla^{2}f-f(\frac{s_{0}-1}{s_{0}}\kappa+\frac{1}{4\varepsilon_{1}}\|\nabla T\|_{L^{\infty}}^{2}+\frac{1}{\varepsilon_{2}}\|T\|_{L^{\infty}}^{2})h\geq Qh\\ 0<m_{1}(\Omega)\leq f(y)\leq m_{2}(\Omega)<\infty\\ |\nabla f(y)|\leq m_{3}(\Omega)<\infty\end{cases} \tag{1.3}\]

for all \(y\in\Omega\), where \(\kappa(y)=\sup\{K(y,\pi)\,|\,K(y,\pi)\) is the sectional curvature of a plane \(\pi\subset T_{y}N\}\), \(Q>\frac{m_{3}^{2}}{2m_{1}}\), \(\epsilon_{1},\epsilon_{2}\) are two small positive constants, \(s_{0}=\min\{m,n\}\), and \(m_{1},m_{2},m_{3}\) are suitable positive constants.

**Remark 3**.: When \(T=0\), this reduces to condition (B) of [18, 17, 16, 19], which is used there to define generalized regular balls.

**Definition 3** (cf. [18, 17, 16, 19]).: If \(\Omega\) satisfies condition (C) and there is a nonnegative convex function \(f^{*}\) on \(\Omega\) such that \(\Omega=\left(f^{*}\right)^{-1}\left([0,r)\right)\), then \(\Omega\) is called a generalized regular ball.

**Remark 4**.: A regular ball is an example of a generalized regular ball. It is hard to obtain a Liouville theorem for \(VT\)-harmonic maps into a regular ball because of the extra terms involving \(T\) and \(V\); for a generalized regular ball, Definition 3 makes such a Liouville theorem accessible. This is the motivation of this paper.
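Two elementary illustrations may help fix ideas (we add them here for the reader's convenience; they are not taken from [5] or [18, 17, 16, 19]). First, for a real-valued function \(u:(M,g)\to\mathbf{R}\) one has \(\tau(u)=\Delta u\) and \(du(V)=\langle V,\nabla u\rangle\), while a tensor \(T\in\Gamma(\otimes^{1,2}T\mathbf{R})\) is determined by a single function \(c\), so that \(Tr_{g}T(du,du)=c(u)|\nabla u|^{2}\) and (1.1) becomes

\[\Delta u+\langle V,\nabla u\rangle+c(u)|\nabla u|^{2}=0,\]

which for \(T\equiv 0\) and \(V=\nabla\phi\) is the equation of a weighted (\(\phi\)-)harmonic function. Second, when \(T\equiv 0\) (so the terms involving \(\|T\|_{L^{\infty}}\) and \(\|\nabla T\|_{L^{\infty}}\) in (1.3) drop out), the classical regular ball provides an example of condition (C): let \(B_{R_{0}}(p)\subset N\) be a geodesic ball with \(\sec\leq\kappa\), \(\kappa>0\), \(R_{0}<\frac{\pi}{2\sqrt{\kappa}}\), and take \(f=\cos(\sqrt{\kappa}\rho)\), where \(\rho\) is the distance to \(p\). The Hessian comparison theorem gives \(\nabla^{2}\rho\geq\sqrt{\kappa}\cot(\sqrt{\kappa}\rho)(h-d\rho\otimes d\rho)\), hence

\[-\nabla^{2}f=\kappa\cos(\sqrt{\kappa}\rho)\,d\rho\otimes d\rho+\sqrt{\kappa}\sin(\sqrt{\kappa}\rho)\,\nabla^{2}\rho\geq\kappa f\,h,\]

so that \(-\nabla^{2}f-\frac{s_{0}-1}{s_{0}}\kappa f\,h\geq\frac{\kappa}{s_{0}}f\,h\geq\frac{\kappa m_{1}}{s_{0}}h\), and the first inequality in (1.3) holds with \(Q=\frac{\kappa m_{1}}{s_{0}}\), \(m_{1}=\cos(\sqrt{\kappa}R_{0})\), \(m_{2}=1\), \(m_{3}=\sqrt{\kappa}\sin(\sqrt{\kappa}R_{0})\); the requirement \(Q>\frac{m_{3}^{2}}{2m_{1}}\) is then met once \(R_{0}\) is small enough that \(\tan^{2}(\sqrt{\kappa}R_{0})<\frac{2}{s_{0}}\).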
In this paper, we firstly consider the following backward \(VT\)-harmonic map heat flow which is the generaliztion of backward harmonic map heat flow: \[\begin{cases}&\frac{\partial u}{\partial\tau}+\tau_{V}u+Tr_{g}T(du,du)=0,\quad on \quad M.\\ &u:M\times[0,\infty)\to\Omega\subset N.\\ &\partial_{\tau}g\leq 2\,\mathrm{Ric}\end{cases} \tag{1.4}\] where \(\Omega\) is the generalized regular ball in \(N.\) Guo-Philipowski-Thalmaier[8] have approached this problem for backward harmonic map heat flow from stochastic analytic viewpoint. Here we aim to approach the problem from Perelmans reduced geometric viewpoint. We say that \((M,g(\tau)),\tau\in[0,\infty)\) is admissible(cf. [14, subsection 1.2]) if for every \(\tau>0\) there is \(c_{\tau}\geq 0\) depending only on \(\tau\) such that \(h\geq c_{\tau}g\) on \([0,\tau]\). As said in [14], the admissibility ensures that the \(L\)-distance is achieved by a minimal \(L\)-geodesic. Now we are in a postiton to state our first main result **Theorem 1**.: _Let \((M^{m},g(\tau))_{\tau\in[0,\infty)}\) be an admissible complete backward super Ricci flow. Let \((N^{n},h)\) be a complete Riemannian manifold with \(\sec\leq\kappa\) for \(\kappa>0\). We denote by \(h_{0}\) the funtion \(\frac{1}{2}\frac{\partial g(\tau)}{\partial\tau}\). We assume_ \[\mathcal{D}(V)\geq 0,\mathcal{H}(V)\geq-\frac{H}{\tau},Ric_{V}-h_{0}\geq-K, \left\|T\right\|_{\infty}<\frac{2Q}{m_{3}}-\frac{m_{3}}{m_{1}},\] _for all vector fields \(V\), \(K\geq 0,H\geq 0\). Here, one can refer to (2.2) for the definition of \(\mathcal{D}(V),\mathcal{H}(V)\), the constants \(Q,m_{1},m_{3}\) are the same as that in definition 2. Let \(Q_{R,\Lambda}:=\{(x,\tau)|\mathfrak{d}(x,\tau)\leq R\}\), here the function \(\mathfrak{d}\) refers to \(\sqrt{4\tau\ell(x,\tau)}\), where the distance function \(\ell(x,\tau)\) is defined in (2.1). Let \(u:M\times[0,\infty)\to N\) be a solution to backward \(VT\)-harmonic map heat flow (1.4) such that the image of \(u\) is contained in \(\Omega\subset N\), Then on \(Q_{R/2,\Lambda/4}\), for any \(0\leq\Lambda<\infty\)_ \[\sup_{Q_{R/2,\Lambda/4}}\frac{|du|^{2}}{f^{2}}(x,\tau)\] \[\leq \frac{C_{2}}{C_{1}-5\varepsilon-m_{3}^{2}} \tag{1.5}\] \[+\frac{1}{C_{1}-5\varepsilon-m_{3}^{2}}\bigg{(}\frac{C_{3/4}^{2}} {\varepsilon}\left(m^{2}+\frac{9}{4}\right)\frac{1}{R^{4}}+\frac{D^{2}}{4 \varepsilon}\frac{1}{\Lambda^{2}}+\frac{C_{3/4}^{2}}{4\varepsilon}K^{2}\] \[+\frac{9C_{3/4}^{4}}{\varepsilon}\frac{1}{R^{4}}+\frac{243C_{3/4 }^{4}}{16}\frac{1}{m_{3}^{2}}\frac{1}{R^{4}}+\frac{1}{4\epsilon}\|V\|_{\infty} C_{1/2}^{2}\frac{1}{R^{2}}\bigg{)}^{\frac{1}{2}}\] _Here the costant \(C_{1}=2Qm_{1}-\|T\|_{L^{\infty}}m_{3}m_{1}\), \(C_{2}=2(K+\epsilon_{1})-\frac{8\epsilon_{3}-1}{4\epsilon_{3}}(\frac{m_{3}}{m_ {1}})^{2},2\epsilon_{3}=1-\epsilon_{2}\). The constants \(D,C_{3/4}\) is defined in Lemma 3, the constant \(\epsilon\) is some small positive constant, the function \(f\) is the defining function of the domain \(\Omega\), the constant \(m_{3}\) is that defined in Definition 2. 
The constants \(\epsilon_{1},\epsilon_{2}\) are suitable small postive constants defined in (3.3)_ **Remark 5**.: Along admissible complete backward super Ricci flow, the condition \(Ric_{V}-h_{0}\geq-K\) is implied by the conditon \(\frac{1}{2}L_{V}g\geq-K.\)__ Choosing some \(\epsilon_{1},\epsilon_{2}\), letting \(R,\Lambda\rightarrow\infty\) in Theorem 1 and seeing that the function \(f\) is upper bounded, we get **Corollary 1.1**.: _In the situations of Theroem 1, let \(u:M\times[0,\infty)\to N\) be a solution to backward VT-harmonic map heat flow (1.4) such that the image of \(u\) is contained in \(\Omega\subset N\), then the map \(u\) is constant for some choice of \(\epsilon_{1},\epsilon_{2}\)._ **Remark 6**.: Compared to the main theorems in [13, 14], condition on the map \(u\) near infinity in Corollary 1.1 is not required, since the image of the map cosidered in this paper is not in regular ball, but rather in generalized regular ball. The method of gradient estimates in Theorem 1 was inspired by [18], in order to get new estimates and nontrivial generalization. In the second part of this paper, we also considered the following heat flow combining method of [5] and [18, 17, 16]: \[\left\{\begin{array}{rl}&\frac{\partial u}{\partial t}=\tau_{V}u+Tr_{g}T(du,du),\quad on\quad M.\\ &u=u_{0},\qquad\quad on\qquad M\times\{0\}.\\ &u:M\times[0,T_{max})\rightarrow\Omega.\end{array}\right. \tag{1.6}\] where \(u_{0}:M\times[0,T_{max})\rightarrow\Omega\), here \(\Omega\) is the generalized regular ball. About this problem (1.6), the second result in this paper is as follows: **Theorem 2**.: _Let \(\left(M^{m},g\right)\) and \(\left(N^{n},h\right)\) be two complete Riemannian manifolds. Let \(x_{o}\in M\) and \(r(x)\) be the distance function from \(x_{o}\), we use the notations \(B_{R}\left(x_{o}\right)=\left\{x\in M|r(x)\leq R\right)\), in addition, we assume that \(Ric_{V}\geq-A,A\geq 0.\) Moreover, suppose that \(\Omega\subset N\) satisfies condition \(\left(B\right)\). Assume taht \(\left\langle V,\nabla r\right\rangle\leq v(r)\) for some nondecreasing function \(v(\cdot)\) and \(\left\|T\right\|_{\infty}<\frac{2Q}{m_{3}}\). If \(u(x,t)\) is a solution of equation (1.6) on \(B_{R}\left(x_{0}\right)\times\left[0,T_{1}\right),u\left(B_{R}\left(x_{0} \right)\times\left[0,T_{1}\right)\right)\subset\Omega\) and \(B_{R}\left(x_{0}\right)\cap\partial M=\phi\), then for \(0<\Lambda<T_{1}\), we have_ \[\sup_{B_{R/2}\left(x_{0}\right)}|\nabla u(x,t)|\leq m_{2}\left(\frac{C_{0}^{ \frac{1}{2}}m_{3}}{K_{2}R}+C_{4}\frac{1}{\sqrt{K_{2}}}\left(\sqrt{K_{1}}+ \sqrt{\frac{1}{R}}+\frac{1}{R}\right)+\frac{1}{\sqrt{K_{2}}\Lambda^{1/2}} \right), \tag{1.7}\] _and_ \[\sup_{B_{R/2}\left(x_{0}\right)}|\nabla u(x,t)|\leq\frac{m_{2}}{m_{1}}\sup_{B _{R}\left(x_{0}\right)}|\nabla u_{0}|+m_{2}\left(\frac{\frac{2m\sqrt{C_{0}}}{ R}+\sqrt{\frac{2m\sqrt{C_{0}}}{R^{2}}+4K_{2}\left(K_{1}+\frac{C_{2}+2C_{0}}{R^{2}}+ \frac{C_{3}}{R}\right)}}{2K_{2}}\right), \tag{1.8}\] _for all \(0<t<T_{1}\). 
Here \(K_{1}=2(A+\epsilon_{1})-\frac{3-4\epsilon_{2}}{2(1-\epsilon_{2})}(\frac{m_{3}} {m_{1}})^{2}\), \(K_{2}=2Qm_{1}-\|T\|_{L^{\infty}}m_{3}m_{1}\), \(C_{0}>0\) is a universal constant defined in (4.7), \(C_{1}=\sqrt{(m-1)A},C_{2}=C_{0}+\sqrt{C_{0}}(m-1),C_{3}=v(a)+C_{1},C_{4}=\max(C _{2}+2C_{0},C_{3}).\) The constants \(\epsilon_{1},\epsilon_{2}\) are suitable small postive constants defined in (3.3)_ **Corollary 1.2**.: _In the situations of Theroem 2, let \(u:M\times\left[0,\infty\right)\to N\) be a solution to \(VT\)-harmonic map heat flow (1.6) such that the image of \(u\) is contained in \(\Omega\subset N\), then the map \(u\) is constant for some choice of \(\epsilon_{1},\epsilon_{2}\)._ As in Theroem 2, if we take the function \(F\) directly instead of the function \(\lambda F\), slightly adapting the proof of Theroem 2, we will have **Corollary 1.3**.: _Let \(\left(M^{m},g\right)\) be closed manifold. Let \(N,\Omega,V\) be the same situations in Theorem 2. If \(u(x,t)\) is a solution of equation (1.6) on \(M\times\left[0,T_{1}\right),u\left(M\times\left[0,T_{1}\right)\right)\subset\Omega\), then for \(0<t<T_{1}\), we have_ \[|\nabla u(x,t)|\leq m_{2}\left(C_{4}\frac{1}{\sqrt{K_{2}}}\sqrt{K_{1}}+\frac{ 1}{\sqrt{K_{2}}t^{1/2}}\right),\] _and_ \[|\nabla u(x,t)|\leq\frac{m_{2}}{m_{1}}\sup_{B_{R}\left(x_{0}\right)}|\nabla u( x,0)|+m_{2}\sqrt{\frac{K_{1}}{K_{2}}}, \tag{1.9}\] _for all \(0<t<T_{1}\). Here \(K_{1}=2(A+\epsilon_{1})-\frac{3-4\epsilon_{2}}{2(1-\epsilon_{2})}(\frac{m_{3}}{m_{ 1}})^{2}\), \(K_{2}=2Qm_{1}-\|T\|_{L^{\infty}}m_{3}m_{1}\), \(C_{0}>0\) is a universal constant defined in (4.7), \(C_{1}=\sqrt{(m-1)A},C_{2}=C_{0}+\sqrt{C_{0}}(m-1),C_{3}=v(a)+C_{1},C_{4}=\max(C _{2}+2C_{0},C_{3}).\) The constants \(\epsilon_{1},\epsilon_{2}\) are suitable small postive constants defined in (3.3)_ For \(VT\)-harmonic map, from Theroem 2 and Corollary 1.3, we get **Corollary 1.4**.: _Let \(M,N,\Omega,V\) be the same situations in Theorem 2. If \(u(x)\) is \(VT\)-harmonic map from \(B_{R}(x_{0})\) into \(\Omega\), then for \(0<\Lambda<T_{1}\), we have_ \[\sup_{B_{R/2}(x_{0})}|\nabla u|\leq m_{2}\left(\frac{C_{0}^{\frac{1}{2}}m_{3} }{K_{2}R}+C_{4}\frac{1}{\sqrt{K_{2}}}\left(\sqrt{K_{1}}+\sqrt{\frac{1}{R}}+ \frac{1}{R}\right)\right),\] _If \(u(x)\) is \(VT\)-harmonic map from \(M\) into \(\Omega\),then we have_ \[\sup_{B_{R/2}(x_{0})}|\nabla u|\leq m_{2}\sqrt{\frac{K_{1}}{K_{2}}}, \tag{1.10}\] _Here \(K_{1}=2(A+\epsilon_{1})-\frac{3-4\epsilon_{2}}{2(1-\epsilon_{2})}(\frac{m_{3 }}{m_{1}})^{2}\), \(K_{2}=2Qm_{1}-\|T\|_{L^{\infty}}m_{3}m_{1}\), \(C_{0}>0\) is a universal constant defined in (4.7), \(C_{1}=\sqrt{(m-1)A},C_{2}=C_{0}+\sqrt{C_{0}}(m-1),C_{3}=v(a)+C_{1},C_{4}=\max( C_{2}+2C_{0},C_{3}).\) The constants \(\epsilon_{1},\epsilon_{2}\) are suitable small postive constant defined in (3.3)_ By a similar method as in Theorem 2, we can get **Corollary 1.5**.: _Let \(M\) be a complete Riemannian manifold with Ricci curvature bounded below by_ \[\mathrm{Ric}_{\mathrm{V}}\geq A\geq 0,\] _Let \(N,\Omega,V\) be in the same situations in Theorem 2. If \(u(x)\) is a \(VT\)-harmonic map from \(M\) into \(\Omega\) and \(A\geq\frac{\epsilon_{1}}{2}-\frac{3-4\epsilon_{2}}{4(1-\epsilon_{2})}\left( \frac{m_{3}}{m_{1}}\right)^{2},\) then \(u\) is constront._ Next we slightly mention the case when the domain manifold is compact manifold with boundary. 
Concretely speaking, we consider the following initial boundary problem \[\begin{cases}&\frac{\partial u}{\partial t}=\tau_{V}u+Tr_{g}T(du,du),\quad on \quad M.\\ &u=u_{0},\qquad\quad on\qquad\partial M\times[0,T_{max});\\ &u=u_{0},\qquad\quad on\qquad M\times\{0\}.\\ &u:M\times[0,T_{max})\rightarrow\Omega.\end{cases} \tag{1.11}\] where \(u_{0}:M\times[0,T_{max})\rightarrow\Omega\), here \(\Omega\) is the generalized regular ball. We have the following result without giving its proof. **Corollary 1.6**.: _Let \((M^{m},g)\) be compact manifold with boundary. Let \(N,\Omega,V\) be the same situations in Theorem 2. If \(u(x,t)\) is a solution of equation (1.11) on \(M\times\left[0,T_{1}\right),u\left(M\times\left[0,T_{1}\right)\right)\subset\Omega\), then for \(0<t<T_{1}\), we have_ \[\left|\nabla u(x,t)\right|\leq\frac{m_{2}}{m_{1}}\sup_{M}\left|\nabla u_{0} \right|+m_{2}\sqrt{\frac{K_{1}}{K_{2}}}, \tag{1.12}\] _Here \(K_{1}=2(A+\epsilon_{1})-\frac{3-4\epsilon_{2}}{2(1-\epsilon_{2})}(\frac{m_{3} }{m_{1}})^{2}\), \(K_{2}=2Qm_{1}-\|T\|_{L^{\infty}}m_{3}m_{1}\), \(C_{0}>0\) is a universal constant defined in (4.7), \(C_{1}=\sqrt{(m-1)A},C_{2}=C_{0}+\sqrt{C_{0}}(m-1),C_{3}=v(a)+C_{1},C_{4}=\max(C _{2}+2C_{0},C_{3}).\) The constants \(\epsilon_{1},\epsilon_{2}\) are suitable small postive constant defined in (3.3)_ This paper is organized as follows: In section 2, we recalled some backgound of backward Ricci flow and gives some lemmas used in the subsequent section. In section 3, we proved Theroem 1. In section 4, we prove Theorem 2. ## 2 Preliminary Let \((M,g(\tau))_{\tau\in[0,\infty)}\) be an \(m\)-dimensional, complete time-dependent Riemannian manifold. For a curve \(\gamma:[\tau_{1},\tau_{2}]\to M\), its \(\mathcal{L}\)-length is defined as \[\mathcal{L}(\gamma):=\int_{\tau_{1}}^{\tau_{2}}\sqrt{\tau}\left(H+\left\| \frac{d\gamma}{d\tau}\right\|^{2}\right)d\tau.\] It is well-known that its critical point over all curves with fixed endpoints is characterized by the following \(\mathcal{L}\)-geodesic equation: \[X:=\frac{d\gamma}{d\tau},\quad\nabla_{X}X-\frac{1}{2}\nabla H+\frac{1}{2\tau} X+2h(X)=0.\] For \((x,\tau)\in M\times(0,\infty)\), the \(L\)-distance \(L(x,\tau)\) and reduced distance \(\ell(x,\tau)\) from a space-time base point \((x_{0},0)\) are defined by \[L(x,\tau):=\inf_{\gamma}\mathcal{L}(\gamma),\ell(x,\tau):=\frac{1}{2\sqrt{ \tau}}L(x,\tau), \tag{2.1}\] Where the infimum is taken over all curves \(\gamma:[0,\tau]\to M\) with \(\gamma(0)=x_{0}\) and \(\gamma(\tau)=x\). A curve is called minimal \(\mathcal{L}\)-geodesic from \((x_{0},0)\) to \((x,\tau)\) if it attains the infimum of (2.1). Hereafter, we use the following notations: \[\begin{cases}\bar{L}(x,\tau):=4\tau\ell(x,\tau),\\ \mathcal{D}(V):=-\partial_{\tau}H-\Delta H-2\|h\|^{2}+4\operatorname{div}h(V)-2 g(\nabla H,V)+2\operatorname{Ric}(V,V)-2h(V,V),\\ \mathcal{H}(V):=-\partial_{\tau}H-\frac{H}{\tau}-2g(\nabla H,V)+2h(V,V),\\ R(V):=\operatorname{Ric}(V,V)-h(V,V).\end{cases} \tag{2.2}\] The function \(\mathcal{D}(V)\) is refered to as M\(\ddot{u}\)ller qunantity. We now assume that \((M,g(\tau))_{\tau\in[0,\infty)}\) is admissible (see Subsection 1.2). In this case, for every \((x,\tau)\in M\times(0,\infty)\), there exists at least one minimal \(\mathcal{L}\)-geodesic. Also, the functions \(L(\cdot,\tau)\) and \(L(x,\cdot)\) are locally Lipschitz in \((M,g(\tau))\) and \((0,\infty)\), respectively; in particular, they are differentiable almost everywhere. 
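For orientation, consider the static case (an illustration we add here; it plays no role in the proofs): if \(g(\tau)\equiv g\) is independent of \(\tau\) (so \(h\equiv 0\)) and \(H\equiv 0\), then \(\mathcal{L}(\gamma)=\int_{0}^{\bar{\tau}}\sqrt{\tau}\,\|\frac{d\gamma}{d\tau}\|^{2}d\tau\), and the substitution \(s=2\sqrt{\tau}\) turns this into the energy \(\int_{0}^{2\sqrt{\bar{\tau}}}\|\frac{d\gamma}{ds}\|^{2}ds\). Minimizing over curves from \(x_{0}\) to \(x\) gives

\[L(x,\bar{\tau})=\frac{d^{2}(x_{0},x)}{2\sqrt{\bar{\tau}}},\qquad\ell(x,\bar{\tau})=\frac{d^{2}(x_{0},x)}{4\bar{\tau}},\qquad\sqrt{4\bar{\tau}\,\ell(x,\bar{\tau})}=d(x_{0},x),\]

so in this case the function \(\mathfrak{d}=\sqrt{4\tau\ell}\) appearing in Theorem 1 reduces to the Riemannian distance from \(x_{0}\).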
Assume that \(\ell\) is smooth at \((\bar{x},\bar{\tau})\in M\times(0,\infty)\). We have

**Lemma 1** (cf. [13, 14]).: _Let \(K\geq 0\). We assume_

\[\mathcal{D}(V)\geq-2K\left(H+\|V\|^{2}\right),\quad H\geq 0,\]

_for all vector fields \(V\). Then at \((\bar{x},\bar{\tau})\) we have_

\[\left(\Delta+\partial_{\tau}\right)\bar{L}\leq 2m+2K\bar{L}.\]

**Lemma 2** (cf. [13, 14]).: _We assume_

\[\mathcal{H}(V)\geq-\frac{H}{\tau},\quad H\geq 0,\]

_for all vector fields \(V\). Then at \((\bar{x},\bar{\tau})\) we have_

\[\|\nabla\mathfrak{d}\|^{2}\leq 3.\]

_Here, the function \(\mathfrak{d}\) refers to \(\sqrt{4\tau\ell(x,\tau)}\)._

**Lemma 3** (cf. Lemma 4.4 in [14]).: _Let \(R,\Lambda>0\) and \(\alpha\in(0,1)\). Then there is a smooth function \(\psi:[0,\infty)\times[0,\infty)\to[0,1]\) which is supported on \([0,R]\times[0,\Lambda]\), and constants \(C_{\alpha},D>0\), with \(C_{\alpha}\) depending only on \(\alpha\), such that the following hold:_

_(1) \(\psi\equiv 1\) on \([0,R/2]\times[0,\Lambda/4]\);_

_(2) \(\partial_{\tau}\psi\leq 0\) on \([0,\infty)\times[0,\infty)\), and \(\partial_{r}\psi\equiv 0\) on \([0,R/2]\times[0,\infty)\);_

_(3) we have_

\[\frac{|\partial_{r}\psi|}{\psi^{\alpha}}\leq\frac{C_{\alpha}}{R},\quad\frac{|\partial_{r}^{2}\psi|}{\psi^{\alpha}}\leq\frac{C_{\alpha}}{R^{2}},\quad\frac{|\partial_{\tau}\psi|}{\psi^{1/2}}\leq\frac{D}{\Lambda},\]

_where \(D>0\) is a universal constant._

In the sequel, the constants \(\epsilon,\epsilon_{1},\epsilon_{2},\epsilon_{3}\), etc. are used to denote small positive constants. The constants \(C,C_{1},C_{3/4}\), etc. which occur in the inequalities may differ from line to line. The energy density is \(e(u)=|du|^{2}\). We hope that the reader will not find these notations confusing.

## 3 Proof of Theorem 1

We first fix the notation. We denote by \(h_{0}\) the tensor \(\frac{1}{2}\frac{\partial g(\tau)}{\partial\tau}\) in order to distinguish it from the metric \(h\) of the manifold \(N\). The indices \(\alpha,\beta\), etc. range from \(1\) to \(m\). Let \(e_{1},e_{2},\cdots,e_{m}\) be a local orthonormal frame field of the domain manifold \(M\). In this section, constants such as \(C_{1/2},C_{3/4}\) are those defined in Lemma 3.
Since \(u\) is the solution of backward \(VT\)-harmonic heat flow, \[\frac{\partial u}{\partial\tau}+\tau_{V}u+Tr_{g}T(du,du)=0,\] Computing directly, we have \[\frac{\partial}{\partial\tau}|du|^{2}=-\langle du(h_{0}(e_{i})),du(e_{i}) \rangle+\langle\nabla_{e_{\alpha}}(\frac{\partial u}{\partial\tau})\rangle, du(e_{\alpha})\rangle, \tag{3.1}\] We can also deduce Bochner type formula for the backward \(VT\)-harmonic map heat flow, \[\frac{1}{2}(\triangle+\frac{\partial}{\partial\tau})|du|^{2}= |\nabla du|^{2}-\langle R^{N}(du(e_{\alpha}),du(e_{\beta}))du(e_{ \alpha}),du(e_{\beta})\rangle\] \[+\langle du(Ric^{M}(e_{\alpha})),du(e_{\alpha})\rangle+\langle \nabla_{e_{\alpha}}(\tau(u)),du(e_{\alpha})\rangle\] \[-\langle du(h(e_{\alpha})),du(e_{\alpha})\rangle+\langle\nabla_{e _{\alpha}}(\frac{\partial u}{\partial\tau})\rangle,du(e_{\alpha})\rangle.\] However, \[\langle\nabla_{e_{\alpha}}(\tau(u)),du(e_{\alpha})\rangle+ \langle\nabla_{e_{\alpha}}(\frac{\partial u}{\partial\tau})\rangle,du(e_{ \alpha})\rangle \tag{3.2}\] \[=-\langle\nabla_{e_{\alpha}}(du(V)+Tr_{g}T(du,du),du(e_{\alpha})\rangle\] \[=-\frac{1}{2}V|du|^{2}-\frac{1}{2}\langle du(L_{V}g(e_{i})),du(e_ {i})\rangle-\langle\nabla_{e_{\alpha}}Tr_{g}T(du,du),du(e_{\alpha})\rangle,\] and \[\langle\nabla_{e_{\alpha}}Tr_{g}T(du,du),du(e_{\alpha})\rangle= \sum_{\alpha,\beta=1}^{m}\left\langle\left(\nabla_{e_{\alpha}}T\right)\left( du\left(e_{\beta}\right),du\left(e_{\beta}\right)\right),du\left(e_{\alpha} \right)\right\rangle\] \[-\sum_{\alpha,\beta=1}^{m}\left\langle 2T\left(\left(\nabla_{e_{ \alpha}}du\right)\left(e_{\beta}\right),du\left(e_{\beta}\right)\right),du \left(e_{\alpha}\right)\right\rangle,\] Taking the above formula into count, we have \[\begin{split}&\frac{1}{2}(\triangle_{V}+\frac{\partial}{\partial \tau}|du|^{2}\\ &=|\nabla du|^{2}-\langle R^{N}(du(e_{\alpha}),du(e_{\beta}))du(e_{ \alpha}),du(e_{\beta})\rangle\\ &+\langle du((Ric_{V}^{M}-h_{0})(e_{\alpha})),du(e_{\alpha}) \rangle-\sum_{\alpha,\beta=1}^{m}\left\langle\left(\nabla_{e_{\alpha}}T\right) \left(du\left(e_{\beta}\right),du\left(e_{\beta}\right)\right),du\left(e_{ \alpha}\right)\right\rangle\\ &+\sum_{\alpha,\beta=1}^{m}\left\langle 2T\left(\left(\nabla_{e_{ \alpha}}du\right)\left(e_{\beta}\right),du\left(e_{\beta}\right)\right),du \left(e_{\alpha}\right)\right\rangle,\end{split}\] By the formula in [5], \[\begin{cases}\sum_{\alpha,\beta}R^{N}\left(du\left(e_{\alpha}\right),du\left(e _{\beta}\right),du\left(e_{\alpha}\right),du\left(e_{\beta}\right)\right)\leq \frac{s_{0}-1}{s_{0}}\kappa|du|^{4},\\ |\langle\left(\nabla_{e_{\alpha}}T\right)\left(du\left(e_{\beta} \right),du\left(e_{\beta}\right)\right),du\left(e_{\alpha}\right)\rangle|\leq \varepsilon_{1}e(u)+\frac{1}{4\varepsilon_{1}}\|\nabla T\|_{L^{\infty}}^{2}e( u)^{2},\\ |\langle 2T\left(\left(\nabla_{e_{\alpha}}du\right)\left(e_{ \beta}\right),du\left(e_{\beta}\right)\right),du\left(e_{\alpha}\right)\rangle |\leq\varepsilon_{2}|\nabla du|^{2}+\frac{1}{\varepsilon_{2}}\|T\|_{L^{\infty} }^{2}e(u)^{2}.\end{cases} \tag{3.3}\] Here \(\epsilon_{1},\epsilon_{2}\) are suitable small positive constant, \(s_{0}:=\min\{m,n\}\). 
Using the assumption \(Ric_{V}-h_{0}\geq-K\) and the above estimates, we have \[\begin{split}&\frac{1}{2}(\Delta_{V}+\frac{\partial}{\partial \tau})|du|^{2}\\ &\geq(1-\epsilon_{2})|\nabla du|^{2}-(K+\epsilon_{1})e(u)-( \frac{s_{0}-1}{s_{0}}\kappa+\frac{1}{4\varepsilon_{1}}\|\nabla T\|_{L^{ \infty}}^{2}+\frac{1}{\varepsilon_{2}}\|T\|_{L^{\infty}}^{2})e(u)^{2}.\end{split} \tag{3.4}\] Let the function \(f\) be the function in Definition 2 and \(\omega=\frac{|du|^{2}}{f^{2}}\), a routine computation will give \[\begin{split}\left(\Delta_{V}+\frac{\partial}{\partial\tau} \right)\omega=&\frac{\left(\Delta_{V}+\frac{\partial}{\partial \tau}\right)|du|^{2}}{f^{2}}-2\frac{\left(\Delta_{V}+\frac{\partial}{\partial \tau}\right)f(u(x,\tau))|du|^{2}}{f^{3}}\\ &-4\frac{\nabla f\nabla|du|^{2}}{f^{3}}+6\frac{|\nabla f|^{2}| du|^{2}}{f^{4}}.\end{split} \tag{3.5}\] Computing directly, one has \[\left(\Delta_{V}+\frac{\partial}{\partial\tau}\right)f(u(x,t))=\nabla^{2}(f) (du,du)-\langle Tr_{g}T(du,du),\nabla f\rangle. \tag{3.6}\] Substituting (3.6) and (3.4) into (3.5), we may have \[\left(\Delta_{V}+\frac{\partial}{\partial\tau}\right)\omega\] \[\geq -2(K+\epsilon_{1})\frac{|\nabla u|^{2}}{f^{2}}+2(1-\epsilon_{2}) \frac{|\nabla du|^{2}}{f^{2}}\] \[-2\frac{(\frac{s_{0}-1}{s_{0}}\kappa+\frac{1}{4\epsilon_{1}}\| \nabla T\|_{L^{\infty}}^{2}+\frac{1}{\varepsilon_{2}}\|T\|_{L^{\infty}}^{2})e (u)^{2}}{f^{2}}-2\frac{\left(\Delta+\frac{\partial}{\partial\tau}\right)f(u(x, \tau))|du|^{2}}{f^{3}}\] \[-2\frac{\nabla f\cdot\nabla|du|^{2}}{f^{3}}+2\frac{|\nabla f|^{2} |du|^{2}}{f^{4}}-2\nabla\omega\cdot\frac{\nabla f}{f}.\] But, since the domain \(\Omega\) satisfies conditon (B), \[-2\frac{(\frac{s_{0}-1}{s_{0}}\kappa+\frac{1}{4\epsilon_{1}}\| \nabla T\|_{L^{\infty}}^{2}+\frac{1}{\varepsilon_{2}}\|T\|_{L^{\infty}}^{2})e (u)^{2}}{f^{2}}-2\frac{\left(\Delta+\frac{\partial}{\partial\tau}\right)f(u(x, \tau))|du|^{2}}{f^{3}}+2Q\frac{|du|^{4}}{f^{3}}\] \[\geq\frac{\langle Tr_{g}T(du,du),\nabla f\rangle}{f^{3}}+2Q\frac {|du|^{4}}{f^{3}}\] \[\geq-\|T\|_{L^{\infty}}\frac{\|\nabla f\|\|du\|^{4}}{f^{3}}+2Q \frac{|du|^{4}}{f^{3}}.\] The Hlder's inequality implies \[4\frac{|\nabla du||du||\nabla f|}{f^{3}}\leq 4\epsilon_{3}\frac{|\nabla du|^{2} }{f^{2}}+\frac{1}{4\epsilon_{3}}\frac{|\nabla f|^{2}|du|^{2}}{f^{4}},\] and it is trivial to see \[|\nabla|du|^{2}\,|\leq 2|\nabla du||du|\] Taking \(2\epsilon_{3}=1-\epsilon_{2}\) and substituting the last two inequalities into (3.7), we have \[\left(\Delta+\frac{\partial}{\partial\tau}\right)\omega\geq C_{1}\omega^{2}- 2\nabla\omega\cdot\frac{\nabla f}{f}-C_{2}\omega\] where \(C_{1}=2Qm_{1}-\|T\|_{L^{\infty}}m_{3}m_{1},C_{2}=2(K+\epsilon_{1})-\frac{8 \epsilon_{3}-1}{4\epsilon_{3}}(\frac{m_{3}}{m_{1}})^{2}\). 
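Let us record why the leading coefficient is positive (a short check we add for the reader): the assumption \(\|T\|_{\infty}<\frac{2Q}{m_{3}}-\frac{m_{3}}{m_{1}}\) in Theorem 1 gives \(2Q-\|T\|_{L^{\infty}}m_{3}>\frac{m_{3}^{2}}{m_{1}}\), hence

\[C_{1}=m_{1}\left(2Q-\|T\|_{L^{\infty}}m_{3}\right)>m_{3}^{2},\]

so that \(C_{1}-5\varepsilon-m_{3}^{2}>0\) once \(\varepsilon\) is chosen sufficiently small, as used in (3.14) below.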
We choose the function \(\psi\) which is defined in Lemma 3, then we get \[\left(\Delta_{V}+\frac{\partial}{\partial\tau}\right)(\psi\omega )-2\frac{\langle\nabla(\psi\omega),\nabla\psi\rangle}{\psi}+\frac{\langle \nabla(\psi\omega),\nabla f\rangle}{f}\] \[= \psi\left(\Delta_{V}+\frac{\partial}{\partial\tau}\right)(\omega )+\omega\left(\Delta_{V}+\frac{\partial}{\partial\tau}\right)(\psi)-2|\nabla \psi|^{2}\frac{\omega}{\psi}+\langle\psi\nabla\omega,\nabla\log f\rangle+ \langle\psi\nabla\omega,\nabla\log f\rangle\] \[\geq C_{1}\psi\omega^{2}-2\psi\nabla\omega\cdot\frac{\nabla f}{f}-C_{2 }\psi\omega+\omega\left(\Delta_{V}+\frac{\partial}{\partial\tau}\right)\psi-2| \nabla\psi|^{2}\frac{\omega}{\psi}\] \[+2\langle\psi\nabla\omega,\nabla\log f\rangle+2\langle\omega \nabla\psi,\nabla\log f\rangle\] \[= C_{1}\psi\omega^{2}-C_{2}\psi\omega+\omega\left(\Delta_{V}+\frac{ \partial}{\partial\tau}\right)\psi-2\frac{|\nabla\psi|^{2}}{\psi}\omega+2\langle \omega\nabla\psi,\nabla\log f\rangle. \tag{3.7}\] Now we can estimate the last three terms on the right hand of (3.7). By the estimates in [13, 14], we have \[\omega\left(\Delta+\frac{\partial}{\partial\tau}\right)(\psi)\leq 4 \varepsilon\psi\omega^{2}+\frac{C_{3/4}^{2}}{\varepsilon}\left(m^{2}+\frac{9} {4}\right)\frac{1}{R^{4}}+\frac{D^{2}}{4\varepsilon}\frac{1}{\Lambda^{2}}+ \frac{C_{3/4}^{2}}{4\varepsilon}K^{2}. \tag{3.8}\] Here and in the sequel \(\epsilon\) denotes a small positive constant. This formula is derived using Lemma 1 and Lemma 2. One can refer to [13, 14] for details. Next, by Young's inequality, we get \[\omega\langle V,d\psi\rangle=\omega\sqrt{\psi}\langle V,\frac{d \psi}{\sqrt{\psi}}\rangle\leq\epsilon\psi w^{2}+\frac{1}{4\epsilon}\|V\|_{ \infty}\frac{|d\psi|^{2}}{\psi}\] \[\leq\epsilon\psi w^{2}+\frac{1}{4\epsilon}\|V\|_{\infty}C_{1/2}^ {2}\frac{1}{R^{2}}\] In addition, \[\frac{2w|\nabla\psi|^{2}}{\psi}\leq\varepsilon\psi w^{2}+\frac{|\nabla\psi|^{ 4}}{\varepsilon\psi^{3}}\leq\varepsilon\psi w^{2}+\frac{9C_{3/4}^{4}}{ \varepsilon}\frac{1}{R^{4}}, \tag{3.9}\] and \[-2\frac{wg\left(\nabla\psi,\nabla\left(f\circ u\right)\right)}{f \circ u}\leq\frac{2w|\nabla\psi|\left|\nabla\left(f\circ u\right)\right|}{f \circ u}\leq 2m_{3}w^{3/2}|\nabla\psi|\] \[\leq m_{3}^{2}\psi w^{2}+\frac{27}{16}\frac{1}{m_{3}^{2}}\frac{| \nabla\psi|^{4}}{\psi^{3}}\leq m_{3}^{2}\psi w^{2}+\frac{243C_{3/4}^{4}}{16} \frac{1}{m_{3}^{2}}\frac{1}{R^{4}}, \tag{3.10}\] where we have used the bound \(f\leq m_{3}\) and the definition of \(\omega\), the constants \(C_{3/4},D\) are the same as that in Lemma 3. \[C_{1}\psi\omega^{2}-C_{2}\psi\omega\leq \left(\Delta+\frac{\partial}{\partial\tau}\right)(\psi\omega)-2 \frac{\langle\nabla(\psi\omega),\nabla\psi\rangle}{\psi}+\frac{\langle\nabla (\psi\omega),\nabla f\rangle}{f} \tag{3.11}\] \[+\left(4\varepsilon\psi\omega^{2}+\frac{C_{3/4}^{2}}{\varepsilon }\left(m^{2}+\frac{9}{4}\right)\frac{1}{R^{4}}+\frac{D^{2}}{4\varepsilon} \frac{1}{\Lambda^{2}}+\frac{C_{3/4}^{2}}{4\varepsilon}K^{2}.\right)\] \[+\epsilon\psi w^{2}+\frac{1}{4\epsilon}\|V\|_{\infty}C_{1/2}^{2} \frac{1}{R^{2}}\] \[+\left(\varepsilon\psi w^{2}+\frac{9C_{3/4}^{4}}{\varepsilon} \frac{1}{R^{4}}\right)+\left(m_{3}^{2}\psi w^{2}+\frac{243C_{3/4}^{4}}{16} \frac{1}{m_{3}^{2}}\frac{1}{R^{4}}\right),\] where the constants \(C_{3/4},D\) are as above. 
We can suppose the reduced distance is smooth at maximal point \((\bar{x},\bar{\tau})\) of \(\psi\omega\), thus at the point \((\bar{x},\bar{\tau})\), we have \[\Delta(\psi\omega)\leq 0,\partial_{\tau}(\psi\omega)\leq 0,\nabla(\psi\omega)=0, \tag{3.12}\] Hence, \[\begin{split}&\left(C_{1}-6\varepsilon-m_{3}^{2}\right)\psi^{2} \omega^{2}-C_{2}\psi\omega\\ \leq&\left(\frac{C_{3/4}^{2}}{\varepsilon}\left(m^{ 2}+\frac{9}{4}\right)\frac{1}{R^{4}}+\frac{D^{2}}{4\varepsilon}\frac{1}{T^{2 }}+\frac{C_{3/4}^{2}}{4\varepsilon}K^{2}.\right)\\ &+\left(\frac{9C_{3/4}^{4}}{\varepsilon}\frac{1}{R^{4}}\right)+ \frac{243C_{3/4}^{4}}{16}\frac{1}{m_{3}^{2}}\frac{1}{R^{4}}+\frac{1}{4 \epsilon}\|V\|_{\infty}C_{1/2}^{2}\frac{1}{R^{2}},\end{split} \tag{3.13}\] where \(C_{1}=2Qm_{1}-\|T\|_{L^{\infty}}m_{3}m_{1},C_{2}=2(K+\epsilon_{1})-\frac{8 \epsilon_{3}-1}{4\epsilon_{3}}(\frac{m_{3}}{m_{1}})^{2},\varepsilon_{3}=4(1- \varepsilon_{2})\), the \(\epsilon,\epsilon_{1}\) is same that in (3.3),(3.8),(3.9),(3.10). It is easy to see that we can choose special \(\epsilon\) such that \(C_{1}-5\varepsilon-m_{3}^{2}>0\), Let \(Q_{R,\Lambda}:=\{(x,\tau)|\mathfrak{d}(x,\tau)\leq R\}\). Since \(\psi=1\), on \(Q_{R/2,\Lambda/4,\theta}:=\{(x,\tau)\in Q_{R/4,\Lambda/2}|\tau\in[\theta, \Lambda/4]\}\}\), The quadratic formula immplies that \[\begin{split}\omega(x,\tau)\leq&\frac{C_{2}}{C_{1} -5\varepsilon-m_{3}^{2}}\\ &+\frac{1}{C_{1}-5\varepsilon-m_{3}^{2}}\bigg{(}\frac{C_{3/4}^{2 }}{\varepsilon}\left(m^{2}+\frac{9}{4}\right)\frac{1}{R^{4}}+\frac{D^{2}}{4 \varepsilon}\frac{1}{\Lambda^{2}}+\frac{C_{3/4}^{2}}{4\varepsilon}K^{2}\\ &+\frac{9C_{3/4}^{4}}{\varepsilon}\frac{1}{R^{4}}+\frac{243C_{3/ 4}^{4}}{16}\frac{1}{m_{3}^{2}}\frac{1}{R^{4}}+\frac{1}{4\epsilon}\|V\|_{ \infty}C_{1/2}^{2}\frac{1}{R^{2}}\bigg{)}^{\frac{1}{2}}\end{split} \tag{3.14}\] Letting \(\theta\to 0\), the proof is complete. ## 4 Proof of Theorem 2 In this section, some constants are different from the previous section, e.g, \(C_{1},C_{2},\varepsilon_{1}\), etc. We hope that this the readers will not find confusing. Proof.: Let \(\omega(x,t)=\frac{|\nabla u(x,t)|^{2}}{f^{2}(u(x,t))}.\) The first step is to estimate \(\left(\Delta_{V}-\frac{\partial}{\partial t}\right)\omega\). 
A calculation shows that \[\nabla\omega=\frac{\nabla|\nabla u|^{2}}{f^{2}}-2\frac{\nabla f|\nabla u|^{2} }{f^{3}}, \tag{4.1}\] \[\Delta\omega=\frac{\Delta|\nabla u|^{2}}{f^{2}}-4\frac{\nabla f\nabla|\nabla u|^{2}} {f^{3}}-2\frac{\Delta f|\nabla u|^{2}}{f^{3}}+6\frac{|\nabla f|^{2}|\nabla u|^{2 }}{f^{4}}, \tag{4.2}\] and \[\frac{\partial\omega}{\partial t}=\frac{\frac{\partial}{\partial t}|\nabla u|^{ 2}}{f^{2}}-2\frac{\frac{\partial f}{\partial t}|\nabla u|^{2}}{f^{3}}.\] Combing all together gives \[\begin{split}\left(\Delta-\frac{\partial}{\partial t}\right) \omega=&\frac{\left(\Delta-\frac{\partial}{\partial t}\right)| \nabla u|^{2}}{f^{2}}-2\frac{\left(\Delta-\frac{\partial}{\partial t}\right)f (u(x,t))|\nabla u|^{2}}{f^{3}}\\ &-4\nabla f\frac{\nabla|\nabla u|^{2}}{f^{3}}+6\frac{|\nabla f|^ {2}|\nabla u|^{2}}{f^{4}},\end{split} \tag{4.3}\] We recall Weitzenbck type formula for \(VT\) harmonic map given in [5], \[\begin{split}\frac{1}{2}(\Delta_{V}-\frac{\partial}{\partial t} )|du|^{2}=&|\nabla du|^{2}+\sum_{\alpha=1}^{m}\left\langle du \left(\text{Ric}_{V}\left(e_{\alpha}\right)\right),du\left(e_{\alpha}\right) \right\rangle\\ &-\sum_{\alpha,\beta=1}^{m}R^{N}\left(du\left(e_{\alpha}\right),du \left(e_{\beta}\right),du\left(e_{\alpha}\right),du\left(e_{\beta}\right) \right)\\ &-\sum_{\alpha,\beta=1}^{m}\left\langle\left(\nabla_{e_{\alpha}}T \right)\left(du\left(e_{\beta}\right),du\left(e_{\beta}\right)\right),du \left(e_{\alpha}\right)\right\rangle\\ &-\sum_{\alpha,\beta=1}^{m}\left\langle 2T\left(\left(\nabla_{e_{ \alpha}}du\right)\left(e_{\beta}\right),du\left(e_{\beta}\right)\right),du \left(e_{\alpha}\right)\right\rangle,\end{split}\] where \(e_{1},e_{2},\cdots,e_{m}\) is a focal orthonormal frame field of the domain manifold. Noticing that \(Ric_{V}\geq-A\) and (3.3), we further have \[\begin{split}&\frac{1}{2}(\Delta_{V}-\frac{\partial}{\partial t })|du|^{2}\\ &\geq(1-\epsilon_{2})|\nabla du|^{2}-(A+\epsilon_{1})e(u)-(\frac{ s_{0}-1}{s_{0}}\kappa+\frac{1}{4\varepsilon_{1}}\|\nabla T\|_{L^{\infty}}^{2}+ \frac{1}{\varepsilon_{2}}\|T\|_{L^{\infty}}^{2})e(u)^{2},\end{split} \tag{4.4}\] where the constants \(s_{0},\epsilon_{1},\epsilon_{2}\) are the same as that in (3.3). 
Plugging the above formula into (4.3), we get \[\begin{split}\left(\Delta_{V}-\frac{\partial}{\partial t}\right) \omega\geq&-2(A+\epsilon_{1})\frac{|\nabla u|^{2}}{f^{2}}+2(1- \epsilon_{2})\frac{|\nabla du|^{2}}{f^{2}}\\ &-2\frac{(\frac{s_{0}-1}{s_{0}}\kappa+\frac{1}{4\varepsilon_{1}} \|\nabla T\|_{L^{\infty}}^{2}+\frac{1}{\varepsilon_{2}}\|T\|_{L^{\infty}}^{2}) e(u)^{2}}{f^{2}}-2\frac{\left(\Delta-\frac{\partial}{\partial t}\right)f(u(x,t))| \nabla u|^{2}}{f^{3}}\\ &-2\frac{\nabla f\cdot\nabla|\nabla u|^{2}}{f^{3}}+2\frac{|\nabla f |^{2}|\nabla u|^{2}}{f^{4}}-2\nabla\omega\cdot\frac{\nabla f}{f}.\end{split} \tag{4.5}\] The chain rule gives \[\left(\Delta_{V}-\frac{\partial}{\partial t}\right)f(u(x,t))=\nabla^{2}f(\nabla u,\nabla u)+\langle Tr_{g}T(du,du),\nabla f\rangle.\] Since \(\Omega\) satisfies condition (B), \[-2\frac{(\frac{80-1}{s_{0}}\kappa+\frac{1}{4\varepsilon_{1}}\| \nabla T\|_{L^{\infty}}^{2}+\frac{1}{\varepsilon_{2}}\|T\|_{L^{\infty}}^{2})e (u)^{2}}{f^{2}}-2\frac{\left(\Delta-\frac{\partial}{\partial t}\right)f(u(x,t) )|\nabla u|^{2}}{f^{3}}\] \[\geq-\frac{\langle Tr_{g}T(du,du),\nabla f\rangle}{f^{3}}++2Q \frac{|du|^{4}}{f^{3}}\] \[\geq-\|T\|_{L^{\infty}}\frac{\|\nabla f\|\|du\|^{4}}{f^{3}}+2Q \frac{|du|^{4}}{f^{3}}.\] By Hlder's inequality, we deduce that \[4\epsilon_{3}\frac{|\nabla du|^{2}}{f^{2}}+\frac{1}{4\epsilon_{3}}\frac{| \nabla f|^{2}|\nabla u|^{2}}{f^{4}}\geq 4\frac{|\nabla du||\nabla u||\nabla f|}{f^{ 3}},\] and \[|\nabla|\nabla u|^{2}\,|\leq 2|\nabla du||\nabla u|,\] It follows that \[\left(\Delta_{V}-\frac{\partial}{\partial t}\right)\omega\geq -2(A+\epsilon_{1})\frac{|\nabla u|^{2}}{f^{2}}+2(1-\epsilon_{2}) \frac{|\nabla du|^{2}}{f^{2}}\] \[-\|T\|_{L^{\infty}}\frac{\|\nabla f\|\|du\|^{4}}{f^{3}}+2Q\frac{| du|^{4}}{f^{3}}\] \[-4\epsilon_{3}\frac{|\nabla du|^{2}}{f^{2}}-\frac{1}{4\epsilon_{3 }}\frac{|\nabla f|^{2}|\nabla u|^{2}}{f^{4}}+2\frac{|\nabla f|^{2}|\nabla u|^ {2}}{f^{4}}-2\nabla\omega\cdot\frac{\nabla f}{f}.\] Taking \(2\epsilon_{3}=1-\epsilon_{2}\) and using the bound of \(f\), we can get the following estimates, \[\left(\Delta_{V}-\frac{\partial}{\partial t}\right)\omega\geq K_{2}\omega^{2 }-2\nabla\omega\cdot\frac{\nabla f}{f}-K_{1}\omega,\] where \(K_{1}=2(A+\epsilon_{1})-\frac{3-4\epsilon_{2}}{2(1-\epsilon_{2})}(\frac{m_{3 }}{m_{1}})^{2},K_{2}=2Qm_{1}-\|T\|_{L^{\infty}}m_{3}m_{1}.\) Henceforth, \[\left(\Delta_{V}-\frac{\partial}{\partial t}\right)F\geq K_{2}\frac{F^{2}}{t} -2\nabla F\cdot\frac{\nabla f}{f}-\left(K_{1}+\frac{1}{t}\right)F, \tag{4.6}\] which is the key formula to derive Theorem 2. \[\psi(r)=\left\{\begin{array}{ll}1,&if\quad r\in[0,1/2]\\ 0,&if\quad r\in(1,\infty)\\ 0\leq\psi(r)\leq 1,\\ \psi^{\prime}(r)\leq 0,\\ \psi^{\prime\prime}(r)\geq-C_{0}\\ \frac{|\psi^{\prime}(r)|^{2}}{\psi(r)}\leq C_{0},\end{array}\right. \tag{4.7}\] where \(C_{0}\) is an absolute constant. Let \(\lambda(x)=\psi\left(\frac{r(x)}{R}\right)\), let \(F(x,t)=t\omega(x,t)\). Assume that \((x_{1},t_{1})\) is the point where \(\lambda F\) achieves its maxisum in \(B_{R}\left(x_{o}\right)\times\left[0,\Lambda\right]\left(0<\Lambda<T_{1}\right)\). It is well known that we can assume \(\lambda(x)\) to be smooth at \(x_{1}\). And we may also assume \(\left(\lambda F\right)(x_{1},t_{1})>0\). At \((x_{1},t_{1})\), we have \[\nabla(\lambda F)=0,\frac{\partial}{\partial t}(\lambda F)\geq 0,\Delta( \lambda F)\leq 0. \tag{4.8}\] Hence at \((x_{1},t_{1})\), \[\left(\Delta_{V}-\frac{\partial}{\partial t}\right)(\lambda F)\leq 0. 
\tag{4.9}\] As \(\mbox{Ric}^{M}-\frac{1}{2}L_{V}g\geq-A\), by the \(V\)-Laplacian comparison Theorem (cf. [4, Theroem 3]), we have \[\Delta_{V}r\leq\sqrt{(m-1)A}\coth\sqrt{\frac{A}{m-1}}r+v(r).\] Here \(v(\cdot)\) is the function defined in Theorem 2. Noticing that \(kr\coth kr\leq 1+kr\), there exists a constant \(C_{1}>0\) depending on \(A\) such that \[r\Delta_{V}r\leq(v(a)+C_{1})r+m-1.\] where \(C_{1}=\sqrt{(m-1)A}.\) It is clear that \[\nabla\lambda=\psi^{\prime}\frac{r}{R}\] Noticing that \(\psi^{\prime}\leq 0\), we deduce that for \(x\in B_{R}(x_{0})\), \[\begin{split}\Delta_{V}\lambda=&\psi^{\prime\prime} \left(\frac{1}{R}\right)^{2}\left(r^{\prime}\right)^{2}+\psi^{\prime}\frac{1}{R }\Delta_{V}r\\ &\geq\frac{-C_{0}}{R^{2}}+\frac{\psi^{\prime}}{R}\left(v(a)+C_{1 }+\frac{m-1}{r}\right)\\ &=\frac{-C_{0}}{R^{2}}+\frac{\psi^{\prime}}{R}\left(v(a)+C_{1} \right)+\frac{\psi^{\prime}}{R}\frac{m-1}{R}\\ &=\frac{\psi^{\prime}(m-1)-C_{0}}{R^{2}}+\frac{\psi^{\prime}}{R}( v(a)+C_{1})\\ &\geq\frac{-\sqrt{C_{0}}(m-1)-C_{0}}{R^{2}}-\frac{(v(a)+C_{1})}{R} \sqrt{C_{0}}.\end{split}\] By the difinition of \(\psi\), We can conclude from the above estimates that \[\frac{|\nabla\lambda|^{2}}{\lambda}\leq\frac{C_{0}}{R^{2}},\quad\Delta_{V} \lambda\geq-\frac{C_{2}}{R^{2}}-\frac{C_{3}}{R}. \tag{4.10}\] where \(C_{2}=C_{0}+\sqrt{C_{0}}(m-1),C_{3}=v(a)+C_{1}\). In the sequel of this section, we continue the proof at the point \((x_{1},t_{1})\) where (4.8) holds. Then (4.9) and (4.10) gives \[0\geq-\left(\frac{C_{2}}{R^{2}}+\frac{C_{3}}{R}\right)F+2\nabla\lambda\nabla F +\lambda\left(\Delta_{V}-\frac{\partial}{\partial t}\right)F,\] According to (4.8) and (4.6) and noticing that \(|\nabla f(u(x,t)|\leq m_{3}|\nabla u|\), we get \[\begin{split} 0&\geq-\left(\frac{C_{2}}{R^{2}}+\frac{C_{ 3}}{R}\right)F-2\frac{|\nabla\lambda|^{2}}{\lambda}F+\lambda\left(\Delta- \frac{\partial}{\partial t}\right)F\\ &\geq-\left(\frac{C_{2}+2C_{0}}{R^{2}}+\frac{C_{3}}{R}\right)F+K_ {2}\frac{1}{t_{1}}\lambda F^{2}-\left(K_{1}+\frac{1}{t_{1}}\right)\lambda F-m _{3}\frac{|\nabla\lambda|}{\lambda^{1/2}t_{1}^{1/2}}\left(\lambda F\right)^{1 /2}F,\end{split} \tag{4.11}\] where we have used the definition of \(F\) and \(\omega\). 
By (4.11) and the first inequality in (4.10), we have \[0\geq K_{2}\frac{1}{t_{1}}(\lambda F)^{2}-\left(\frac{C_{2}+2C_{0}}{R^{2}}+ \frac{C_{3}}{R}+K_{1}+\frac{1}{t_{1}}\right)\lambda F-C_{0}^{\frac{1}{2}}m_{3 }\frac{1}{R}\left(\frac{\lambda F}{t_{1}}\right)^{1/2}\lambda F.\] The quadratic formula immplies that \[\begin{split}\left(\frac{\lambda F}{t_{1}}\right)^{1/2}\leq& \frac{C_{0}^{\frac{1}{2}}m_{3}}{K_{2}R}+\sqrt{\frac{1}{K_{2}} \left(\frac{C_{2}+2C_{0}}{R^{2}}+\frac{C_{3}}{R}+K_{1}+\frac{1}{t_{1}}\right) }\\ \leq&\frac{C_{0}^{\frac{1}{2}}m_{3}}{K_{2}R}+C_{4} \frac{1}{\sqrt{K_{2}}}\left(\sqrt{K_{1}}+\sqrt{\frac{1}{R}}+\frac{1}{R} \right)+\frac{1}{\sqrt{t_{1}}}\frac{1}{\sqrt{K_{2}}}\end{split} \tag{4.12}\] where \(C_{4}=\max(C_{2}+2C_{0},C_{3}).\) Noticing that \(0\leq t_{1}\leq\Lambda\) \[\left(\lambda F\right)^{1/2}\left(x_{1},t_{1}\right)\leq\frac{C_{0}^{\frac{1}{2} m_{3}}}{K_{2}R}\sqrt{\Lambda}+C_{4}\frac{1}{\sqrt{K_{2}}}\left(\sqrt{K_{1}}+ \sqrt{\frac{1}{R}}+\frac{1}{R}\right)\sqrt{\Lambda}+\frac{1}{\sqrt{K_{2}}}.\] So \[\sup\left\{\left|t^{1/2}\right|\nabla u(x,t)||(x,t)\in B_{R/2} \left(x_{0}\right)\times\left[0,\Lambda\right]\right\}\] \[\leq m_{2}\left(\frac{C_{0}^{\frac{1}{2}}m_{3}}{K_{2}R}\sqrt{ \Lambda}+C_{4}\frac{1}{\sqrt{K_{2}}}\left(\sqrt{K_{1}}+\sqrt{\frac{1}{R}}+ \frac{1}{R}\right)\sqrt{\Lambda}+\frac{1}{\sqrt{K_{2}}}\right)\] Hence \[\sup_{B_{R/2}\left(x_{0}\right)}\Lambda^{1/2}|\nabla u(x,t)|\leq m_{2}\left( \frac{C_{0}^{\frac{1}{2}}m_{3}}{K_{2}R}\sqrt{\Lambda}+C_{4}\frac{1}{\sqrt{K_{ 2}}}\left(\sqrt{K_{1}}+\sqrt{\frac{1}{R}}+\frac{1}{R}\right)\sqrt{\Lambda}+ \frac{1}{\sqrt{K_{2}}}\right).\] This proves (1.7). To prove (1.8), we set \(F(x,t)=\omega(x,t)\). If \(\lambda F\) achieves its maximum in \(B_{R/2}\left(x_{0}\right)\times\left[0,\Lambda\right]\) for \(0<\Lambda<T_{1}\) at \((x_{1},0)\), then we have \[\sup_{B_{R/2}\left(x_{0}\right)}|\nabla u(x,t)|\leq\frac{m_{2}}{m_{1}}\sup_{B _{R}\left(x_{0}\right)}|\nabla u(x,0)| \tag{4.13}\] If \(gF\) achieves its maximum at \((x_{1},t_{1})\left(t_{1}>0\right)\), then at \((x_{1},t_{1})\), \[\nabla(\lambda\omega)=0,\frac{\partial}{\partial t}(\lambda\omega)\geq 0, \Delta(\lambda\omega)\leq 0. 
\tag{4.14}\] Thus, we get \[0\geq -\left(\frac{C_{2}}{R^{2}}+\frac{C_{3}}{R}\right)\omega-2\frac{| \nabla\lambda|^{2}}{\lambda}\omega+\lambda\left(\Delta-\frac{\partial}{ \partial t}\right)\omega\] \[\geq -\left(\frac{C_{2}}{R^{2}}+\frac{C_{3}}{R}\right)\omega-2\frac{| \nabla\lambda|^{2}}{\lambda}\omega+\lambda\left(K_{2}\omega^{2}-2\nabla\omega \cdot\frac{\nabla f}{f}-K_{1}\omega\right)\] \[\geq K_{2}\lambda\omega^{2}-\left(\frac{C_{2}}{R^{2}}+\frac{C_{3}}{R} \right)\omega-2\frac{|\nabla\lambda|^{2}}{\lambda}\omega-2\nabla\lambda\cdot \frac{\nabla f}{f}\omega-K_{1}\lambda\omega\] \[\geq K_{2}\lambda\omega^{2}-\left(\frac{C_{2}+2C_{0}}{R^{2}}+\frac{C _{3}}{R}\right)\omega-\frac{2m\sqrt{C_{0}}}{R}\sqrt{\lambda\omega}-K_{1}\lambda\omega\] \[\geq K_{2}g\omega^{2}-\left(\frac{C_{2}}{R^{2}}+\frac{C_{3}}{R}\right) \omega-2\frac{|\nabla\lambda|^{2}}{\lambda}\omega-2\nabla\lambda\cdot\frac{ \nabla f}{f}\omega-K_{1}\lambda\omega\] \[\geq K_{2}\lambda\omega^{2}-\left(K_{1}+\frac{C_{2}+2C_{0}}{R^{2}}+ \frac{C_{3}}{R}\right)\omega-\frac{2m\sqrt{C_{0}}}{R}\sqrt{\lambda\omega}\omega.\] By the quadratic formula, one obtains \[\sqrt{\lambda\omega}(x_{1},t_{1})\leq\frac{\frac{2m\sqrt{C_{0}}}{R}+\sqrt{\frac{2m \sqrt{C_{0}}}{R^{2}}+4K_{2}\left(K_{1}+\frac{C_{2}+2C_{0}}{R^{2}}+\frac{C_{3}}{R }\right)}{2K_{2}}}{,} \tag{4.15}\] Using the definiton of \(\omega\) and \(\lambda\), \[\sup_{B_{R/2}(x_{o})}|\nabla u(x,t)|\leq m_{2}\left(\frac{\frac{2m\sqrt{C_{0}}} {R}+\sqrt{\frac{2m\sqrt{C_{0}}}{R^{2}}+4K_{2}\left(K_{1}+\frac{C_{2}+2C_{0}}{R ^{2}}+\frac{C_{3}}{R}\right)}{2K_{2}}}{\right).} \tag{4.16}\] Then, (1.8) follows from (4.16) and (4.13).
2305.19466
The Impact of Positional Encoding on Length Generalization in Transformers
Length generalization, the ability to generalize from small training context sizes to larger ones, is a critical challenge in the development of Transformer-based language models. Positional encoding (PE) has been identified as a major factor influencing length generalization, but the exact impact of different PE schemes on extrapolation in downstream tasks remains unclear. In this paper, we conduct a systematic empirical study comparing the length generalization performance of decoder-only Transformers with five different position encoding approaches including Absolute Position Embedding (APE), T5's Relative PE, ALiBi, and Rotary, in addition to Transformers without positional encoding (NoPE). Our evaluation encompasses a battery of reasoning and mathematical tasks. Our findings reveal that the most commonly used positional encoding methods, such as ALiBi, Rotary, and APE, are not well suited for length generalization in downstream tasks. More importantly, NoPE outperforms other explicit positional encoding methods while requiring no additional computation. We theoretically demonstrate that NoPE can represent both absolute and relative PEs, but when trained with SGD, it mostly resembles T5's relative PE attention patterns. Finally, we find that scratchpad is not always helpful to solve length generalization and its format highly impacts the model's performance. Overall, our work suggests that explicit position embeddings are not essential for decoder-only Transformers to generalize well to longer sequences.
Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, Siva Reddy
2023-05-31T00:29:55Z
http://arxiv.org/abs/2305.19466v2
# The Impact of Positional Encoding on Length Generalization in Transformers ###### Abstract Length generalization, the ability to generalize from small training context sizes to larger ones, is a critical challenge in the development of Transformer-based language models. Positional encoding (PE) has been identified as a major factor influencing length generalization, but the exact impact of different PE schemes on extrapolation in downstream tasks remains unclear. In this paper, we conduct a systematic empirical study comparing the length generalization performance of decoder-only Transformers with five different position encoding approaches including Absolute Position Embedding (APE), T5's Relative PE, ALiBi, and Rotary, in addition to Transformers without positional encoding (NoPE). Our evaluation encompasses a battery of reasoning and mathematical tasks. Our findings reveal that the most commonly used positional encoding methods, such as ALiBi, Rotary, and APE, are not well suited for length generalization in downstream tasks. More importantly, NoPE outperforms other explicit positional encoding methods while requiring no additional computation. We theoretically demonstrate that NoPE can represent both absolute and relative PEs, but when trained with SGD, it mostly resembles T5's Relative PE attention patterns. Finally, we find that scratchpad is not always helpful to solve length generalization and its format highly impacts the model's performance. Overall, our work suggests that explicit position encodings are not essential for decoder-only Transformers to generalize well to longer sequences. ## 1 Introduction The ability to generalize from smaller training context sizes to larger ones, commonly known as length generalization, is a major challenge for Transformer-based language models (Vaswani et al., 2017; Deletang et al., 2023; Zhang et al., 2023). Even with larger Transformers, this issue persists (Brown et al., 2020; Furrer et al., 2020). With larger context sizes, a model can benefit from more in-context-learning examples, higher numbers of reasoning and planning steps, or longer text generation. However, training a Transformer with a larger context size can be excessively slow and memory-intensive. This is even more pronounced in the recent paradigm of model finetuning on instruction-following datasets (Wei et al., 2022; Chung et al., 2022; Ouyang et al., 2022). It is not only infeasible to train the model on all possible context lengths, but also the number of training examples drops dramatically as the sequence length increases requiring the model to generalize from finite and shorter-length training examples. In this work, we focus on the effect of _positional encoding_ on length generalization in the "**decoder-only**" Transformers on various tasks trained from scratch. Figure 1 summarizes our finding that using no positional encoding is better than using explicit positional encodings. Positional encoding (PE) seems to be a major factor in the length generalization of Transformers as the model has to systematically encode tokens in _all_ possible positions. To this end, the original Transformer architecture (Vaswani et al., 2017) used non-parametric periodic functions to represent _absolute position embeddings_ (APE) in a systematic manner, but further studies have shown that these functions are inadequate for length generalization (Ontanon et al., 2022). 
The prevailing belief is that relative PEs (Shaw et al., 2018; Raffel et al., 2020) are more effective in length generalization than APE variants (Ontanon et al., 2022; Csordas et al., 2021). However, recent work has shown that even Transformers with relative PEs, such as Rotary (Su et al., 2021), are poor at length generalization and proposed new position encoding schemes, ALiBi, that generalize well (Press et al., 2022). But these studies use language modeling perplexity as the sole evaluation metric which does not shed light on downstream task performance (Tay et al., 2022). As a result, a key question arises: what exactly is the influence of positional encoding on length generalization at various downstream tasks? Moreover, since a decoder-only Transformer's attention is shown to model sequences without explicit position information (Tsai et al., 2019), what is the effect of _no positional encoding_ (NoPE)? Recently, asking models to emit intermediate computation steps into a scratchpad, also referred to as _chain-of-thought_, has been adopted to improve the length extrapolation in Transformers (Nye et al., 2021; Wei et al., 2022). These techniques are architecture-independent and can be used with any positional encoding method. However, it remains an open question whether these techniques, at least in regard to length generalization, render the choice of positional encoding irrelevant, especially given that model performance is highly sensitive to the scratchpad format (Bueno et al., 2022; Akyurek and Akyurek, 2022). In this work, we conduct a systematic empirical study on the length generalization of decoder-only Transformers, popularized by the GPT-family of models (Radford et al., 2019), with the most commonly used positional encoding schemes, both with and without scratchpad. Specifically, we evaluate APE (Vaswani et al., 2017), T5's Relative PE (Raffel et al., 2020), ALiBi (Press et al., 2022), Rotary (Su et al., 2021) and NoPE on a battery of reasoning and mathematical tasks. Our results show that: * Most commonly used positional encoding methods, including ALiBi, Rotary, and APE, are ill-suited for length generalization in downstream tasks and are outperformed by T5's Relative PE. * Transformers without positional encoding (NoPE) outperform all explicit positional encoding schemes. They achieve this without computing additional terms in the attention mechanism (in contrast to explicit PEs). * We show that NoPE is theoretically capable of representing both absolute and relative PEs. But empirically, it is closer to the relative encoding scheme similar to T5's Relative PE. * Scratchpad is not always helpful for length generalization and its format highly impacts the performance. The attention distributions reveal that NoPE and T5's Relative PE encourage attending to both long and short-range positions, ALiBi to recent positions, and Rotary and APE to no particular positions. Figure 1: No positional encoding (NoPE) outperforms all other positional encodings at length generalization of decoder-only Transformers (GPT-style) trained from scratch and evaluated on a battery of reasoning-like downstream tasks. This figure shows aggregate ranking of positional encoding methods across 10 tasks. Background: Positional Encoding in Transformers Transformers, in contrast to sequential models such as RNNs, are parallel architectures that employ positional encoding to help encode word order. 
The most common choices for positional encoding are either _absolute_, where each absolute position (e.g. 1, 2, 3,...) is directly represented, or _relative_, where the distance between tokens is used as positional information. In this section, we briefly review the popular encoding methods used in Transformers (Refer to Appendix B for more formal details). _Absolute Position Embedding_ (APE) embeds each absolute position \(i\) into position vector \(\mathbf{p}_{i}\) and adds word embeddings to their corresponding \(\mathbf{p}_{i}\) before feeding them to the model. The non-parametric variant of APE uses periodic functions such as sine and cosine to generate embeddings for any position \(i\) (Vaswani et al., 2017). On the other hand, a learned version of APE, used in GPT3 (Brown et al., 2020) and OPT (Zhang et al., 2022), trains the position embeddings along with the model parameters, and it cannot generate a position embedding for unseen positions, so the context window is set to a fixed length. _T5's Relative bias_ first maps the relative distance \((i-j)\) between tokens at positions \(i\) and \(j\) to a scalar bias value \(b=f(i-j)\), where \(f\) is a lookup table. The relative bias \(b\) (learned during training) is then added to the dot product of the query and key in the self-attention mechanism. The lookup table maps distances larger than a threshold to the same parameter to enable generalization to unseen distances. _Rotary_, used in PaLM (Chowdhery et al., 2022) and LLaMA (Touvron et al., 2023), rotates the query and key representations with an angle proportional to their absolute positions before applying the dot product attention. As a result of this rotation, the attention dot product will only depend on the relative distance between tokens, effectively making it a relative positional encoding (Su et al., 2021). _ALiBi_, used in BLOOM (Scao et al., 2022), is similar to T5's Relative Bias but instead subtracts a scalar bias from the attention score. This bias grows linearly with the distance between the query and key tokens. This, in effect, creates a preference toward recent tokens (recency bias). Note that encoder-only Transformers, such as BERT, become bag-of-words models in the absence of positional encoding. However, decoder-only Transformers with a causal attention mask are not permutation invariant and can model sequences even without explicit position information (Tsai et al., 2019). But it is unclear if these models encode position information implicitly or generalize to unseen lengths. We demystify this in Section 5. ## 3 Model Evaluation **Length Generalization Setup.** Following Anil et al. (2022), we focus on algorithmic tasks such as copying, addition, etc. For each task, we train on a finite number of examples of up to a certain length and test them on both seen and unseen lengths at inference. We present these problems as sequence-to-sequence tasks, where the input sequence is the problem instance and the output sequence is the solution. Formally, let \(\mathcal{D}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}\) denote a dataset of such a task where \(\mathbf{x}_{i}\) is the input and \(\mathbf{y}_{i}\) is the output sequence. For each task a function \(\lambda:\mathcal{D}\rightarrow\mathbb{N}\) can be defined that returns the length bucket of a task instance \(d\in\mathcal{D}\). This can be the number of tokens or any general notion of length/depth of reasoning.
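The length-bucket function \(\lambda\) and the split it induces can be made concrete with a small sketch; here \(\lambda\) simply counts content tokens, and all names and values are illustrative rather than drawn from the authors' code.

```python
import random
from typing import Callable, Dict, List, Tuple

Example = Tuple[List[str], List[str]]  # (input tokens, target tokens)

def make_copy_example(length: int, vocab: List[str]) -> Example:
    """Toy generative process for the copy task."""
    words = random.choices(vocab, k=length)
    return ["Copy", "the", "following", "words:"] + words, words

def length_bucket(example: Example) -> int:
    """An instance of lambda: here, the number of content tokens in the input."""
    inputs, _ = example
    return len(inputs) - 4  # ignore the instruction prefix

def split_by_length(dataset: List[Example],
                    bucket: Callable[[Example], int],
                    threshold: int) -> Dict[str, List[Example]]:
    """Train on instances with bucket <= L, evaluate extrapolation on bucket > L."""
    return {
        "train": [ex for ex in dataset if bucket(ex) <= threshold],
        "extrapolation": [ex for ex in dataset if bucket(ex) > threshold],
    }

vocab = [f"<u{i}>" for i in range(30)]
data = [make_copy_example(random.randint(1, 40), vocab) for _ in range(1000)]
splits = split_by_length(data, length_bucket, threshold=20)
```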
Using this function and a threshold \(L\), we employ samples where \(\lambda\leq L\) for learning the task and samples where \(\lambda>L\) for evaluating generalization. The performance on each instance is reported as the exact-match accuracy of its answer with the ground truth. **Architecture.** We use a conventional decoder-only Transformer architecture as a base for all experiments and consider different approaches for encoding positions: **Absolute Position Embedding (APE)**, **ALiBi**, **Rotary** and **T5's Relative Bias**. We also consider removing the positional encoding (**NoPE**) to better understand its role in length generalization. Note that we use APE with sinusoidal functions (Vaswani et al., 2017) as the learnable variant cannot produce embeddings for unseen positions. Given the absence of publicly available Transformer-based LMs trained with the aforementioned PEs on the same pretraining data, we opt to train our models from scratch for each task on its training data with the autoregressive language modeling objective \(\log p_{\theta}(\mathbf{y}|\mathbf{x})=\sum_{t=1}^{T}\log p_{\theta}(y_{t}|\mathbf{x},\mathbf{ y}_{1:t-1})\). We use the same hyperparameters for all PEs and employ the "base" model size configuration, popular in the HuggingFace library (Wolf et al., 2020), resulting in \(\sim\)107M trainable weights (List of all hyperparameters in Appendix D.2). **Tasks.** Our study of length generalization is concentrated on downstream tasks. Particularly, we evaluate the models on three categories (Table 1) of synthetic tasks that have been widely used in the literature to investigate length generalization: (1) Primitive tasks such as Copying and Reversing (Ontanon et al., 2022), (2) Mathematical and reasoning tasks such as Addition (Nye et al., 2021), Polynomial Evaluation, Sorting, Summation (Saxton et al., 2019), Parity (Anil et al., 2022), LEGO (Zhang et al., 2023) and (3) Classical length generalization datasets such as SCAN (Lake and Baroni, 2018) and PCFG (Hupkes et al., 2020). These tasks provide us with complete control over the train-test distribution, while also requiring reasoning and compositionality skills, which serve as fundamental building blocks for more complex tasks. For the first two categories, we generate the corresponding datasets. Specifically, we first sample the length of the task instance from the uniform distribution \(\mathcal{U}(1,L)\), and then, according to the task's generative process, we sample the input and output sequences. For the test set, we follow the same procedure but sample length from \(\mathcal{U}(1,2L)\) to include both seen and unseen lengths. Throughout the paper, unless otherwise stated, we use \(L=20\). For the third category of tasks, we use length generalization splits from the corresponding datasets. Table 1 provides an example of each task (More examples in Appendix D.1). We report the results of our empirical evaluation over ten tasks and three seeds per dataset-PE pair. ## 4 What Is The Effect of Positional Encoding? In this section we provide comparative results of positional encodings at length generalization. To provide a holistic view, following Liang et al. (2022), we report the mean ranking of various models in Figures 1 and 2 when compared against each other for all tasks and scenarios. Furthermore, we showcase the accuracy of models evaluated on examples of various lengths in Figure 3. (Detailed results for each task and scenario can be found in Appendix E).
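The aggregate rankings of Figures 1 and 2 can be illustrated with a few lines; the ranking rule shown (rank the methods within each task by extrapolation accuracy, then average the ranks) is one straightforward reading of the procedure borrowed from Liang et al. (2022), and the accuracy numbers are placeholders only.

```python
from typing import Dict

def mean_ranks(accuracy: Dict[str, Dict[str, float]]) -> Dict[str, float]:
    """accuracy[task][method] -> extrapolation accuracy.

    Returns the mean rank of each method across tasks (rank 1 = best in a task).
    """
    methods = sorted({m for scores in accuracy.values() for m in scores})
    totals = {m: 0.0 for m in methods}
    for scores in accuracy.values():
        ordered = sorted(methods, key=lambda m: scores[m], reverse=True)
        for rank, method in enumerate(ordered, start=1):
            totals[method] += rank
    return {m: totals[m] / len(accuracy) for m in methods}

# Placeholder numbers, for illustration only:
accuracy = {
    "addition": {"NoPE": 0.41, "T5-RPE": 0.39, "ALiBi": 0.21, "Rotary": 0.07, "APE": 0.06},
    "sorting":  {"NoPE": 0.55, "T5-RPE": 0.57, "ALiBi": 0.31, "Rotary": 0.12, "APE": 0.10},
}
print(mean_ranks(accuracy))
```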
First, we observe that in most tasks, models achieve a perfect or near-perfect accuracy (Figure 3) on the I.I.D. lengths, which indicates that models have no problem fitting to the training data. However, the differences among positional encoding methods become more apparent when we evaluate on lengths that are larger than seen during training. In most extrapolation scenarios, T5's Relative Bias outperforms other explicit positional encodings.

| **Task** | **Input Example** | **Output Example** |
| --- | --- | --- |
| _Primitive Tasks_ | | |
| Copy | Copy the following words: <u1> <u2> <u3> <u4> <u5> | <u1> <u2> <u3> <u4> <u5> |
| Reverse | Reverse the following words: <u1> <u2> <u3> <u4> <u5> | <u5> <u4> <u3> <u2> <u1> |
| _Mathematical and Algorithmic Tasks_ | | |
| Addition | Compute: S 3 7 2 6 + 1 9 1 7 7 | |
| Polynomial Eval. | Evaluate x = 3 in ( 3 x ** 0 + 1 x ** 1 + 1 x ** 2 ) % 10? | The answer is 5. |
| Sorting | Sort the following numbers: 3 1 4 1 5? | |
| Summation | Compute: ( 1 + 2 + 3 + 4 + 7 ) % 10? | The answer is 7. |
| Parity | Is the number of 1s even in [ 1 0 1 ]? | |
| LEGO | If a = -1; b = -a; c = +b; d = -c. Then what is c? | The answer is +1. |
| _Classical Length Generalization Datasets_ | | |
| SCAN | jump twice and run left | JUMP JUMP TURN_LEFT RUN |
| PCFG | shift prepend K10 R1 K12, E12 F16 | F16 K10 R1 K12 E12 |

Table 1: Examples of the input and output of the tasks.

Figure 2: Aggregate ranking of positional encoding methods on length extrapolation across three different groups of tasks. No PE and T5's Relative Bias outperform other encoding methods in these categories.

ALiBi positions itself in the middle of the pack, while APE and Rotary show poor generalization performance. Although Rotary is often considered a relative encoding method (Ontanon et al., 2022), our results show that it performs more similarly to APE than to other relative schemes. Moreover, ALiBi, despite its promise for length generalization, underperforms with respect to T5's Relative Bias in most cases. This result aligns with Taylor et al. (2022) who found no significant improvement from ALiBi. Surprisingly, the NoPE model, which is just a decoder-only Transformer without any positional encoding, performs on par with or even better than the best-performing explicit PE, T5's Relative Bias. NoPE achieves the same level of generalization without _any computational overhead_ since it does not compute any additional term in the attention mechanism. This property has a direct impact on the runtime and memory footprint of the model. For instance, Press et al. (2022) reported that the additional computation incurred by T5's Relative Bias can make the training and inference time of the model almost two times slower than the Transformer with APE. ## 5 How Does NoPE Represent Positions? The surprising performance of the NoPE model suggests that it captures useful positional information that can also generalize. But how it does so is the primary question. In the next two sections, we provide theoretical and empirical analysis towards answering this question. ### NoPE can theoretically represent both absolute and relative PEs Let \(f_{\theta}\) be a NoPE decoder-only Transformer model, where \(\theta\) denotes the model parameters.
\(f_{\theta}\) processes the input sequence \(\mathbf{x}=[\texttt{<bos>},x_{1},\ldots,x_{T}]\) by applying a series of layers. Note that since \(f_{\theta}\) does not have any PE, the input \(\mathbf{x}\) is not augmented with positional information (e.g. \([1,2,\ldots,T]\)). Each layer \(l\), consisting of self-attention heads and a feed-forward sub-layer, reads the previous hidden state \(\mathbf{H}^{(l-1)}\) and produces the hidden state at layer \(l\): \(\mathbf{H}^{l}\). Each head is parameterized by a query \(\mathbf{W}_{Q}\), key \(\mathbf{W}_{K}\), value \(\mathbf{W}_{V}\), and output \(\mathbf{W}_{O}\) matrices, where \(\mathbf{W}_{Q},\mathbf{W}_{K},\mathbf{W}_{V}\in\mathbb{R}^{h\times d}\) and \(\mathbf{W}_{O}\in\mathbb{R}^{d\times h}\). \(d\) and \(h\) are the model's hidden state size and attention dimension, respectively. \(\mathbf{W}_{1},\mathbf{W}_{2}\in\mathbb{R}^{d\times k.d}\) are the weight matrices of the feed-forward sub-layer. Figure 3: Showcasing the generalization behavior of different positional encodings on 6 datasets. The shaded area represents evaluation examples with I.I.D. lengths (i.e. seen during training). Since all models perform perfectly, or close to it, on the I.I.D. lengths (measured on unseen examples), for improved readability, we only show a subset of them in the figure. Refer to Appendix E for more detailed plots. **Theorem 1** (Absolute Encoding).: _Let \(\mathbf{x}\) be an input sequence of length \(T+1\) to the model. Then, the first layer of \(f_{\theta}\) can recover absolute positions \([1,\dots,T+1]\) in the hidden state \(\mathbf{H}^{(1)}\). That is, there exist \(\mathbf{W}_{Q}\), \(\mathbf{W}_{K}\), \(\mathbf{W}_{V}\), \(\mathbf{W}_{O}\), \(\mathbf{W}_{1}\), and \(\mathbf{W}_{2}\) such that the self-attention and feedforward operations in the first layer compute absolute positions and write it to the next hidden state._ We refer to Appendix C.1 for the complete proof of Theorem 1. This theorem shows that stochastic gradient descent (SGD) can potentially learn to recover absolute positions in NoPE Transformers. Next, we demonstrate how relative PE can be implemented in subsequent layers: **Theorem 2** (Relative Encoding).: _Suppose that the hidden state \(\mathbf{H}^{(1)}\) contains absolute positional information, as stated in Theorem 1, and assume that it is not overwritten by any subsequent layers. Then, the self-attention in all subsequent layers can implement a relative positional encoding: there exists a parameterization of \(f_{\theta}\) such that, for \(l\geq 2\), the attention dot product between query \(\mathbf{q}_{n}\) and key \(\mathbf{k}_{m}\) at positions \(n\) and \(m\) can be expressed as:_ \[\langle\mathbf{q}_{n},\mathbf{k}_{m}\rangle=f_{\mathrm{cnt}}(\mathbf{q},\mathbf{k})+f_{ \mathrm{rel}}(n-m) \tag{1}\] _where \(f_{\mathrm{cnt}}\) is a function of their content, and \(f_{\mathrm{rel}}\) is a function of their relative distance._ Appendix C.2 provides the complete proof of Theorem 2. Our theoretical results suggest that SGD can choose between relative and absolute encoding in NoPE Transformers. But, what mechanism SGD learns in practice is not clear. We next investigate this question empirically. ### NoPE learns to use relative PE in practice In order to explore the mechanisms that NoPE employs in practice, we conduct a quantitative analysis by comparing its attention pattern to models trained with different positional encoding techniques. 
The hypothesis is that if NoPE utilizes a similar algorithm to other PEs, then the attention patterns of these models should be quite similar. To this end, we feed the same input to both models and, at layer \(l\), we compute the minimum distance between the attention distribution of any heads in the first model and any head in the second model. Formally, let \(\mathrm{P}_{t}=p(\mathbf{k}|\mathbf{q}_{t})\) be a probability distribution produced by a causal self-attention head for query at position \(t\), over the keys \(\mathbf{k}\in[\mathbf{k}_{1},\dots\mathbf{k}_{t}]\) in a given transformer layer. Over a sequence of length \(T\), we define the similarity between two heads \(\mathrm{P}\) and \(\mathrm{Q}\) as \(D_{\mathrm{AT}}(\mathrm{P},\mathrm{Q})=\frac{1}{T}\sum_{t=1}^{T}D_{\mathrm{JSD }}(\mathrm{P}_{t}||\mathrm{Q}_{t})\) which averages the Jensen-Shannon divergence (JSD) between the two heads over all positions. For the distance of two models \(A\) and \(B\) at layer \(l\), we take the minimum distance Figure 4: Distance of NoPE attention patterns with other positional encoding schemes measured across instances of SCAN dataset. The left figure shows the distance per layer, and the right figure shows the average distance across all layers. NoPE’ denotes NoPE trained with a different seed. between all pairs of attention heads in the corresponding layer: \[D^{(l)}(A,B)=\min_{(\mathrm{P},\mathrm{Q})\in A_{l}\times B_{l}}D_{\mathrm{AT}}( \mathrm{P},\mathrm{Q}) \tag{2}\] where \(A_{l}\) and \(B_{l}\) are the attention heads in layer \(l\) of models \(A\) and \(B\) respectively. We empirically measure the distance between NoPE and other positional encoding schemes after training. Specifically, we sample examples from each length bucket and feed them (the concatenation gold input and output) to compute the attention maps and the distance using Equation (2). We also consider the distance between different seeds of NoPE as a baseline. Figure 4 shows the distance per layer for the first four layers. (later layers show similar trends Figure E.5). We find that NoPE's attention patterns are most similar to that of T5's Relative PE, and least similar to APE and Rotary. The same trend can be observed across all layers and length buckets, and even when averaged across all layers. These results potentially suggest that a Transformer model without positional encoding, trained with stochastic gradient descent learns to represent positions in a way similar to T5's Relative PE, which is a relative positional encoding scheme. ## 6 Does Scratchpad Render The Choice of Positional Encoding Irrelevant? In scratchpad/CoT prompting, the model generates intermediate computations required to reach the final answer as explicit parts of the output. Such mechanisms, in effect, provide a direct decomposition and storage for intermediate values, which has been shown to improve the length generalization of Transformers even at small scale (Nye et al., 2021). Since scratchpad only modifies the model's input and output (not the architecture), it is unclear and unexplored how architectural choices such as positional encoding affect the length generalization in the presence of scratchpad. To answer this question, we train all PEs _with_ and _without_ scratchpad on the mathematical and reasoning group of tasks, and compare their performance. Moreover, the decision of how to represent the intermediate computations in the scratchpad, i.e. 
the scratchpad format, is an important design choice that has a non-trivial impact on the model's performance (Bueno et al., 2022). To account for those, we consider five components in each step of scratchpad: <input>, <computation>, <output>, <variable_update>, and <remaining_input> (Figure 5). In our experiments, we create different variations of scratchpad format by enabling or disabling each component, which allows us to systematically study their impact.1 Figure 6 summarizes our results. Similar to the remarks made by (Nye et al., 2021; Anil et al., 2022), we observe that across all PEs and regardless of the format, scratchpad is beneficial solely for the addition task. Additionally, our findings indicate that having a positional encoding with robust length generalization is crucial since scratchpad/CoT alone may not enhance the generalization. Footnote 1: Since using scratchpad creates very long sequences, we follow Nye et al. (2021) and set the length threshold \(L=8\) for tasks that use it to avoid out-of-memory errors. ### Which part of the sequence is attended to? The scratchpad format that is often used (Nye et al., 2021), similar to Figure 5, contains redundant information. One such example is the repetition of the remaining portion of an input (\(\mathcal{R}\)) in each step of the scratchpad. But, the attention can attend to this information directly from the main input. So, it remains unclear which specific part of the scratchpad different PEs rely on to solve the task. To address this question, we take the models trained with full format on addition, the case in which scratchpad is helpful across all PEs, and examine their attentions. Specifically, for tokens in the output sequence, we calculate the _distance_ \(d\) between the current query \(\mathbf{q}_{t}\) and the attended key \(\mathbf{k}_{n}\) as \((t-n+1)\) and subsequently normalize it based on the length of the sequence at the present step. Figure 5: Example of an addition task depicted with its first scratchpad step. Each step consists of five components: Step Input \(\mathcal{I}\), Step Computation \(\mathcal{C}\), Step Output \(\mathcal{O}\), Intermediate Variable Updates \(\mathcal{V}\), and Remaining Input \(\mathcal{R}\). The normalized value is denoted as \(\bar{d}\). Figure 7 depicts the distribution of \(\bar{d}\). Values of \(\bar{d}\) close to 0 indicate attention to tokens near the current position (e.g. current scratchpad step), while values close to 1 signify attention to distant tokens (e.g. the input). NoPE and T5's Relative PE resemble each other and exhibit a bimodal distribution, reflecting both short-range and long-range attention. Conversely, ALiBi (due to its recency bias) strongly favors short-range attention. Rotary, on the other hand, produces a distribution resembling APE, which is more uniformly distributed. Notably, NoPE and T5's RPE are the top-performing PEs in this setup, which suggests the bimodal distribution to be more optimal. ## 7 Discussion Practitioners have to make important choices about the nuances of the Transformer architecture like positional encoding before undertaking the costly pretraining process. In the I.I.D. evaluation of PEs, we demonstrate similar performance across different PEs, in line with observations of Haviv et al. (2022) and Scao et al. (2022b), which makes the choice of optimal positional encoding challenging.
Moreover, human language processing may be subject to cognitive constraints (Gibson et al., 2019) that could create a false impression of how well PEs can generalize over length in natural language modeling evaluations. Indeed, Tay et al. (2022) showed that perplexity is an inadequate proxy for downstream performance of LLMs. In our paper, we utilize length generalization in downstream tasks as a means to assess the expressivity of positional encodings. Our setup, in contrast to the I.I.D. evaluation, reveals a clear distinction among approaches of encoding positions. We find that NoPE outperforms explicit PEs, and within explicit PEs, commonly used methods lag behind T5's Relative PE. In fact, the recent release of LLMs (Touvron et al., 2023; Chowdhery et al., 2022) suggests a shift towards adopting Rotary as a replacement for APE in the Transformer architecture. However, our result in Section 4 clearly demonstrates that Rotary only marginally outperforms APE at length generalization. Furthermore, it exhibits similar behavior to APE, as shown in Section 6.1, indicating potential susceptibility to the same limitations. Moreover, the _Recency Bias_, embedded in positional encodings such as ALiBi, might be a reasonable choice for modeling language; however, our results in Sections 4 and 6.1 show it might not be optimal for length generalization for reasoning tasks. Figure 6: Mean ranks of scratchpad format aggregated across all models per each dataset. The effectiveness of scratchpad is task dependent. Figure 7: Distribution of the normalized distance between the query and the key of the self-attention (addition task + full scratchpad), averaged across all layers and heads. The disadvantages of explicit PEs over NoPE in length extrapolation contribute to the growing evidence that positional encodings pose challenges for Transformers (Sinha et al., 2022; Luo et al., 2021). Our empirical results and theoretical analysis suggest that removing positional encoding holds promise as a modification to the widely used decoder-only Transformer architecture. ## 8 Related Work **Length Generalization Failure in Transformers.** The length generalization problem has been a topic of interest in the study of neural sequence models for a long time (Graves et al., 2016; Kaiser and Sutskever, 2016; Lake and Baroni, 2018; Hupkes et al., 2020; Yehudai et al., 2021). Transformers, being state-of-the-art sequence models, have been no exception. A group of studies showed the generalization failure of conventional Transformers with APE on specific datasets such as PCFG (Hupkes et al., 2020), LEGO (Zhang et al., 2023), or CLUTRR (Sinha et al., 2019; Gontier et al., 2020). The length generalization problem has been reported even in pretrained Transformers such as T5 (Furrer et al., 2020) and LaMDA (Anil et al., 2022). Csordas et al. (2021) and Ontanon et al. (2022) study the effect of positional encoding on length generalization but mainly focus on showing relative PE outperforms APEs. Press et al. (2022), on the other hand, propose a new encoding method, ALiBi, and demonstrate that it outperforms popular PEs on extrapolation but only in the context of human language modeling. Most relevant is Deletang et al. (2023)'s recent study on length generalization in various neural sequence models (including RNNs, Stacked-RNNs) for tasks from the Chomsky hierarchy. However, they do not analyze positional encoding differences extensively or focus on autoregressive models.
Unlike these studies, our work extensively compares length generalization in popular PEs for a wide range of tasks, specifically focusing on autoregressive models, which represent many contemporary LLMs. **Positional Encoding.** A core component of Transformers is the positional encoding mechanism, which helps the model represent the order of the input sequence. The self-attention mechanism in the encoder of Transformers is order-invariant and requires PE to avoid becoming a bag-of-words model. Many methods have been proposed for this purpose. Originally, Vaswani et al. (2017) introduced absolute positional encoding using sinusoidal functions (a learned variant popularized by Devlin et al. (2019)). The relative approach for encoding positional information was further introduced by Shaw et al. (2018), which gave rise to a number of pre-trained LMs with relative encodings such as TransformerXL (Dai et al., 2019) and T5 (Raffel et al., 2020) that perform well in length generalization. More recently, Su et al. (2021) take the concept of sinusoidal functions and suggest a new way of encoding positional information by rotating the hidden representations before applying self-attention. This method, referred to as _Rotary_, has become a popular choice in recent LLMs. Press et al. (2022) simplify T5's Relative encoding and introduce a more efficient variant called ALiBi, while keeping the same or improving extrapolation performance. Decoder-only Transformers, due to their causal attention mask, are not order-agnostic and can be used without any positional encoding. This was observed early on by Shen et al. (2018) and later confirmed by Tsai et al. (2019). In our work, we theoretically show that they are capable of learning both absolute and relative encoding. ## 9 Conclusion We studied the robustness of different positional encodings, in decoder-only Transformers, at length generalization on various downstream mathematical and reasoning tasks. Our extensive empirical study shows the effectiveness of NoPE, and further demonstrates that widely used explicit PEs are not suited for length generalization. We also prove that NoPE can implicitly learn both absolute and relative positions, but uses the latter in practice. Finally, we find the effectiveness of scratchpad is task-dependent, and is not a reliable solution for length generalization. ## Limitations Our work primarily focuses on positional encodings as a design choice in the Transformers decoder architecture. We could not study how large-scale pretraining affects different PEs because there are no publicly available large language models trained with various PEs under similar conditions. We leave this for future work due to our limited compute budget. ## Acknowledgements The Mila-IBM grant program provided the funding for this project. SR acknowledges the support provided by the NSERC Discovery Grant program and the Facebook CIFAR AI Chair program. This research was enabled in part by compute resources provided by Mila and the Digital Research Alliance of Canada.
2309.07002
Using Evolutionary Algorithms to Find Cache-Friendly Generalized Morton Layouts for Arrays
The layout of multi-dimensional data can have a significant impact on the efficacy of hardware caches and, by extension, the performance of applications. Common multi-dimensional layouts include the canonical row-major and column-major layouts as well as the Morton curve layout. In this paper, we describe how the Morton layout can be generalized to a very large family of multi-dimensional data layouts with widely varying performance characteristics. We posit that this design space can be efficiently explored using a combinatorial evolutionary methodology based on genetic algorithms. To this end, we propose a chromosomal representation for such layouts as well as a methodology for estimating the fitness of array layouts using cache simulation. We show that our fitness function correlates to kernel running time in real hardware, and that our evolutionary strategy allows us to find candidates with favorable simulated cache properties in four out of the eight real-world applications under consideration in a small number of generations. Finally, we demonstrate that the array layouts found using our evolutionary method perform well not only in simulated environments but that they can effect significant performance gains -- up to a factor ten in extreme cases -- in real hardware.
Stephen Nicholas Swatman, Ana-Lucia Varbanescu, Andy D. Pimentel, Andreas Salzburger, Attila Krasznahorkay
2023-09-13T14:54:54Z
http://arxiv.org/abs/2309.07002v2
# Finding Morton-Like Layouts for Multi-Dimensional Arrays Using Evolutionary Algorithms ###### Abstract. The layout of multi-dimensional data can have a significant impact on the efficacy of hardware caches and, by extension, the performance of applications. Common multi-dimensional layouts include the canonical row-major and column-major layouts as well as the Morton curve layout. In this paper, we describe how the Morton layout can be generalized to a very large family of multi-dimensional data layouts with widely varying performance characteristics. We posit that this design space can be efficiently explored using a combinatorial evolutionary methodology based on genetic algorithms. To this end, we propose a chromosomal representation for such layouts as well as a methodology for estimating the fitness of array layouts using cache simulation. We show that our fitness function correlates to kernel running time in real hardware, and that our evolutionary strategy allows us to find candidates with favorable simulated cache properties in four out of the eight real-world applications under consideration in a small number of generations. Finally, we demonstrate that the array layouts found using our evolutionary method perform well not only in simulated environments but that they can effect significant performance gains--up to a factor ten in extreme cases--in real hardware. Morton curve, array layout, multi-dimensional data, evolutionary algorithm, data caching, locality Footnote †: Also with CERN. In order to find suitable array layouts in tractable amounts of time, we propose to employ genetic algorithms--heuristics known to be able to efficiently find high-quality solutions in large search spaces (Sutton, 2016). To this end, we design a chromosomal representation of Morton-like array layouts, as well as a fitness function that uses cache simulation to estimate the performance of individual array layouts. Finally, we evaluate our evolutionary strategy and the array layouts it discovers. In short, our paper makes the following contributions: * We characterize the design space given by a generalization of the Morton array layout, and we show that the size of this design space renders exhaustive search infeasible (Section 3); * We propose an evolutionary methodology based on genetic algorithms for exploring the aforementioned design space based on the simulated cache-friendliness of layouts (Section 4); * We design and execute a series of experiments to assess the accuracy of our fitness function, the efficacy of our evolutionary process, and the performance of the discovered array layouts, showing that our method can improve performance up to a factor ten (Section 5). ## 2. Background and Related Work In this section, we provide a brief overview of the basic concepts and notations which are essential to the remainder of this paper, and highlight relevant related work. ### Indexing Functions and Canonical Layouts Dense \(n\)-dimensional arrays can be imagined as structured grids in which each element is assigned to exactly one point in \(\mathbb{N}^{n}\). In most modern processors, multi-dimensional arrays are a software-level abstraction over the one-dimensional memory of the machine; in order to actually access multi-dimensional data, we need to define a function that converts indices in \(n\) dimensions to memory addresses1. We refer to the class of such functions as _indexing functions_, and they are isomorphic to _array layouts_.
In short, an \(n\)-dimensional indexing function is an injective (often bijective) function of the following type, where \(N_{i}\) represents the size of the array in the \(i\)th dimension: Footnote 1: In reality, address calculations must also consider array offsets (the address of the first element) and scales (the size of each element). We skip over these complications as they are handled transparently by address generation units in modern hardware, and they affect all array layouts in the same manner. \[f:\prod_{i=0}^{n-1}\left[0,N_{i}-1\right]\rightarrow\left\llbracket 0,\left(\prod_{i=0}^{n-1}N_{i}\right)-1\right\rrbracket \tag{1}\] In a multi-dimensional grid, we denote the elements along a given axis--that is to say, the sequence of elements for which all indices except one are fixed--as _fibers_ (Sutton, 2016). In a two-dimensional case, fibers along the \(x\)-axis are known as _rows_, and fibers along the \(y\)-axis as columns. In order to facilitate the description of arrays in three or more dimensions, we use the term _mode-\(m\)_ fibers to describe fibers along the \(m\)th dimension, such that mode-0 fibers are synonymous with rows, mode-1 fibers refer to columns, and so forth. The most common group of multi-dimensional indexing functions are the _canonical_ layouts, sometimes known as the _lexicographic_ layouts or, in the two-dimensional case, the _row-_ and _column-major_ layouts. In a canonical layout, one-dimensional array indices are calculated according to the following equation, in which \(x_{0},\ldots,x_{n-1}\) are components of the \(n\)-dimensional index, and \(N_{0},\ldots,N_{n-1}\) represent the size of the array in each dimension: \[f(x_{0},\ldots,x_{n-1};N_{0},\ldots,N_{n-1})=\sum_{i=0}^{n-1}\left(\prod_{j=0 }^{i-1}N_{j}\right)x_{i} \tag{2}\] An important corollary of Equation 2 is that the mode-0 fibers are contiguous in memory, i.e. \(f(x_{0}+1,x_{1},\ldots,x_{n-1})=f(x_{0},x_{1},\ldots,x_{n-1})+1\). It is worth noting that the calculation of addresses in column-major layout--in which the mode-1 fibers are contiguous--is also given by Equation 2, with the order of the indices and sizes swapped. The canonical array layouts achieve perfect spatial locality in one dimension: if a kernel accesses memory along mode-\(m\) fibers, then a canonical layout where the \(m\)th dimension is major will provide the optimal translation between locality in the multi-dimensional space to locality in memory. Many real-world applications, however, exhibit locality in multiple dimensions; a kernel might, for example, iterate diagonally over an array; an example of this--and the resulting locality in memory--is given in Figure 1a. The performance of canonical storage layouts has been studied extensively. Park et al. discuss methods for compensating for the weaknesses of canonical layouts using tiling and recursive layouts (Park et al., 2017). Similarly, Kowarschik and Weiss propose a variety of strategies that mitigate cache misses in canonical storage layouts for numerical applications (Kowarschik and Weiss, 2017). Weinberg et al. propose a metric for the locality of array layouts (Park et al., 2017). Jang et al. analyze the performance of access patterns in multi-dimensional data in graphics processing units (GPUs) (Srivastava et al., 2017). Figure 1. Two-dimensional arrays laid out in memory along the gray arrows. An application accesses the array diagonally along the red arrows. Application locality is shown above, memory locality is shown below. Che et al.
propose a method for automatically optimizing storage layouts (Cheng et al., 2017). ### Morton Layouts The Morton order is a notable example of a non-canonical array layout that provides balanced locality in multiple dimensions. It is conceptually simple to understand, efficient to implement in commodity hardware (as we will show in Section 3.3), and it has been shown to positively affect the efficacy of hardware caches: Al-Kharusi and Walker show the efficacy of the Morton layout in molecular dynamics applications (Dal \[f(011_{2},101_{2},100_{2})=000011000_{2}\lor 000100001_{2}\lor 100000000_{2}=100111001_{2}=313_{10} \tag{3}\] Our goal in this paper is to find Morton-like layouts, i.e. bit-interleaving patterns, that improve application performance through an increase in cache efficacy. In this section, we will show that the design space for such layouts is very large, motivating the use of genetic algorithms. This necessitates a chromosomal representation of layouts, which we also present in this section. In addition, we describe how the canonical layouts can be described using the same representation, and we delve into practical considerations such as the computational cost of computing indices and support for single-instruction multiple-data (SIMD) processing. ### Enumerating Layouts We can characterize Morton-like layouts by the bit scattering pattern applied to each of the inputs (e.g., for Equation 3, the first index is scattered to the fourth, fifth, and eighth bits). However, such a characterization is unsound in the sense that it allows us to describe invalid layouts: if two bits from any of the input indices are mapped onto the same bit in the output, the bitwise disjunction becomes an information-destroying operation and the layout becomes non-injective--that is, it would cause multiple multi-dimensional indices to map onto the same location in memory, making the layout unusable. We can instead characterize layouts in a manner that is both complete and sound by enumerating the _source_ of each bit in the output index. In the remainder of this work we shall denote array layouts using sequences of the form \([i_{0},\ldots,i_{n-1}]\), indicating the source indices in order of increasing bit significance: the least significant bit in the output index is drawn from the \(i_{0}\)th input index, the second-least significant bit is drawn from the \(i_{1}\)th input, and the most significant bit is drawn from the \(i_{n-1}\)th input. Note that each input bit must be used once and only once: whenever a bit is to be drawn from a given input index, we implicitly use the least significant bit for that input which has not yet been consumed. For the layout shown in Equation 3, the two least significant bits are drawn from the second input, the third-least significant bit is drawn from the third input, and so forth: the resulting array layout is denoted using the sequence \([1,1,2,0,0,1,2,0,2]\). The aforementioned characterization of multi-dimensional layouts gives rise to families of layouts. The family of layouts over \(n\) inputs, where each input has \(b_{0},\ldots,b_{n-1}\) bits, is isomorphic to the set of permutations of the multiset \(S=\{0:b_{0},\ldots,n-1:b_{n-1}\}\). We denote this set of permutations as \(\mathfrak{S}(S)\). For convenience, we obviate the intermediate multiset such that \(\mathfrak{S}^{\prime}(b_{0},\ldots,b_{n-1})=\mathfrak{S}(\{0:b_{0},\ldots,n-1:b _{n-1}\})\).
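To make this sequence characterization concrete, the following sketch (Python is used here purely for illustration) computes a one-dimensional index from an \(n\)-dimensional index and a layout sequence, and reproduces the example of Equation 3.

```python
from typing import Sequence

def morton_like_index(index: Sequence[int], layout: Sequence[int]) -> int:
    """Interleave the bits of `index` according to `layout`.

    layout[k] names the input whose next-least-significant bit
    supplies bit k of the output index.
    """
    consumed = [0] * len(index)  # how many bits of each input have been used so far
    output = 0
    for out_bit, src in enumerate(layout):
        bit = (index[src] >> consumed[src]) & 1
        output |= bit << out_bit
        consumed[src] += 1
    return output

# Reproduces Equation 3: f(0b011, 0b101, 0b100) under the layout [1,1,2,0,0,1,2,0,2].
assert morton_like_index([0b011, 0b101, 0b100], [1, 1, 2, 0, 0, 1, 2, 0, 2]) == 313
```

With this representation, a mode-0-major canonical layout of an 8x8 array is written [0, 0, 0, 1, 1, 1], which is how the canonical layouts re-enter the same framework in Section 3.2.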
We can then determine the total number of possible layouts as the number of multiset permutations of \(\mathfrak{S}^{\prime}\)[(12, p. 42)]: \[|\mathfrak{S}^{\prime}(b_{0},\ldots,b_{n-1})|=\begin{pmatrix}\sum_{i=0}^{n-1} b_{i}\\ b_{0},\ldots,b_{n-1}\end{pmatrix}=\frac{\left(\sum_{i=0}^{n-1}b_{i}\right)!}{ \prod_{i=0}^{n-1}(b_{i}!)} \tag{4}\] ### Including Canonical Layouts It is worth noting that canonical layouts over arrays for which the size in each dimension is a power of two are, in fact, members of the family of Morton-like layouts. In order to sketch an informal argument for this, we recall that the indexing function for an \(n\)-dimensional canonical layout given array sizes \(N_{0},\ldots,N_{n-1}\) is defined as in Equation 2. If we assume that all sizes are powers of two, then the product of these sizes is guaranteed to be itself a power of two. Because multiplication by powers of two can be interpreted as a leftward shift, the canonical layouts shift each input index \(x_{0},\ldots,x_{n-1}\) to a specific location in the binary expansion of the output index. Figure 2. All 20 layouts for \(8\times 8\) arrays generated by the family of indexing schemes described in Section 3. Note that Figure 2a corresponds to a row-major layout, while Figure 2c corresponds to a column-major layout. Furthermore, because we assume \(\forall i:x_{i}<N_{i}\), each bit in the output is determined by exactly one of the input indices; this allows us to interpret the summation as a series of bit-wise disjunctions, exactly like the definition of our Morton-like layouts. In general, a mode-0-major canonical layout of a \(2^{b_{0}}\times\ldots\times 2^{b_{n-1}}\) array can be characterized--in the scheme defined in Section 3.1--by contiguous subsequences of bits, each drawn from the same index, i.e. a sequence of the following form: \[[\underbrace{0,\ldots,0}_{b_{0}\text{ times}},\underbrace{1,\ldots,1}_{b_{1} \text{ times}},\ldots,\underbrace{n-1,\ldots,n-1}_{b_{n-1}\text{ times}}] \tag{5}\] Canonical layouts with different major axes can be constructed by changing the order of the contiguous subsequences. The fact that the canonical layouts are members of the Morton-like family of array layouts allows us to evaluate the performance of these layouts in the exact same framework as the rest of the Morton-like layouts, and we will exploit this in Section 5.
Specifically, the Intel Haswell and AMD Zen 3 microarchitectures--on which we focus in this work--can perform 64-bit register addition (ADD r64 r64) with a latency of 1 cycle and a reciprocal throughput of 0.25 cycles, while they can execute multiplication (IMUL r64 r64) with a latency of 3 cycles and a reciprocal throughput of 1 cycle (Becker, 2015). Our bit-interleaving array layouts rely, in \(n\)-dimensional cases, on \(n-1\) bitwise disjunctions and \(n\) bit-scatter operations. Such disjunctions (OR r64 r64) can be performed with a latency of 1 cycle and a reciprocal throughput of 0.25 cycles--the same as the ADD instruction--on both of the aforementioned microarchitectures. We perform the bit-scattering operation using the _parallel bit deposition_ (PDEP r64 r64 r64) instruction, which is included in the BMI2 extension to the x86-64 instruction set (Becker, 2015). The Intel Haswell and AMD Zen 3 microarchitectures both perform bit deposition with a latency of 3 cycles and a reciprocal throughput of 1 cycle, identical to the IMUL instruction. It follows that Morton-like indexing requires--in theory--only a single additional instruction over canonical index calculation. The hardware extension required to perform bit deposition is widely supported: BMI2 has been included in Intel processors starting with the Haswell microarchitecture (2013) (Han, 2015), and in AMD processors starting with the Excavator microarchitecture (2015), albeit in a limited fashion; AMD processors gained full hardware support for these instructions starting with the Zen 3 microarchitecture (2020) (Han, 2015)2. Footnote 2: Pre-Zen 3 processors supported parts of the BMI2 instruction set—the PEXT and PDEP instructions in particular—through emulation in microcode rather than in hardware, making them very slow. In order to further evaluate the competitiveness of Morton-like layouts compared to canonical layouts, we analyze implementations of both indexing schemes over a range of dimensionalities as compiled by GCC 12.3 and clang 15.0 using OSACA 0.5.2 (Han, 2015). All code was compiled using the -O2 optimization flag. The results of this analysis are shown in Figure 3. Over the range of dimensionalities considered, the canonical layouts are consistently faster, i.e. require fewer cycles to compute, than the Morton-like layouts. However, the difference in performance--approximately one cycle--is relatively small and overshadowed by the number of cycles saved due to a reduction in cache misses. Furthermore, we focus primarily on memory-bound applications, in which a small increase in index calculation time is unlikely to affect performance. We conclude, therefore, that Morton-like layouts are competitive with canonical layouts strictly in terms of address computation costs. ### Support for SIMD An important consideration in the design of array layouts is the ability to vectorize kernels through single-instruction multiple-data (SIMD) operations. Canonical layouts guarantee the contiguity of fibers in the array, which facilitates the (automated) vectorization (e.g. the application of SIMD) of many operations, and this benefit is lost when applying the array layouts discussed in this paper. However, we posit that there remains ample opportunity to accelerate computation Figure 3. Throughput of a kernel calculating array indices using canonical layouts as well as Morton-like layouts on the Intel Haswell microarchitecture as given by OSACA.
on Morton-like arrays using SIMD, and we argue this by distinguishing two classes of computation patterns. The first class consists of _unstructured_ patterns in which data is operated on element-wise without spatial context, i.e. without consideration of nearby elements; a prominent example of such an operation is matrix _addition_. In such applications, SIMD can be trivially applied to the underlying one-dimensional memory, regardless of the layout of the data: since elements can be added point-wise in any order, doing so in the order in which the data is laid out in memory is both feasible and enables SIMD. The second class of problems consists of _structured_ patterns in which operations must be performed in a specific order. A prime example of such an operation is matrix _multiplication_ where the inner product of fibers must be computed. In such cases, it is imperative that fibers can be accessed in contiguous blocks. The size of these blocks depends on the vectorization technology used as well as the size of the data type: in the x86 instruction set, SSE vectorisation requires two consecutive double-precision numbers or four consecutive single-precision numbers (Srivastava et al., 2017); the much wider ARM SVE instruction set extension (Srivastava et al., 2017) may require up to thirty-two consecutive double-precision numbers or sixty-four single-precision numbers. In order to facilitate vectorization for structured patterns of computation, we can impose certain constraints on the array layouts we consider. Indeed, if the \(n\) least-significant bits of an interleaving pattern are all drawn from the \(m\)th input index, then the layout guarantees that the mode-\(m\) fibers in the array are contiguous in blocks of \(2^{n}\) elements. This requirement can be incorporated into the selection of array layouts; for example, we can enable efficient AVX2 vectorisation (with a vector width of 256 bits) using single-precision (32-bit) floating point numbers by ensuring that the three least significant bits in an array layout are drawn from the same source. In other words, we can easily constrain our search space to include only array layouts with properties that favor vectorization, and we believe that doing so will enable SIMD-accelerated computation on arrays laid out in Morton-like orders. ## 4. Exploration Through Evolution The canonical set of indexing bijections for laying out multi-dimensional memory is small: for two-dimensional data, there are two possible layouts, and the performance of these layouts can be evaluated using exhaustive benchmarks (Krishnan et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2016). Exhaustively exploring the family of indexing functions outlined in Section 3, however, is impractical owing to the sheer number of permissible permutations. Importantly, the number of canonical layouts increases only with the number of _dimensions_, while the number of Morton-like layouts increases with both the number of dimensions and the _size_ of the array in each of those dimensions. By Equation 4, a small \(4\times 4\) array (indexed by two bits in each dimension) can be laid out in \(\nicefrac{{(2+2)!}}{{2!2!}}=6\) ways. A larger array of size \(4096\times 4096\) (twelve bits in each dimension) can be laid out in \(\nicefrac{{(12+12)!}}{{12!\,12!}}=2\,704\,156\) ways.
A three-dimensional array of size \(256\times 256\times 256\) has the same number of elements as the aforementioned \(4096\times 4096\) array, but permits \(\nicefrac{{(8+8+8)!}}{{8!\,8!\,8!}}=9\,465\,511\,770\) permutations. As these examples indicate, the number of possible permutations quickly scales beyond what can be feasibly explored through exhaustive search; in order to tackle the explosive growth in the design space for Morton-like layouts, we propose the use of genetic algorithms (Section 2.3). ### Genetic Algorithm Configuration In this work, we employ a relatively simple \((\lambda,\mu)\)-ES genetic algorithm (Krishnan et al., 2015; Krizhevsky et al., 2016). The chromosomal representation of array layouts is identical to the characterization given in Section 3.1, and this gives rise to a combinatorial optimization problem. We facilitate the recombination of array layouts into novel layouts using the ordered crossover (OX) operator (Krishnan et al., 2015), and we employ inversion-based mutation (Krizhevsky et al., 2016). Our approach differs from classical genetic algorithms in only one significant way: our initial population is not chosen randomly from the solution space. Instead, the initial populations for our evolutionary experiments always consist of two individuals, depicting two canonical layouts for a given array size, as described in Section 3.2. We choose to do this to ensure that our initial populations are unbiased and deterministic, allowing us to more easily assess the efficacy of our genetic strategy. ### Fitness Function Design There are two general strategies for evaluating the performance, i.e. fitness, of a given array layout under a given cache hierarchy and access pattern: measurement and simulation. In order to assess fitness through _measurement_, we execute a program on actual hardware and measure the running time of the process. Although such a fitness function is conceptually simple, it suffers from two primary flaws: 1. measurements are noisy and may suffer from run-to-run variance, which may hinder the performance of genetic algorithms (Srivastava et al., 2016)--in particular, our genetic algorithm is vulnerable to noise stemming from cache pollution effects; and 2. measurements require access to the target hardware, which may be inconvenient or even impossible--for example, in hardware-software co-design scenarios, where the hardware under consideration does not (yet) exist. For these reasons, we choose not to base our fitness function on measurements. Instead, we employ _simulation_, for which we need a simulator that can accurately compare the cache performance for different access-patterns on the same cache hierarchy. For this, we selected pycachesim, a component of the Kerncraft toolkit (Kerncraft, 2017). We use pycachesim by simulating an access pattern such as matrix multiplication and registering the relevant trace of load and store operations. After all accesses have been recorded, we force a write-back of the caches and collect the number of hits and misses in each cache level. We combine the number of hits in every cache level as well as in main memory with the latency of retrieving data from each of these levels to compute the total number of cycles spent retrieving data from the cache hierarchy.
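A sketch of this simulation step is shown below. It follows the basic pycachesim usage (CacheSimulator, Cache, MainMemory, and force_write_back) as documented by the project; the cache parameters and the toy access trace are illustrative placeholders, and the exact API may differ between versions.

```python
from cachesim import CacheSimulator, Cache, MainMemory

# Illustrative three-level hierarchy: Cache(name, sets, ways, cacheline_bytes, policy).
mem = MainMemory()
l3 = Cache("L3", 20480, 16, 64, "LRU")
mem.load_to(l3)
mem.store_from(l3)
l2 = Cache("L2", 512, 8, 64, "LRU", store_to=l3, load_from=l3)
l1 = Cache("L1", 64, 8, 64, "LRU", store_to=l2, load_from=l2)
simulator = CacheSimulator(l1, mem)

# Replay a recorded trace of (is_store, byte_address, length) tuples, as produced by
# running an access pattern (e.g. matrix multiplication) over a candidate layout.
trace = [(False, 0, 4), (False, 64, 4), (True, 128, 4)]  # placeholder trace
for is_store, address, length in trace:
    if is_store:
        simulator.store(address, length=length)
    else:
        simulator.load(address, length=length)

simulator.force_write_back()
simulator.print_stats()  # per-level hit/miss counts feed Equations 6 and 7 below
```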
Given an array layout \(I\), an access pattern \(A\) and a simulated cache hierarchy \(H\), we calculate the total number of cycles using the following equation, in which \(\mathrm{L}i_{\mathrm{hit}}\), \(\mathrm{L}i_{\mathrm{miss}}\), and \(\mathrm{L}i_{\mathrm{lat}}\) represent the number of hits, the number of misses, and the latency of the \(i\)th cache level, and \(M\) represents main memory: \[C(I;A,H)=\mathrm{M_{hit}}(I;A,H)\,\mathrm{M_{lat}}(H)+\sum_{i}\mathrm{L}i_{\mathrm{hit}}(I;A,H)\,\mathrm{L}i_{\mathrm{lat}}(H) \tag{6}\] From this, we compute an approximation of the number of accesses performed per cycle, giving rise to a higher-is-better fitness function defined as follows: \[F(I;A,H)=\frac{\mathrm{L1_{hit}}(I;A,H)+\mathrm{L1_{miss}}(I;A,H)}{\mathrm{L1_{ lat}}(H)\cdot C(I;A,H)} \tag{7}\] Intuitively, the numerator in Equation 7 counts the total number of memory accesses, as all accesses either hit or miss in L1. The denominator, then, estimates the total number of cycles spent retrieving data from the various cache levels. The denominator is multiplied by a normalizing factor equal to the latency of the L1 cache; it follows from Equation 6 that the achievable performance is softly bound by the reciprocal of the L1 access latency. Indeed, this performance is achieved if and only if all accesses hit the L1 cache. Normalizing the fitness function using the L1 cache latency improves our ability to compare results between different cache hierarchies. ## 5. Evaluation We evaluate the efficacy of the methods hitherto discussed by demonstrating that 1. our fitness function is well-chosen, i.e. that it correlates with performance measurements in real hardware; that 2. our evolutionary process is capable of finding novel array layouts with favorable cache properties; and that 3. the layouts which are found by our evolutionary process actually lead to relevant performance gains in real hardware. Our validation is based on eight distinct access patterns and two processors with distinct cache hierarchies. ### Experimental Setup We consider a set of eight access patterns loosely based on the selection of algorithms used by Thiyagalingam et al. (Thiyagalingam et al., 2017). The access patterns were picked to represent common real-world applications (dense linear algebra and fluid dynamics), to represent both two-dimensional and three-dimensional applications, and to differ in critical properties such as memory size and number of loads and stores. A description of the access patterns we consider in this paper is given in Table 1. All our access patterns are described using C++ code--see the example in Listing 1--which ensures high performance as opposed to the Python code used for our evolutionary processes; the interaction between the C++ and Python components of our project is managed using pybind11 (Prych, 2017). We use template meta-programming to generalize our access patterns in such a way that a single definition can be used for both simulation and benchmarking without loss of performance due to run-time polymorphism; this eliminates any possible discrepancies between the code used for simulation and the code used for measurement. Listing 2. Two examples of cache specifications for different CPU models. Note that these configurations are approximations of the true cache hierarchies.
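Given per-level hit and miss counts such as those collected above, Equations 6 and 7 reduce to a few lines of arithmetic; the counts and latencies in the example call are placeholders rather than the values used for either processor.

```python
from typing import Sequence, Tuple

def fitness(levels: Sequence[Tuple[int, int, int]],
            mem_hits: int, mem_latency: int) -> float:
    """levels[i] = (hits, misses, latency) of cache level i, ordered L1 first.

    Implements Equation 6 (total cycles C) and Equation 7 (normalized accesses per cycle).
    """
    cycles = mem_hits * mem_latency
    for hits, _misses, latency in levels:
        cycles += hits * latency
    l1_hits, l1_misses, l1_latency = levels[0]
    return (l1_hits + l1_misses) / (l1_latency * cycles)

# Placeholder counts (hits, misses, latency) for L1, L2, L3 plus main memory:
print(fitness(levels=[(9000, 1000, 4), (800, 200, 12), (150, 50, 40)],
              mem_hits=50, mem_latency=200))
```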
We conduct our experiments on two different CPUs: the Intel Xeon E5-2660 v3 (Nicholas and Sack, 2018) based on the Haswell microarchitecture (Han et al., 2017), and the AMD EPYC 7413 (Berg et al., 2016) based on the Zen 3 microarchitecture (Krishnan et al., 2017). When we perform experiments on non-simulated Haswell processors we use the DAS-6 cluster (Berg et al., 2016), whereas we use a machine located at CERN for experiments on Zen 3 processors. When we perform experiments based on simulation, we use the DAS-6 cluster (Berg et al., 2016) and configure our cache simulator according to the cache configurations shown in Listing 2a for the Haswell processor, and Listing 2b for the Zen 3 processor. Note that the cache configurations are based on the accessibility of caches from a single core. This is especially relevant for the L3 cache on the Zen 3 chip, which is shared across groups of cores rather than the entire CPU: in the case of the AMD EPYC 7413, the CPU comes equipped with 128 MiB of L3 cache, but only 32 MiB is accessible from any single core (Krishnan et al., 2017). We simplify the cache replacement policies of the actual hardware by assuming LRU caches; in reality, the Haswell caches employ policies consistent with tree-PLRU for the L1 and L2 caches (Berg et al., 2016; Krishnan et al., 2017), while the L3 cache is consistent with a set-dueling-controlled adaptive insertion policy (Berg et al., 2016; Krishnan et al., 2017). Cache sizes were gathered from specification documents (Berg et al., 2016; Han et al., 2017), while cache latencies were obtained optimistically from sources on the fastest load-to-use latencies (Berg et al., 2016; Krishnan et al., 2017). The Zen 3 L1 cache has a fastest load-to-use latency of four cycles for integers and seven cycles for floating point values (Berg et al., 2016)--we use the latter in our simulations. Finally, we assume a constant 200-cycle access latency for main memory in both systems. ### Fitness Function Validation The fitness function we use in our evolutionary process (Section 4.2) is based on simulation results because simulation yields significant benefits over empirical measurements, primarily in terms of determinism and in the ability to simulate future hardware. However, this strategy is not without risk: the simulation we perform is based on a non-cycle-accurate simulator, uses simplified cache hierarchies, and ignores computation entirely. Consequently, we must evaluate the usefulness of our fitness function by establishing its correlation with execution time in real hardware. Ideally, the running time of a kernel using a given array layout would correlate inversely linearly with our fitness function, thereby ensuring two important properties. Firstly and most importantly, it guarantees that running time decreases monotonically with the value of the fitness function, such that an array layout with a higher fitness value is guaranteed to run more quickly; this allows us to establish a ranking of layouts and enables us to reliably select the best-performing array layout. Secondly, linear correlation guarantees proportionality between fitness and running time, which facilitates the weighted selection of individuals. To evaluate the degree to which the aforementioned criteria are met, we randomly select one hundred array layouts for each of the eight access patterns given in Table 1. We then evaluate the simulated fitness and measure the running time in real hardware of each pair of array layout and access pattern.
The fitness functions of the pairs are calculated in parallel, as they are designed to be deterministic and impervious to cache pollution or resource contention. The empirical benchmarks are performed sequentially, ensuring that the benchmark is the sole user of the processor caches. All measurements are repeated ten times, and we report the mean and standard deviation of the running time. The results of this experiment are shown in Figure 4. The coefficient of variation of the measurements never exceeded a value of \(c_{\nu}=0.0801\). Accordingly, we have opted to omit error bars from the figure. Upon visual inspection, it is clear that the correlation between our fitness function and running time is not linear, although the two do appear correlated. We confirm our suspicions of correlation by computing Pearson's coefficient of correlation (\(\rho_{p}\)) and Spearman's coefficient of rank correlation (\(\rho_{s}\)); the resulting statistics are given in Table 2. We observe that our fitness function correlates moderately to strongly with running time for the Intel Xeon E5-2660 v3 processor, although the correlation is weaker for the AMD EPYC 7413 processor. Although it is clear that there is space for the fitness function to be improved, we believe that it correlates sufficiently with running time to enable its use in genetic algorithms. \begin{table} \begin{tabular}{l l c c c} \hline \hline Access pattern & Description & Mem. size & Loads & Stores \\ \hline MMijk\((m;s)\) & Multiplication of two \(2^{m}\times 2^{m}\) matrices, both of \(s\)-byte real numbers. & \(3\cdot s\cdot 2^{2m}\) B & \(2\cdot 2^{3m}\) & \(2^{2m}\) \\ MMTijk\((m,n;s)\) & Multiplication of a \(2^{m}\times 2^{m}\) matrix by a transposed \(2^{m}\times 2^{n}\) matrix. & & & \\ \multicolumn{5}{c}{\(\vdots\)} \\ \hline \hline \end{tabular} \end{table} Table 1. Description of the access patterns we consider in this paper.
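For reference, the correlation statistics reported in Table 2 can be computed from the paired fitness and running-time samples with a few lines of SciPy; the arrays below are synthetic stand-ins for such samples, not our measured data.

```python
# Sketch: Pearson and Spearman correlation between simulated fitness and measured
# running time. The two arrays are randomly generated stand-ins for illustration.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
fitness = rng.uniform(0.01, 0.05, size=100)               # stand-in: simulated fitness per layout
runtime = 1.0 / fitness + rng.normal(0.0, 2.0, size=100)  # stand-in: mean measured running time

rho_p, _ = pearsonr(fitness, runtime)      # linear correlation
rho_s, _ = spearmanr(fitness, runtime)     # rank (monotonicity) correlation
print(f"Pearson rho_p = {rho_p:.3f}, Spearman rho_s = {rho_s:.3f}")
```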
### Genetic Algorithm Performance To evaluate
our evolutionary process (Section 4) as a whole, we intend to verify that it can, indeed, find Morton-like array layouts that have a higher simulated fitness than the canonical layouts. To this end, we perform the evolutionary process for each combination of our two simulated processors and eight access patterns, giving rise to a total of sixteen experiments. For all of these experiments, we configure our genetic algorithm to use \(\mu=20\), \(\lambda=20\), and a mutation rate of 25%. We simulate a total of 20 generations in each case. Figure 5 shows a violin plot of the fitness distribution of all individuals considered during the evolutionary process. Figure 6 shows the evolution of population fitness over the course of our experiments. We notice that for the MMTijk, MMikj, Jacobi2D, and Himeno access patterns, our method does not manage to discover any layouts with higher fitness than the initial population of canonical layouts. In the experiment on the MMijk access pattern, we discover layouts with a fitness 149.8% higher than the canonical layouts on the Intel Xeon E5-2660 v3 processor, and we improve on the fitness of canonical layouts by 187.5% for the AMD EPYC 7413. We also find layouts with improved fitness for the MMTikj (109.6% and 141.1% for the Intel and AMD processors, respectively), Cholesky (26.4% and 36.8%), and Crout (545.9% and 541.1%) access patterns. It is notable that we are able to find layouts with high fitness in only a few generations. \begin{table} \begin{tabular}{l r r r r} \hline \hline & \multicolumn{2}{c}{Intel E5-2660 v3} & \multicolumn{2}{c}{AMD EPYC 7413} \\ \cline{2-5} Access pattern & \(\rho_{p}\) & \(\rho_{s}\) & \(\rho_{p}\) & \(\rho_{s}\) \\ \hline MMijk\((9;4)\) & \(-\)0.672 & \(-\)0.480 & \(-\)0.648 & \(-\)0.489 \\ MMTijk\((9,9;4)\) & \(-\)0.810 & \(-\)0.896 & \(-\)0.863 & \(-\)0.823 \\ MMikj\((9;4)\) & \(-\)0.845 & \(-\)0.815 & \(-\)0.800 & \(-\)0.838 \\ MMTikj\((9,9;4)\) & \(-\)0.777 & \(-\)0.744 & \(-\)0.291 & \(-\)0.405 \\ Jacobi2D\((13,13;4)\) & \(-\)0.760 & \(-\)0.769 & \(-\)0.390 & \(-\)0.428 \\ Cholesky\((10;4)\) & \(-\)0.827 & \(-\)0.953 & \(-\)0.725 & \(-\)0.892 \\ Crout\((9;4)\) & \(-\)0.846 & \(-\)0.663 & \(-\)0.213 & \(-\)0.704 \\ Himeno\((8,7,7;4)\) & \(-\)0.607 & \(-\)0.475 & \(-\)0.561 & \(-\)0.496 \\ \hline \hline \end{tabular} \end{table} Table 2. Pearson’s coefficient of correlation (\(\rho_{p}\)) and Spearman’s coefficient of rank correlation (\(\rho_{s}\)) between our simulation-based fitness function and true running time. Figure 4. Scatter plot of the fitness and measured running time on an Intel Xeon E5-2660 v3 CPU and AMD EPYC 7413 for randomly chosen array layouts. Figure 5. Distribution of the fitness for all individuals in eight evolution experiments. Figure 6. Range of fitness values across eight experiments for the Intel Xeon E5-2660 v3 (blue) and AMD EPYC 7413 (red). Mean fitness values are given by the dashed lines. ### Real-World Performance In order to evaluate whether the layouts identified by our evolutionary algorithms as superior to canonical layouts are indeed better, we evaluate them on real hardware. We collect the fittest individual from each of the successful evolution experiments, i.e. experiments in which our method improved upon canonical layouts, and evaluate the performance of those layouts compared to the canonical layouts on real hardware.
Given that our genetic algorithm discovered superior layouts for four access patterns--MMijk, MMTikj, Cholesky, and Crout--and that we evaluate a discovered layout and two canonical layouts for each access pattern, this gives rise to twenty-four experiments. We repeat each experiment ten times to compensate for run-to-run variance. The results of our experiments are shown in Table 3; they show that some access patterns--the Cholesky pattern in particular--benefit very little from our method, with speed-ups ranging from small on the Haswell processor to insignificant on the Zen 3 processor. The matrix multiplication access patterns benefit more, and performance for these access patterns is improved significantly. The Crout access pattern stands out as achieving very large speedup--up to a factor of ten--from our method. It is worth noting that, in most cases, the Zen 3 processor benefits more from our evolutionary methodology than the Haswell processor; we do not currently have a satisfactory explanation for this behavior. It is important to note that we do not claim to have discovered a novel way of performing matrix multiplication or matrix decomposition that outperforms existing implementations. Indeed, our experiments are based on relatively naive implementations of these algorithms; high-performance implementations of matrix multiplication commonly rely on tiling to significantly improve the cache behavior of the application (Mohammad et al., 2017), and the performance of tiled matrix multiplication surpasses what we achieve in this paper. The purpose of the methodology described in this paper, rather, is to provide an alternative way of improving the cache behavior of an application in a manner which is fully agnostic of the application: unlike tiling and other application-specific optimizations, our methodology of altering the array layouts can be applied to any multi-dimensional problem without the need for application-specific knowledge. In addition, our approach requires few code changes, making it easy to implement. ## 6. Limitations and Threats to Validity Throughout this work, we evaluate cache efficacy through a simplified lens which may reduce the applicability of our methods in more complex, real-world applications. Indeed, we consider accesses to memory in isolation, decoupled from computation and cache-polluting effects. We assume single-threaded execution without scheduling, which means that our caches will not be polluted by processes sharing (parts of) the cache hierarchy, nor will the application have its cached data evicted due to context switching. We also assume scalar, in-order execution of memory accesses. Finally, we take an optimistic view of cache latencies, using the fastest load-to-use latencies provided by hardware manufacturers; in real-world scenarios, cache latencies may be both more pessimistic and less stable than we assume. The results shown in Section 5.4 indicate, however, that our fitness function is sufficiently accurate to be effective in real hardware. In addition, the family of array layouts described in this work requires array sizes to be powers of two in each dimension. In applications where this is not the case, arrays must be over-allocated. For \(n\)-dimensional applications, using the layouts described in this paper requires over-allocation by a factor of \(\mathcal{O}(2^{n})\).
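To illustrate the cost of this requirement, the following small sketch (ours, not part of the paper's tooling) pads an arbitrary array shape up to powers of two and reports the resulting over-allocation factor.

```python
# Sketch: round each dimension up to the next power of two and compute how much
# extra memory a Morton-like layout would require for that shape.
def next_pow2(n: int) -> int:
    return 1 << (n - 1).bit_length()

def overallocation(shape):
    padded = [next_pow2(n) for n in shape]
    factor = 1.0
    for n, p in zip(shape, padded):
        factor *= p / n
    return padded, factor

print(overallocation((1000, 1000)))      # ((1024, 1024), ~1.05): mild 2-D overhead
print(overallocation((513, 513, 513)))   # ((1024, 1024, 1024), ~7.95): near the 2^n worst case
```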
Furthermore, applications using such layouts must consider the use of SIMD vectorization: it remains an open question which operations on arrays laid out in non-standard ways can be (automatically) vectorized. We have argued for the feasibility of SIMD in Morton-like arrays in Section 3.4. Finally, our work considers only multiset permutations, in which the rank significance of bits in the input indices is preserved. This decision is based on current commodity hardware, which is capable of efficiently permuting bits only under this condition. There exists an even larger family of layouts in which rank bit significance is not preserved; such layouts could be of practical use in theoretical future processors with more advanced bit manipulation instructions, or in current FPGA and ASIC devices which permit the implementation of custom bit manipulation operations. Although we have not tested our approach on this further generalization, \begin{table} \begin{tabular}{l c c c} \hline \hline Access pattern & Best can. & Best evo. & Speedup \\ \hline Intel Xeon E5-2660 v3 & & & \\ MMijk\((11;4)\) & 17.84 s & 10.94 s & 63.1\% \\ MMTikj\((11,11;4)\) & 18.13 s & 13.96 s & 29.9\% \\ Cholesky\((12;4)\) & 11.84 s & 11.43 s & 3.6\% \\ Crout\((12;4)\) & 158.54 s & 43.72 s & 262.6\% \\ AMD EPYC 7413 & & & \\ MMijk\((11;4)\) & 37.71 s & 9.58 s & 293.8\% \\ MMTikj\((11,11;4)\) & 32.35 s & 15.21 s & 112.6\% \\ Cholesky\((12;4)\) & 9.72 s & 9.55 s & 1.0\% \\ Crout\((12;4)\) & 232.84 s & 21.03 s & 1007.0\% \\ \hline \hline \end{tabular} \end{table} Table 3. Comparison of running time between the best-performing canonical layout and the best-performing layout found by our evolutionary process for four access patterns. we are confident that an evolutionary approach like the one presented in this paper could be beneficial in exploring this (even larger) design space. ## 7. Conclusions and Future Work In this paper, we have discussed a generalization of the Morton layout for multi-dimensional data and we have shown that there exist families of array layouts with strongly varying cache behavior which, in turn, impact the performance of applications. We have shown how these layouts can be systematically described, and that the number of possible layouts quickly exceeds the limits of what can be feasibly explored using exhaustive search. We have proposed a method based on evolutionary algorithms for the exploration of the design space of such layouts. We have evaluated the fitness of different array layouts using cache simulation and we have presented results indicating that our fitness function correlates with real-world performance. Furthermore, we have shown that the methodology described in this paper can be used to improve the performance of applications on real hardware by up to ten times. In the future, we intend to investigate the use of multi-objective optimization using NSGA-II (Kastase et al., 2017) in order to find array layouts that provide favorable cache behavior across multiple applications. We also intend to explore more advanced genetic algorithms which are known to perform well in combinatorial problems, such as RKGA (Kastase et al., 2017) and BRKGA (Kastase et al., 2018). It is our belief that exploring more evolutionary strategies will give us more insight into the convergence properties of various methods, and allow us to select the most efficient one.
Although our fitness function correlates with real-world performance, the correlation is not perfect; we believe that the efficacy of our method could be improved through the development of more advanced fitness functions, perhaps through the use of machine learning methods. In particular, we believe that the field of metric learning may enable us to develop more accurate fitness functions, and we aim to explore this avenue of research in the future. Finally, we aim to expand our research to a broader range of access patterns and hardware, including graphics processing units (GPUs). ## Acknowledgments The work presented in this paper was done in the context of the CERN Doctoral Student Programme. Many of the experimental results shown in this paper were gathered on the Advanced School for Computing and Imaging (ASCI) DAS-6 compute cluster (Kastase et al., 2017).
2309.11660
Counting Rotational Sets for Laminations of the Unit Disk from First Principles
By studying laminations of the unit disk, we can gain insight into the structure of Julia sets of polynomials and their dynamics in the complex plane. The polynomials of a given degree, $d$, have a parameter space. The hyperbolic components of such parameter spaces are in correspondence to rotational polygons, or classes of "rotational sets", which we study in this paper. By studying the count of such rotational sets, and therefore the underlying structure behind these rotational sets and polygons, we can gain insight into the interrelationship among hyperbolic components of the parameter space of these polynomials. These rotational sets are created by uniting rotational orbits, as we define in this paper. The number of such sets for a given degree $d$, rotation number $\frac pq$, and cardinality $k$ can be determined by analyzing the potential placements of pre-images of zero on the unit circle with respect to the rotational set under the $d$-tupling map. We obtain a closed-form formula for the count. Though this count is already known based upon some sophisticated results, our count is based upon elementary geometric and combinatorial principles, and provides an intuitive explanation.
John C. Mayer, Michael J. Moorman, Gabriel B. Quijano, Matthew C. Williams
2023-09-20T21:54:58Z
http://arxiv.org/abs/2309.11660v2
# Counting rotational sets for laminations of the unit disk from first principles ###### Abstract. By studying laminations of the unit disk, we can gain insight into the structure of Julia sets of polynomials and their dynamics in the complex plane. The polynomials of a given degree, \(d\), have a parameter space. The hyperbolic components of such parameter spaces are in correspondence to rotational polygons, or classes of "rotational sets", which we study in this paper. By studying the count of such rotational sets, and therefore the underlying structure behind these rotational sets and polygons, we can gain insight into the interrelationship among hyperbolic components of the parameter space of these polynomials. These rotational sets are created by uniting rotational orbits, as we define in this paper. The number of such sets for a given degree \(d\), rotation number \(\frac{p}{q}\), and cardinality \(k\) can be determined by analyzing the potential placements of pre-images of zero on the unit circle with respect to the rotational set under the \(d\)-tupling map. We obtain a closed-form formula for the count. Though this count is already known based upon some sophisticated results, our count is based upon elementary geometric and combinatorial principles, and provides an intuitive explanation. ## 1. Introduction ### Motivation What are "rotational sets" for laminations for the unit disk under the action of the angle \(d\)-tupling map, and why count them? Laminations are a topological and combinatorial model of the connected Julia sets of polynomials considered as functions of the complex numbers, modeled by the plane. Such models are used both to understand specific types of polynomials and their Julia sets, and to study the parameter spaces of polynomials. For example, the well-known Mandelbrot set [12] is the parameter space of quadratic polynomials of the form \(P_{c}(z)=z^{2}+c\) with parameter \(c\) with connected Julia set. The so-called hyperbolic components of that parameter space are of interest, including how they are connected to each other, how they are arranged in the Mandelbrot set, and how many components there are that are associated with attractive orbits (of the associated polynomials) of a given period, rotation number, and the like. These terms are defined below. Our research is concerned with polynomials of higher degree (\(d>2\)), about which much less is currently understood. Laminations are composed of _leaves_ (chords of the unit circle) which form a closed collection of non-crossing segments that are forward and backward invariant under a natural extension of the _angle \(d\)-tupling map_ (the angular part or argument of the complex power function \(z\mapsto z^{d}\), where \(z=re^{2\pi it}\) and the argument is the exponent of \(e\)) on the unit circle. Leaves connecting points of a rotational set in circular order form polygons in the lamination. There is a correspondence between rotational polygons in laminations and fixed points of polynomials that have a non-zero infinitesimal rotation number (determined by the argument of the derivative of the polynomial at the fixed point). Such polygons are in correspondence to a fundamental class of hyperbolic components of the parameter space of degree \(d\) polynomials with connected Julia set. 
For example, in the Mandelbrot set for the hyperbolic component marked star in Figure 1, all the Julia sets have a (repelling) fixed point (the marked point in the Julia set) which is represented in the lamination for that Julia set by a rotational triangle (marked star). The Julia set is actually the boundary of the shaded blue region, which contains all the points running off to infinity under iteration of the polynomial. The white regions in the lamination correspond to the black regions in the "filled-in" Julia set. The filled-in Julia set consists of all points whose orbits under iteration of the polynomial are bounded. Figure 1. Hyperbolic component of the Mandelbrot set (marked star) with repelling fixed point (marked disk) in Julia set of \(P(z)=z^{2}+(-0.117+0.743i)\) modeled by a rotational triangle (marked star) in the lamination for that Julia set. "Counting... from First Principles" in our title indicates that we will use the most fundamental geometric and combinatorial properties of the angle \(d\)-tupling map to make the count. By studying laminations in the abstract without reference to a particular polynomial or Julia set, we aim to reverse the process by which a Julia set leads to a lamination. By understanding what is possible for laminations, we can constrain what is possible for locally connected Julia sets. Our main result is Theorem 21. For a preview of where that theorem takes us with Julia sets, skip ahead to view Figure 5. ### Orbits, the Angle \(d\)-tupling Map, and Itineraries Most of the following definitions are adapted from [1] and [2]. Elementary proofs of some propositions are left to the reader. **Definition 1**.: Let \(f:X\to X\) be a function. By \(f^{q}(x)\) we denote the composition \(f(f(\dots f(x)\dots))\), where \(f\) is composed with itself \(q\) times. By convention, \(f^{0}(x)=x\). **Definition 2**.: Given a point \(x\in X\), the _orbit_ of \(x\) is the set \(\mathcal{O}=\{f^{n}(x)\}_{n=0}^{\infty}\). We denote the unit circle in the Cartesian plane by \(S^{1}=\{(x,y)\mid x^{2}+y^{2}=1\}\). We will describe points in \(S^{1}\) by their central angle and we will measure angles in revolutions rather than radians or degrees.
We denote the set of _pre-images_ under \(\sigma_{d}^{q}\) of a point \(x\) by \(\sigma_{d}^{-q}(x)=\{y\in S^{1}\mid\sigma_{d}^{q}(y)=x\}\). If there is a positive integer \(q\) such that \(\sigma_{d}^{q}(x)=x\), then the orbit of \(x\) is finite, and we say it is a _periodic orbit_ and \(x\) is a _periodic point_. If \(q\) is the least positive integer for which \(\sigma_{d}^{q}(x)=x\), then we say \(q\) is the _period_ of the point (respectively, orbit). The set of points visited on those \(q\) iterations makes up the orbit \(\mathcal{O}\) for that given point. If this \(q\) exists, then for the set \(\mathcal{O}=\{\sigma_{d}^{n}(x)\mid 0\leq n<q\}\) it is true that \(\sigma_{d}:\mathcal{O}\to\mathcal{O}\) and \(\sigma_{d}(\mathcal{O})=\mathcal{O}\). **Proposition 4**.: _For a given degree \(d\), the pre-images of \(0\) are_ \[\sigma_{d}^{-1}(0)=\left\{0,\frac{1}{d},\frac{2}{d},\ldots,\frac{d-1}{d}\right\} \tag{1}\] _or when written in base \(d\) expansion_ \[\sigma_{d}^{-1}(\_0)=\{\_0,1\_0,2\_0,\ldots,(d-1)\_0\} \tag{2}\] These pre-images serve as the border between neighboring intervals of length \(\frac{1}{d}\) in \(S^{1}\). **Definition 5**.: Fix \(d>1\). Define intervals \(I_{0}=[0,\frac{1}{d})\), and in general \(I_{j}=[\frac{j}{d},\frac{j+1}{d})\) for \(1\leq j\leq d-1\). Recall \(0\) and \(1\) are identical in \(S^{1}\). Then \(S^{1}\) is the disjoint union \(\bigcup_{j=0}^{d-1}I_{j}\). For instance, the smallest non-zero pre-image of \(0\), \(\frac{1}{d}\) (equivalently, \(1\_0\)), is the border between \(I_{0}\) and \(I_{1}\). Note that each interval \(I_{j}\) maps one-to-one in counterclockwise order from \(\_0\) onto \(S^{1}\). Consequently, within each \(I_{j}\) there is a preimage of every \(I_{k}\) consecutively in order, but of length \(\frac{1}{d^{2}}\). With these tools in hand, we can now express orbits in a much more useful manner. ### Itineraries We have given the points in \(S^{1}\) when considered under the map \(\sigma_{d}\) their base \(d\) expansion. This will allow us to describe the orbit of a point in the circle under \(\sigma_{d}\) in terms of the visits of its orbit to the distinguished intervals in Definition 5. The _itinerary_ of a point \(s\in S^{1}\) is the ordered list of its visits in its orbit to the distinguished intervals. The proof of the following Proposition is left to the reader. **Proposition 6**.: _Under \(\sigma_{d}\), the itinerary of a point is exactly its base \(d\) expansion. Consequently, two points \(s\) and \(t\) in \(S^{1}\) under \(\sigma_{d}\) have the same itinerary if, and only if, \(s=t\)._ Now, the points of an orbit are defined by their placement relative to pre-images of zero. The utility in this is found in how orbits can be defined by the location of pre-images relative to the points of the orbit. This will be essential to counting rotational sets from first principles. Because any given itinerary for a point in a periodic orbit can be shifted to find the itineraries of every other point in that orbit, periodic orbits can be clearly referenced with the itinerary of any given point they contain. For consistency we will define orbits to have the same itinerary as their smallest (compared to _0) point. **Definition 7**.: Let \(\mathcal{O}=\{0\leq x_{1}<x_{2}<x_{3}<\cdots<x_{q}<1\}\) be a periodic orbit. Define \(Itin(\mathcal{O})=Itin(x_{1})\). **Definition 8**.: A _gap_ is a complementary interval in \(S^{1}\setminus\mathcal{O}\).
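As a computational aside (ours, not part of the original development), exact rational arithmetic makes it straightforward to generate orbits of \(\sigma_{d}\) and read off their itineraries, which may help the reader experiment with the examples in the following sections.

```python
# Sketch: orbits and itineraries of the angle d-tupling map, using exact fractions.
from fractions import Fraction

def sigma(d, x):
    """The angle d-tupling map on the circle [0, 1)."""
    return (d * x) % 1

def orbit(d, x, max_len=64):
    """Forward orbit of x under sigma_d, stopping when a point repeats."""
    pts, seen = [], set()
    while x not in seen and len(pts) < max_len:
        seen.add(x)
        pts.append(x)
        x = sigma(d, x)
    return pts

def itinerary(d, x, length):
    """First `length` digits of the base-d expansion of x, i.e. its visits to I_0, ..., I_{d-1}."""
    digits = []
    for _ in range(length):
        digits.append(int(d * x))   # index j of the interval I_j containing x
        x = sigma(d, x)
    return digits

x = Fraction(5, 26)                 # the point _012 for d = 3 (period 3 under sigma_3)
print(orbit(3, x))                  # [5/26, 15/26, 19/26]
print(itinerary(3, x, 6))           # [0, 1, 2, 0, 1, 2]
```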
### Spatial and Temporal Order of Orbits The count of rotational sets is fundamentally based upon the comparison of spatial and temporal orders of points in an orbit with reference to the pre-images of \(0\). See Figure 3. Figure 3. Spatial ordering left; Temporal ordering right. **Definition 9**.: _Spatial order_ refers to the ordering of points in an orbit by value, from smallest to largest in \([0,1)\). In terms of a map onto \(S^{1}\), this order is given by starting at _0 and following our points counterclockwise. **Definition 10**.: _Temporal order_ orders the points in an orbit by starting with the lowest valued point in our orbit (or closest to _0 spatially, measuring from _0 counterclockwise) and applying \(\sigma_{d}\) repeatedly. So, temporal ordering is based on how our itinerary is followed by repeated applications of \(\sigma_{d}\). ## 2. Rotational Sets Consider \(\sigma_{d}:S^{1}\to S^{1}\) for a particular \(d>1\). Figure 2. See if you can identify this orbit for \(\sigma_{4}\) based on the position of its points relative to pre-images of \(0\). **Definition 11**.: Let \(\mathcal{O}=\{x_{1}<x_{2}<x_{3}<\cdots<x_{q}\}\) be a periodic orbit under \(\sigma_{d}\). If, and only if, there exists a \(p\in\mathbb{Z}^{+}\) such that for all \(i\in\{1,2,\ldots,q\}\), \(\sigma_{d}(x_{i})=x_{i+p\pmod{q}}\), then we say that \(\mathcal{O}\) is a _rotational_ periodic orbit with _rotation number_\(\frac{p}{q}\) (in lowest terms). _Remark 12_.: Note that the numerator \(p\) of the rotation number \(p/q\) of a rotational periodic orbit is sufficient to determine the temporal order of its points. Along with how many points of the orbit are in each interval, this is enough to determine its itinerary. It follows from Proposition 6 that if, and only if, two rotational periodic orbits have the same \(p/q\) and each of their corresponding points lies in the same interval, then they have the same itinerary and are the same orbit. As for the practical generation of this itinerary, it can be found by reading off the interval of each point, starting with \(x_{1}\) and "jumping" forward \(p\) points counterclockwise along \(S^{1}\), repeating until returning to that initial point. Consider what \(p\) is in the rotational orbit in Figure 3. A consequence of how the rotation number of an orbit can describe the forward orbits of its points is that it can also describe their pre-images. Since \(0\) lies between \(x_{1}\) and \(x_{q}\), at least one pre-image of \(0\) must lie between \(x_{q-p}\in\sigma_{d}^{-1}(x_{q})\) and \(x_{1+q-p}\in\sigma_{d}^{-1}(x_{1})\). We call such pre-images the _Principal Pre-image_ of their respective orbits. **Definition 13**.: Let the _principal preimage_ for a rotational orbit \(\mathcal{O}\) be a pre-image of \(0\) lying between \(x_{q-p}\in\sigma_{d}^{-1}(x_{q})\) and \(x_{1+q-p}\in\sigma_{d}^{-1}(x_{1})\). The reader can verify that there must always be a principal preimage. ### Rotational Sets Containing Multiple Orbits Not only can points rotate together while maintaining order, but so too can multiple orbits together, forming a rotational set. **Definition 14**.: Let \(\mathcal{P}=\{x_{i}\mid 0\leq x_{1}<x_{2}<x_{3}<\cdots<x_{qk}<1\}\) be a finite set in consecutive counterclockwise order in \(S^{1}\). We say \(\mathcal{P}\) is a _rotational set_ containing \(k\) orbits with rotation number \(\frac{p}{q}\) for \(\sigma_{d}\) if, and only if, 1. \(\sigma_{d}(\mathcal{P})=\mathcal{P}\) 2.
\(x_{i},x_{i+1},\ldots,x_{i+k-1\pmod{qk}}\) for \(i\in[1,2,\ldots,qk]\) are in different orbits 3. \(x_{i}\) moves to \(x_{i+pk\pmod{qk}}\) **Definition 15**.: Let \(G_{i}=\{x_{(i-1)k+1},x_{(i-1)k+2},\ldots,x_{ik}\}\) where \(i\in[1,2,\ldots,q]\). We say \(G_{i}\) is the \(i\)th _group_ and \(G_{i}\) is the set of the \(i\)th points spatially of each orbit. _Remark 16_.: Items (2) and (3) of Definition 14 also show that \(G_{i}\) moves together preserving spatial order to \(G_{i+p\pmod{q}}\). **Definition 17**.: Let a _principal preimage_ for a rotational set \(\mathcal{P}\) be the pre-image of \(\_0\) that lies between \(G_{q-p}\subset\sigma_{d}^{-1}(G_{q})\) and \(G_{q-p+1}\subset\sigma_{d}^{-1}(G_{1})\). _Remark 18_.: For the principal pre-image of a set, it must lie between \(x_{(q-p)k}\in\sigma_{d}^{-1}(x_{qk})\) and \(x_{1+(q-p)k}\in\sigma_{d}^{-1}(x_{1})\). \(x_{1+(q-p)k}\) is the smallest point in group \(G_{q-p+1}\) and \(x_{(q-p)k}\) is the largest point in group \(G_{q-p}\), therefore the principal pre-image lies between those two groups. In order to aid with the identification and counting of rotational sets, we need to differentiate gaps between points that are within groups from those outside groups. This distinction is necessary as pre-images that lie in the former are what differentiate orbits within a rotational set from each other. In other words, the pre-images that are within the groups are such that if they were removed, the orbits would no longer be different. **Definition 19**.: Let _intra-group_ gaps be gaps that lie within a group, or in other words are gaps that are in between two points from two different orbits that aren't the first and last orbits spatially in their group. **Definition 20**.: Let _inter-group_ gaps be gaps that lie outside a group (or between groups). These are all of the gaps that are not intra-group gaps. Figure 4. These two rotational orbits for \(\sigma_{4}\) form a rotational set. The highlighted inter-group gap contains the principal pre-image of this rotational set. See if you can identify these orbits, as practice. Note that with these definitions, the principal pre-image must always lie within an inter-group gap. See if you can build intuition for this by considering Figure 4 through this new lens. ## 3. Algorithms and Resulting Formulas ### Counting Rotational Sets Goldberg [3] counted the rotational orbits with a given rotation number \(\frac{p}{q}\) for \(\sigma_{d}\) and indicated that rotational sets containing multiple orbits with that rotation number could also be counted, providing an example count for \(\sigma_{3}\), but no general formula. As a corollary to her characterization of rotational orbits in terms of their temporal and spatial placement with respect to the \(d-1\) fixed points of \(\sigma_{d}\), she showed that the maximal number of orbits in a rotational set for \(\sigma_{d}\) was \(d-1\). This also follows as a corollary to our main theorem, Theorem 21. McMullen ([5], Section 2) built upon Goldberg to provide a criterion for two orbits for \(\sigma_{d}\) with the same rotation number to be compatible in one rotational set. Tan [6] used an algorithm based upon the Goldberg/McMullen criterion to count the number of rotational sets containing \(k\) orbits for \(\sigma_{d}\) with a given rotation number \(\frac{p}{q}\). So, while the content of our main theorem is known, the proof here is new and more elementary.
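As an aid to the reader, the closed-form count stated in Theorem 21 below (Equations 3 and 4) can be evaluated directly; the following sketch is our own transcription of that formula rather than code from the paper, and the printed values are simply whatever the formula yields.

```python
# Sketch: direct evaluation of the count in Theorem 21 (Equations 3 and 4).
from math import comb

def count_rotational_sets(d: int, q: int, k: int) -> int:
    """Number of rotational sets with k orbits and rotation number p/q under sigma_d
    (p itself does not appear in Equation 3, only q does)."""
    l = min(q * (k - 1), d - 2)            # Equation 4
    total = 0
    for i in range(k - 1, l + 1):
        inner = sum(
            (-1) ** j * comb(k - 1, j) * comb(q * (k - 1 - j), i)
            for j in range(k)
        )
        total += comb(d + q - 2, d - 2 - i) * inner
    return total

print(count_rotational_sets(d=3, q=3, k=1))   # single rotational orbits for sigma_3, q = 3
print(count_rotational_sets(d=3, q=3, k=2))   # two-orbit rotational sets for sigma_3, q = 3
```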
**Theorem 21** (Identifying and Counting Rotational Sets).: _Consider the collection \(\mathcal{B}\) of all rotational sets for a given degree \(d\), rotation number \(\frac{p}{q}\) in lowest terms, and number \(k\) of distinct orbits per set. The cardinality of \(\mathcal{B}\) is given by_ \[|\mathcal{B}|=\sum_{i=k-1}^{l}\left[\binom{d+q-2}{d-2-i}\sum_{j=0}^{k-1}\left[ (-1)^{j}\binom{k-1}{j}\binom{q(k-1-j)}{i}\right]\right] \tag{3}\] _where_ \[l=\begin{cases}q(k-1)&d-2>q(k-1)\\ d-2&otherwise\end{cases} \tag{4}\] Proof.: For any given rotational set \(B\) within \(\mathcal{B}\), there are \(q\) points in each of the \(k\) orbits within \(B\). The count of each way to place pre-images between these neighboring points is the same as the count of rotational sets because the placement of pre-images dictates the digits for the itinerary of each point as can be seen in Remark 12. However, placing a pre-image between _0 and the smallest point in the set is different from placing a pre-image between the largest point and _0. The former would increase the digits in all the itineraries while the latter would not. Therefore, we need to count the number of ways to place pre-images within the gaps distinguished by \(0\) and the points within \(B\). Let the range of values from 0 to \(qk\) correspond with the gaps between neighboring points in the set of points that contain _0 and the points within each orbit in \(B\). 0 corresponds with the gap between \(0\) and the smallest point in \(B\), 1 with the gap between the smallest and second smallest, and so on. There are \(d\) pre-images to place. The first is \(0\), which is its own pre-image. The next is the principal pre-image, whose position is already determined (as can be seen in Remark 18). This leaves us with \(d-2\) pre-images to place. Now, we must concern ourselves with where these pre-images can be placed to form a valid rotational set with \(k\) orbits. **Lemma 22**.: _Label gaps with their congruence class modulo \(k\). Rotational sets in \(\mathcal{B}\) must have at least one pre-image in each non-zero congruence class. Therefore, the cardinality of \(\mathcal{B}\) is equivalent to that of \(P\) when \(P\) is defined to be the set of all sets composed of \(d-2\) non-negative integers less than or equal to \(qk\), such that each one contains at least one element from each non-zero congruence class modulo \(k\)._ Proof.: In order to ensure differentiation between orbits, each orbit must be differentiated from its intra-group neighbors (the groups are differentiated by the principal pre-image and _0). The first spatial orbit is not a neighbor with the last as they lie on opposite sides of any given group. So for each intra-group neighbor, an orbit must have at least one pre-image between one of its points and that neighbor's points. Therefore, this requirement can only be fulfilled by placing pre-images in intra-group gaps. Here is an example to provide clarity. The reader is invited to draw their own illustration for this example. For the first orbit (spatially) to be distinct from the second orbit, there must be a pre-image either in the gap between their smallest points respectively, second smallest, or any other pair of corresponding points. This particular restriction for the first and second orbits can be restated as the requirement for a pre-image to exist in a gap with label \(n\) such that \(n\) is in the congruence class of 1 modulo \(k\). 
This rule can be generalized for all intra-group gaps by separating them into congruence classes. For the \(n\)th and \((n+1)\)th orbits to be differentiated, there must be at least one pre-image in the set of gaps with labels in \(\{x\in[0\dots qk]\mid x\in[n]_{k}\}\). Therefore, all non-zero congruence classes require at least one pre-image for the rotational set to be valid. The set of rotational sets, with the restrictions articulated above, will have the same cardinality as \(P\) when defined as follows. \(P\) is the set of all sets composed of \(d-2\) non-negative integers less than or equal to \(qk\), such that each one contains at least one element from each non-zero congruence class modulo \(k\). We will now define sets that correspond with the choice of gaps in which to put pre-images. As of now, we are only placing one pre-image per gap even though more than one can be placed for a valid rotational set. This is done to make counting simpler later on. We will refer to the range of labels for gaps as \(\lambda\), and define it as: \[\lambda=\{x\in\mathbb{Z}_{+}\mid x\leq qk\} \tag{5}\] In other words, \(\lambda\) is the set of non-negative integers less than or equal to \(qk\). We will refer to the set of all labels for intra-group gaps as \(\psi\) and define \(\psi\) as \[\psi=\{x\in\lambda\mid x\ (\text{mod}\ k)\neq 0\} \tag{6}\] There are multiple possible values for how many pre-images may be in intra-group gaps for any given rotational set. We will call the number of pre-images in intra-group gaps \(i\). Define \(T_{i}\) as the set of all sets that contain \(i\) elements from \(\psi\). We also know that \(|T_{i}|=\binom{|\psi|}{i}=\binom{q(k-1)}{i}\) since \(|\psi|=q(k-1)\) because there are \(k-1\) non-zero congruence classes with \(q\) integers in each. \[|T_{i}|=\binom{q(k-1)}{i} \tag{7}\] **Lemma 23**.: _The range for the possible number of pre-images in intra-group gaps varies from \(k-1\) to \(l\) where_ \[l=\begin{cases}q(k-1)&d-2>q(k-1)\\ d-2&otherwise\end{cases} \tag{8}\] Proof.: Only \(k-1\) pre-images are required for differentiation. The minimal value of \(i\) is the minimal number of pre-images required for differentiation, \(k-1\). As for the maximum value of \(i\), it can be limited by either the number of empty gaps, \(q(k-1)\), or the number of pre-images to place, \(d-2\). Therefore, the maximal value of \(i\) is \(l\) where \[l=\begin{cases}q(k-1)&d-2>q(k-1)\\ d-2&otherwise\end{cases} \tag{9}\] The sets in \(T_{i}\) for \(i\in[k-1,k,\dots,l]\) that follow our requirement of differentiation are valid (yet may not be complete as these only correspond to \(i\) pre-images of the \(d-2\) that need to be placed). So we will define \(\gamma_{i}\) as the subset of \(T_{i}\) that contains all the sets where there is at least one element from each non-zero congruence class modulo \(k\). **Lemma 24**.: _The count of sets in \(\mathcal{B}\) such that each \(B\in\mathcal{B}\) has \(i\) pre-images in its intra-group gaps is given by_ \[\sum_{j=0}^{k-1}(-1)^{j}\binom{k-1}{j}\binom{q(k-1-j)}{i} \tag{10}\] Proof.: Each set within \(P\) that has \(i\) elements in non-zero congruence classes corresponds with an element in \(\gamma_{i}\). Similarly, each rotational set in \(\mathcal{B}\) that has \(i\) pre-images in intra-group gaps corresponds with an element in \(\gamma_{i}\). For counting purposes, we found it easier to find \(T_{i}\setminus\gamma_{i}\) than \(\gamma_{i}\), so we will define \(W_{i}=T_{i}\setminus\gamma_{i}\). 
We can identify and count the sets in \(W_{i}\) (as opposed to the valid sets in \(\gamma\)) by noticing the equivalence of the problem with finding the union of sets. The goal is to find every possible way to leave at least one congruence class unfilled. Categorize each placement as belonging to a series of sets \((C_{1},C_{2},\dots,C_{k-1})\) where \(C_{j}\) is defined as the set of elements of \(T_{i}\) where the \(j\)th congruence class is not represented. With this definition, a singular placement can belong to multiple sets \(C_{j}\), and we seek the union of each of these sets. In other words, \(\bigcup\limits_{j=1}^{k-1}C_{j}=W_{i}\) because it contains every set where at least \(1\) non-zero congruence class is not represented. This is a well-known problem, and the solution utilizes the inclusion-exclusion principle [9] with the count given by \[|W_{i}|=\Bigg{|}\bigcup\limits_{j=1}^{k-1}C_{j}\Bigg{|}=\sum_{j=1}^{k-1}(-1)^{j+1}\binom{k-1}{j}\binom{q(k-1-j)}{i} \tag{11}\] Since \(\gamma_{i}\) is defined as the set difference of \(T_{i}\) and \(W_{i}\), \[|\gamma_{i}|=|T_{i}|-|W_{i}|=\binom{q(k-1)}{i}-\sum_{j=1}^{k-1}(-1)^{j+1}\binom{k-1}{j}\binom{q(k-1-j)}{i} \tag{12}\] which can be rewritten as \[\sum_{j=0}^{k-1}(-1)^{j}\binom{k-1}{j}\binom{q(k-1-j)}{i} \tag{13}\] We have identified and counted all valid placements of intra-group pre-images. In order to finish constructing a rotational set, the remaining pre-images must be placed in between groups. For each of these placements in \(\gamma\), the number of pre-images left to place naturally depends on the number of pre-images already placed. For any given set \(g\in\gamma\) with cardinality \(i\), there are \(d-2-i\) elements from \(\lambda\) (all the labels for the gaps) that need to be added to construct an element in \(\mathcal{B}\). There are a few limitations on which gaps pre-images can be placed in, due to the fact that our prior count depends on certain gaps lacking pre-images and others having at least one. Therefore, we can place pre-images in intra-group gaps that already have pre-images or any given inter-group gap. In other words, the added \(d-2-i\) elements must either be duplicates of elements already in \(g\) which are also in \(\psi\) or elements in \(\lambda\setminus\psi\). There are \(\binom{d+q-2}{d-2-i}\) ways to choose these integers, therefore there are this many elements in \(P\) and corresponding orbits for any given set \(g\). As argued before, \(i\) (the cardinality of \(g\)) ranges from \(k-1\) to \(l\). Bringing all of this together, \(|\mathcal{B}|\) and \(|P|\) are equal to the sum of \(\binom{d+q-2}{d-2-i}|\gamma_{i}|\) over possible values of \(i\). \[|P|=|\mathcal{B}|=\sum_{i=k-1}^{l}\left[\binom{d+q-2}{d-2-i}\sum_{j=0}^{k-1} \left[(-1)^{j}\binom{k-1}{j}\binom{q(k-1-j)}{i}\right]\right] \tag{14}\] ### Identifying Rotational Sets Containing a Given Orbit This count is certainly insightful, but in order to gain insight into specific laminations that one may examine, it would be useful to be able to count and identify rotational sets that contain a given orbit. Here we state that this can be done, using a method quite similar to the method used in the proof of the previous counting theorem. **Theorem 25** (Identifying Maximal Rotational Sets that contain a given orbit).: _Consider the orbit \(\mathcal{O}\) with degree \(d\) and rotation number \(\frac{p}{q}\) in lowest terms.
An exhaustive list of all the rotational sets that contain \(\mathcal{O}\) can be found algorithmically, in a way inspired by the previous proof. The count of maximal rotational sets that contain \(\mathcal{O}\) can be expressed as a closed-form formula in terms of the degree \(d\), the rotation number \(\frac{p}{q}\), and the digits of \(\mathcal{O}\)._ Similar to the count of all rotational sets under certain parameters being given through the valid placements of pre-images of \(\_0\), the orbits that belong to rotational sets containing a given orbit \(\mathcal{O}\) can be identified through a similar process. We leave this investigation to the reader. However, an algorithm for identifying such rotational sets containing an orbit \(\mathcal{O}\) based on the proof strategy of the previous theorem can be accessed on GitHub [7]. ### Examples In order to demonstrate the usefulness of the proposed algorithm, consider the rotational orbit \([\_012,\_120,\_201]\) under \(\sigma_{3}\). Applying the algorithm, we determine that this orbit can be paired with either the orbit \([\_002,\_020,\_200]\) or the orbit \([\_112,\_121,\_211]\) (but not both since a rotational set for \(\sigma_{3}\) contains at most two orbits). The rotational polygons formed by these three sets are in the first row of Figure 5. Figure 5. Triangle and two hexagon expansions for \(\sigma_{3}\). Initial rotational data (left to right): _012 orbit, _012 & _002 orbits, _012 & _112 orbits. Pullbacks laminations (left to right): 3 steps, 2 steps, 2 steps. Polynomial equations (left to right): \(P(z)=(-0.607786+1.05435i)z-(0.316863+0.325784i)z^{2}+z^{3}/3\) \(P(z)=(0.89012-0.512216i)z^{2}+z^{3}/3\) \(P(z)=-0.156693-0.682581i+(-0.355147+0.182883i)z^{2}+z^{3}/3\) The second row of the figure shows the first few stages of the pullback lamination determined by the initial polygons and an appropriate choice of branches of the inverse of \(\sigma_{3}\)[8]. The corresponding Julia sets, found using Mathematica and FractalStream [1], are displayed in the third row of the figure. Each polygon in the corresponding lamination represents a junction point in the Julia set. The central polygon in each case represents a fixed point of the polynomial for the corresponding Julia set. Note that the lamination represents reasonably well the geometry of the corresponding Julia set if you imagine the polygons shrunk to points. In this process it is important to note that we found the rotational polygons and laminations _before_ we found the polynomials and their Julia sets. ### Future Questions If you find yourself interested in continuing this work, perhaps consider the following questions as starting points for areas of research: 1. The formula for the count provided in Theorem 21 is quite complicated. Tan ([6] Theorem 3.2) provides an equivalent count, though without a basis in first principles. How can our count be simplified from first principles? 2. Verify that the formula found by Tan and the closed-form formula in our Theorem 3.1 give the same count of rotational sets. 3. Consider the closed form formula given by Theorem 21. The formula implies that for a fixed \(d\) and \(k\), you can express the formula as a polynomial equation in terms of \(q\). What is the degree of the polynomial for a given \(d\) and \(k\)? Is it possible to generate the coefficients for the polynomial for a given \(d\) and \(k\) in closed form? What can this teach us about the count of rotational sets for a fixed degree and rotational set size? 4. 
Consider the lattice [11] of rotational sets, partially ordered by subset inclusion, for a given degree \(d\) and rotation number \(\frac{p}{q}\), where a join [10] between sets \(A\) and \(B\) represents the union of \(A\) and \(B\). Given a degree \(d\), what can we learn about the underlying structure of these lattices, and what can it teach us about rotational sets?
2309.12369
Variance Reduction via Simultaneous Importance Sampling and Control Variates Techniques Using Vegas
Monte Carlo (MC) integration is an important calculational technique in the physical sciences. Practical considerations require that the calculations are performed as accurately as possible for a given set of computational resources. To improve the accuracy of MC integration, a number of useful variance reduction algorithms have been developed, including importance sampling and control variates. In this work, we demonstrate how these two methods can be applied simultaneously, thus combining their benefits. We provide a python wrapper, named CoVVVR, which implements our approach in the Vegas program. The improvements are quantified with several benchmark examples from the literature.
Prasanth Shyamsundar, Jacob L. Scott, Stephen Mrenna, Konstantin T. Matchev, Kyoungchul Kong
2023-09-19T22:21:18Z
http://arxiv.org/abs/2309.12369v2
# Variance Reduction via Simultaneous Importance Sampling and Control Variates Techniques Using Vegas ###### Abstract **Monte Carlo (MC) integration is an important calculational technique in the physical sciences. Practical considerations require that the calculations are performed as accurately as possible for a given set of computational resources. To improve the accuracy of MC integration, a number of useful variance reduction algorithms have been developed, including importance sampling and control variates. In this work, we demonstrate how these two methods can be applied simultaneously, thus combining their benefits. We provide a python wrapper, named CoVVVR, which implements our approach in the Vegas program. The improvements are quantified with several benchmark examples from the literature.** ###### Contents * 1 Introduction * 2 Control Variates and Vegas * 2.1 Naive Monte Carlo Integration * 2.2 Importance Sampling * 2.3 Control Variates * 2.4 Combining Importance Sampling and Control Variates * 2.5 Repurposing Vegas Intermediate Outputs as Control Variates * 3 Results * 3.1 Test Cases * 3.1.1 \(d\)-dimensional Gaussians * 3.1.2 \(d\)-dimensional Camel functions * 3.1.3 Entangled circles * 3.1.4 Annulus with cuts * 4 ###### Abstract In this paper, we study the evolution of the random variables \(\mathbf{x}\) in a class of random variables \(\mathbf{x}\), with the following properties: * \(\mathbf{x}\) is a random variable with \(\mathbf{x}\) and \(\mathbf{x}\), with \(\mathbf{x}\) and \(\mathbf{x}\). * \(\mathbf{x}\) is a random variable with \(\mathbf{x}\) and \(\mathbf{x}\). * \(\mathbf{x}\) is a random variable with \(\mathbf{x}\) and \(\mathbf{x}\). * \(\mathbf{x}\) is a random variable with \(\mathbf{x}\) and \(\mathbf{x}\). * \(\mathbf{x}\) is a random variable with \(\mathbf{x}\) and \(\mathbf{x}\). * \(\mathbf{x}\) is a random variable with \(\mathbf{x}\) and \(\mathbf{x}\). * \(\mathbf{x}\) is a random variable with \(\mathbf{x}\) and \(\mathbf{x}\). ## 1 Introduction In many fields of science, including particle physics, one has to compute the value of an integral \[F\equiv\int_{\Omega}d\mathbf{x}\,f(\mathbf{x})\,, \tag{1}\] over some domain \(\Omega\) with volume \[V(\Omega)\equiv\int_{\Omega}d\mathbf{x}\,, \tag{2}\] where \(\mathbf{x}\in\mathbb{R}^{d}\) is a \(d\)-dimensional vector of independent variables. The function \(f(\mathbf{x})\) can be evaluated for a given \(\mathbf{x}\), but can be arbitrarily complicated. In high energy physics, integrations like (1) are ubiquitous, arising when computing total cross-sections (or differential cross-sections with respect to low-dimensional event variables), particle lifetimes, convolutions with transfer functions describing detector effects, etc. Although widespread, this problem is fundamentally challenging -- with the exception of a few trivial cases (typically found in the textbooks), the integral cannot be performed analytically, and one has to resort to numerical methods for its evaluation. Monte Carlo (MC) methods are particularly suited for high-dimensional integrals, since their accuracy scales as \(\sqrt{\frac{\mathrm{Var}[f]}{N_{\mathrm{trials}}}}\), where \(\mathrm{Var}[f]\) is the variance of \(f(\mathbf{x})\) over the domain \(\Omega\) and \(N_{\mathrm{trials}}\) is the number of trials [1]. One obvious way to improve the accuracy is to increase \(N_{\mathrm{trials}}\), but this approach soon hits a wall due to resource limitations. 
Therefore, much attention has been placed on designing variance-reducing techniques. Some of the classic variance reduction methods include importance sampling (IS), stratified sampling, antithetic variables, control variates (CV), etc. [2]. More recent techniques use quasirandom or low-discrepancy pointsets in a class of methods known as Quasi Monte Carlo [3, 4], apply multigrid ideas in Multilevel Monte Carlo estimation [5], or leverage machine learning (ML) [6]. A parallel research thrust has been the synthesis of ideas, whereby one tries to apply two such techniques, for example, by using several control variates [7], combining antithetic variates and control variates [8, 9], or combining control variates and adaptive importance sampling [10, 11]. In MC methods, random variates are used to sample the function of interest over a number of trials. In importance sampling, the probability distribution of the random variates is adjusted based on the values of the trials to more closely mirror the behavior of the function. In general, such a change of variables must be performed on a case-by-case basis and is difficult, if not impossible, to accomplish for high-dimensional integrands. The Vegas code [12] and algorithm [13], instead, introduces a parametrization for this change of variables. The domain of integration is divided into subregions of equal importance so that the variance of the MC estimate is reduced compared to other sampling choices. A related method, additionally implemented in Vegas+ [14], is that of stratified sampling, whereby one uses stratification to homogenize the resulting weights within each subregion. The control variate method, instead, aims to reduce the variance by modifying the integrand. This is accomplished by adding and subtracting a function (the control variate) that is highly-correlated with the integrand and has a known value for its integral. The integral of the control variate converges to the known value, while the variance of the control variate partially cancels the variance in the original integrand. The challenge is to find a control variate that is indeed highly correlated (or anti-correlated) with the integrand \(f\). Usually, this can be achieved for only special, low-dimensional cases. Control variates have been used in various domain-inspired applications in [15, 16, 17, 18, 19, 20, 21]. The thrust of this paper is to develop a general strategy to combine the well-established importance sampling technique with control variates. We note that the Vegas method already computes a function that is highly correlated with the integrand in a general way. We illustrate how this function (or its evolution as it converges to an optimal value) can serve as a control variate to improve the accuracy for a given computational budget, or equivalently, to reduce the computation time for a given target accuracy. The paper is accompanied with a python code, CoVVVR, short for Control variates and Vegas variance reduction. The source code is publicly available at [https://github.com/crumpstrr33/covvvr](https://github.com/crumpstrr33/covvvr), together with an introductory tutorial for its usage. This paper is organized as follows. Section 2 lays the theoretical groundwork of our approach. Sections 2.1, 2.2 and 2.3 briefly review the basic ideas of the Monte Carlo integration, the importance sampling and the control variates methods, respectively. 
Section 2.4 introduces the joint application of importance sampling and control variates, while Section 2.5 explains how one can leverage the successive Vegas approximations as control variates. Sec. 3 presents the results from our numerical experiments quantifying the achieved precision improvement on some known benchmark examples. Section 4 contains a description and usage examples of the CoVVVR code. Section 5 is reserved for a summary and conclusions. ## 2 Control Variates and Vegas In this section, we describe the main idea of our method using a one-dimensional integration example \[I=\int_{a}^{b}\mathrm{d}x\,f(x) \tag{3}\] of a real function \(f(x)\) from \(a\) to \(b\), where for definiteness we take \(a<b\). In special cases, \(a\) can be \(-\infty\) and/or \(b\) can be \(+\infty\). ### Naive Monte Carlo Integration In the naive Monte Carlo approach, \(x\) is sampled uniformly over the domain \(\Omega\), i.e., \(x\sim U(\Omega)\), obtaining \(N\) independent and uniformly distributed samples \(x_{1},x_{2},\ldots,x_{N}\). An estimate \(\hat{I}_{N}\) of the integral (3) is obtained in terms of the expectation value \(E_{x\sim U(\Omega)}[f]\) of the function \(f(x)\) over the domain \(\Omega\): \[\widehat{I}_{N}=V(\Omega)\ \mathrm{E}_{x\sim U(\Omega)}\big{[}f\big{]}=V(\Omega) \times\frac{1}{N}\sum_{i=1}^{N}f(x_{i})\,, \tag{4}\] where \(x\) is sampled uniformly in the domain \(\Omega\), i.e., \(x_{i}\sim U(\Omega)\). In the case of a one-dimensional integral as in (3), \(V(\Omega)=b-a\). The uncertainty on \(\widehat{I}_{N}\) can be estimated in terms of the sample variance \(\mathrm{Var}\Big{[}\widehat{I}_{N}\Big{]}\) \[\delta\widehat{I}_{N}\approx\sqrt{\mathrm{Var}\Big{[}\widehat{I}_{N}\Big{]}}= V(\Omega)\sqrt{\frac{\mathrm{Var}[f]}{N}}\,, \tag{5}\] which decreases as \(N^{-1/2}\). However, increasing \(N\) arbitrarily is infeasible, as it runs into resource limitations. This motivates methods which attempt to reduce the variance \(\mathrm{Var}[f]\) of the integrand function \(f(x)\). Two such methods are discussed in the next two subsections. ### Importance Sampling In importance sampling, we choose a sampling function for the random variates that resembles \(f(x)\) as closely as possible, and whose integral over the range \((a,b)\) is known. By rescaling with the value of this integral, the new sampling function can be turned into a unit normalized probability distribution function (PDF), which we shall denote with \(p(x)\): \[\int_{a}^{b}\mathrm{d}x\,p(x)=1. \tag{6}\] The integral of interest (3) can be rewritten as \[I=\int_{a}^{b}\mathrm{d}x\,f(x)=\int_{a}^{b}\mathrm{d}x\ p(x)\,\frac{f(x)}{p(x )}=\mathrm{E}_{x\sim p(x)}\bigg{[}\frac{f}{p}\bigg{]}, \tag{7}\] which can in turn be estimated with Monte Carlo in \(N\) trials as \[\widehat{I}_{N}=\frac{1}{N}\sum_{i=1}^{N}\frac{f(x_{i})}{p(x_{i})}\,,\qquad \text{where }x_{i}\sim p(x). \tag{8}\] In analogy to (5) the error on the estimate (8) is given by \[\delta\widehat{I}_{N}\approx\sqrt{\mathrm{Var}\Big{[}\hat{I}_{N}\Big{]}}= \frac{1}{\sqrt{N}}\bigg{[}-I^{2}+\int_{a}^{b}\mathrm{d}x\ p(x)\,\frac{f^{2}(x )}{p^{2}(x)}\bigg{]}^{1/2}. \tag{9}\] This is minimized when \[p(x)\propto\big{|}f(x)\big{|}. \tag{10}\] In other words, whenever the two functions \(f(x)\) and \(p(x)\) have similar shapes, the variance is reduced and the precision of the estimate is improved. In fact, if we could choose \(p(x)\) to be exactly proportional to \(f(x)\) everywhere, then \(\widehat{I}_{N}\) is exactly equal to \(I\) for any \(N\). 
Thus, importance sampling is most beneficial when the function \(p(x)\) mimics \(f(x)\) as closely as possible. ### Control Variates In contrast to the importance sampling method, which involves a rescaling of the integrand, the control variates method _adds_ to \(f(x)\) a term built from an auxiliary function \(g(x)\) whose integral over the domain is known. The modified integrand can be taken as \[f_{c}(x)\equiv f(x)+c\bigg{(}g(x)-\frac{G}{b-a}\bigg{)}\,, \tag{11}\] with \[G\equiv\int_{a}^{b}\mathrm{d}x\,g(x) \tag{12}\] and where \(c\) is a parameter which at this point we are free to choose. It is easy to see that the modification (11) does not change the value of the integral, i.e. \[\int_{a}^{b}\mathrm{d}x\,f(x)=\int_{a}^{b}\mathrm{d}x\,f_{c}(x)\,,\qquad\forall c\in\mathbb{R}\,. \tag{13}\] The naive Monte Carlo estimate of the integral is then built from the modified integrand, \[\widehat{I}_{N}=V(\Omega)\,\frac{1}{N}\sum_{i=1}^{N}f_{c}(x_{i})\,,\qquad \text{where }x_{i}\sim U(\Omega). \tag{14}\] We can now leverage the freedom to choose the value of the parameter \(c\) in order to minimize the variance. Requiring \[\frac{\partial\text{Var}[f_{c}]}{\partial c}=0 \tag{15}\] gives the optimal value of \(c\) as \[c^{*}=-\frac{\text{Cov}(f,g)}{\text{Var}[g]}. \tag{16}\] The resulting variance is \[\text{Var}[f_{c^{*}}]=\text{Var}[f]-\frac{\text{Cov}(f,g)^{2}}{\text{Var}[g]}=\bigg{[}1-\rho^{2}(f,g)\bigg{]}\text{Var}[f], \tag{17}\] where \[\rho(f,g)\equiv\frac{\text{Cov}(f,g)}{\sqrt{\text{Var}[f]\text{Var}[g]}} \tag{18}\] is the familiar Pearson correlation coefficient. Note that if \(|\rho(f,g)|>0\), the variance is necessarily reduced. Furthermore, the higher the correlation between \(g(x)\) and \(f(x)\), the larger the benefit. Therefore, just like in the method of importance sampling, we desire a function \(g(x)\) that i) is highly correlated with \(f(x)\) and ii) has a known expectation value \(\text{E}[g]=G/(b-a)\). The method can be easily generalized to the case of multiple control variates. Appendix A contains the derivation for finding the optimal values \(c_{i}^{*}\) of the respective coefficients \(c_{i}\) in that case. ### Combining Importance Sampling and Control Variates The two methods discussed in the previous two subsections 2.2 and 2.3 (importance sampling and control variates) can be combined together as follows. Given a known PDF \(p(x)\) and a control variate function \(g(x)\), we modify the integrand as \[I=\int_{a}^{b}\mathrm{d}x\ p(x)\biggl{[}\frac{f(x)}{p(x)}+c\biggl{(}\frac{g(x)} {p(x)}-E_{p}\biggl{[}\frac{g}{p}\biggr{]}\biggr{)}\biggr{]}. \tag{19}\] The corresponding Monte Carlo estimate is \[\widehat{I}_{N}=\frac{1}{N}\sum_{i=1}^{N}\biggl{[}\frac{f(x_{i})}{p(x_{i})}+c \biggl{(}\frac{g(x_{i})}{p(x_{i})}-E_{p}\biggl{[}\frac{g}{p}\biggr{]}\biggr{)} \biggr{]}\,,\qquad\text{where }x_{i}\sim p(x). \tag{20}\] We would still need a \(p(x)\) that is approximately proportional to \(f(x)\) and a \(g(x)\) whose integral is known, so that \(f/p\) is correlated with \(g/p\). In this case, the optimal value \(c^{*}\) is given by \[c^{*}=-\frac{\text{Cov}(f/p,g/p)}{\text{Var}[g/p]}. \tag{21}\] ### Repurposing Vegas Intermediate Outputs as Control Variates Vegas is an adaptive and iterative Monte Carlo algorithm where, at each iteration \(i\), a unit-normalized probability distribution \(p_{i}(x)\) is updated to serve as a probability distribution for importance sampling as described in Section 2.2 [13]. Since Vegas stores and reports these sampling distributions for each iteration, they can be usefully repurposed as control variates. 
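As a concrete illustration of Eqs. (19)-(21), the short sketch below (not part of CoVVVR) combines importance sampling with a single control variate for a one-dimensional toy integrand. The Beta(2,2) sampling density and the uniform "earlier-iteration" density are stand-ins chosen here only for simplicity; in the actual method both roles are played by Vegas grids.
```
import numpy as np

rng = np.random.default_rng(0)

# Toy integrand on [0, 1].
f = lambda x: np.exp(-((x - 0.5) / 0.2) ** 2)

# Stand-in importance-sampling density p(x): Beta(2, 2), pdf = 6 x (1 - x).
p_pdf = lambda x: 6.0 * x * (1.0 - x)
# Stand-in control-variate density g(x): uniform on [0, 1], so E_p[g/p] = 1 exactly.
g_pdf = lambda x: np.ones_like(x)

N = 100_000
x = rng.beta(2.0, 2.0, size=N)            # x_i ~ p(x)

fp = f(x) / p_pdf(x)                      # f/p, plain importance-sampling weights
gp = g_pdf(x) / p_pdf(x)                  # g/p

cov = np.cov(fp, gp)
c_star = -cov[0, 1] / cov[1, 1]           # Eq. (21), estimated from the same sample
h = fp + c_star * (gp - 1.0)              # summand of Eq. (20)

print("IS only :", fp.mean(), "+/-", fp.std(ddof=1) / np.sqrt(N))
print("IS + CV :", h.mean(),  "+/-", h.std(ddof=1)  / np.sqrt(N))
```
Estimating \(c^{*}\) from the same sample introduces a small bias that vanishes with increasing \(N\) and is negligible in practice.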
We can then apply the result in (20), by choosing the IS function as \(p(x)=p_{n}(x)\) for the final iteration \(n\) and the control variate function \(g(x)=p_{i}(x)\) from some previous iteration \(i<n\). Note that since \(p_{i}(x)\) is unit-normalized the last term in (20) becomes \[E_{p}\biggl{[}\frac{g}{p}\biggr{]}=E_{p_{n}}\biggl{[}\frac{p_{i}}{p_{n}}\biggr{]} =\int_{a}^{b}\mathrm{d}x\ p_{n}(x)\,\frac{p_{i}(x)}{p_{n}(x)}=\int_{a}^{b} \mathrm{d}x\ p_{i}(x)=1\,. \tag{22}\] In this way, Vegas can be used to simultaneously provide both the sampling distribution \(p(x)\) and the control variate \(g(x)\). In principle, different prescriptions for selecting the previous iteration \(i\) to be used in (22) can be designed. Choosing the optimal among those prescriptions can be done on a case by case basis, as discussed below in Section 3. ## 3 Results ### Test Cases For comparison with other studies, we compute values for the benchmark functions presented in Ref. [6]. The domain of integration for each variable is over the range of 0 to 1, and the corresponding values are given in the second column of Table III in Ref. [6]. We confirmed those values as shown in Table 1 and, in what follows, focus on the accuracy of the MC estimate. For clarity, the definition of the functions used as benchmarks is reproduced below. #### 3.1.1 \(d\)-dimensional Gaussians The \(d\)-dimensional Gaussians are defined as \[f_{1}(\mathbf{x})=\frac{1}{(\sigma\sqrt{\pi})^{d}}\exp\Biggl{(}-\frac{1}{\sigma^ {2}}\sum_{i=1}^{d}(x_{i}-\mu)^{2}\Biggr{)}, \tag{23}\] with mean \(\mu=0.5\) and standard deviation \(\sigma=0.2\). They serve as a good starting point and display the effectiveness of control variates for separable functions. We consider \(d=2\), \(d=4\), \(d=8\) and \(d=16\). #### 3.1.2 \(d\)-dimensional Camel functions The \(d\)-dimensional Camel functions are defined by \[f_{2}(\mathbf{x})=\frac{1}{2(\sigma\sqrt{\pi})^{d}}\Biggl{[}\exp\Biggl{(}- \frac{1}{\sigma^{2}}\sum_{i=1}^{d}(x_{i}-\mu_{1})^{2}\Biggr{)}+\exp\Biggl{(}- \frac{1}{\sigma^{2}}\sum_{i=1}^{d}(x_{i}-\mu_{2})^{2}\Biggr{)}\Biggr{]} \tag{24}\] in terms of three parameters, \(\mu_{1}=1/3\), \(\mu_{2}=2/3\), and \(\sigma=0.2\). Unlike the Gaussians case, the integration variables are not separable. We consider \(d=2\), \(d=4\), \(d=8\) and \(d=16\). #### 3.1.3 Entangled circles This function is given by \[f_{3}(x_{1},x_{2}) =x_{2}^{a}\exp\bigg{[}-w\Big{|}(x_{2}-p_{2})^{2}+(x_{1}-p_{1})^{2 }-r^{2}\Big{|}\bigg{]}\] \[+(1-x_{2})^{a}\exp\bigg{[}-w\Big{|}\big{(}x_{2}-(1-p_{2})\big{)}^ {2}+\big{(}x_{1}-(1-p_{1})\big{)}^{2}-r^{2}\Big{|}\bigg{]} \tag{25}\] It is largely concentrated on two overlapping circles of radius \(r\) with centers at \((p_{1},p_{2})\) and \((1-p_{1},1-p_{2})\), respectively. For the numerical experiments, we use \(p_{1}=0.4\), \(p_{2}=0.6\), \(r=0.25\), \(w=1/0.004\), and \(a=3\). #### 3.1.4 Annulus with cuts The fourth function is an annulus defined by cuts at \(r_{\rm min}\) and \(r_{\rm max}\): \[f_{4}(x_{1},x_{2})=\begin{cases}1&\text{if }r_{\rm min}<\sqrt{x_{1}^{2}+x_{2}^{2 }}<r_{\rm max},\\ 0&\text{else}.\end{cases} \tag{26}\] The cut parameters are chosen to be \(r_{\rm min}=0.2\) and \(r_{\rm max}=0.45\). #### 3.1.5 One-loop scalar box integral The fifth function represents a one-loop scalar box integral encountered in the calculation of \(gg\to gh\), an important contribution to Higgs production from gluon fusion. 
The integral is \[I_{5}= S_{\rm Box}(s_{12},s_{23},s_{1},s_{2},s_{3},s_{4},m_{t}^{2},m_{t}^{2},m_ {t}^{2},m_{t}^{2},m_{t}^{2}) \tag{27}\] \[+ S_{\rm Box}(s_{23},s_{12},s_{2},s_{3},s_{4},s_{1},m_{t}^{2},m_{t} ^{2},m_{t}^{2},m_{t}^{2},m_{t}^{2})\] \[+ S_{\rm Box}(s_{12},s_{23},s_{3},s_{4},s_{1},s_{2},m_{t}^{2},m_{t} ^{2},m_{t}^{2},m_{t}^{2})\] \[+ S_{\rm Box}(s_{23},s_{12},s_{4},s_{1},s_{2},s_{3},m_{t}^{2},m_{t} ^{2},m_{t}^{2},m_{t}^{2}),\] where \[S_{\text{Box}}(s_{12},s_{23},s_{1},s_{2},s_{3},s_{4},m_{1}^{2},m_{2}^{2},m_{3}^{4},m_{4}^{2})=\int_{0}^{1}\frac{\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}} {\widetilde{\mathcal{F}}_{\text{Box}}^{2}} \tag{28}\] and \[\begin{split}\widetilde{\mathcal{F}}_{\text{Box}}=&-s _{12}x_{2}-s_{23}x_{1}x_{3}-s_{1}x_{1}-s_{2}x_{1}x_{2}-s_{3}x_{2}x_{3}-s_{4}x_ {3}\\ &+(1+x_{1}+x_{2}+x_{3})(x_{1}m_{1}^{2}+x_{2}m_{2}^{2}+x_{3}m_{3}^ {2}+m_{4}^{2}).\end{split} \tag{29}\] The integrand function \(f_{5}(x_{1},x_{2},x_{3})\) in this case is a sum of four terms of the type \(1/\widetilde{\mathcal{F}}_{\text{Box}}^{2}\). We test with the same parameters as in [6]: \(s_{12}=130^{2}\), \(s_{23}=-s_{12}\), \(s_{1}=s_{2}=s_{3}=0\), \(s_{4}=125^{2}\), and \(m_{i}=173.9\) for \(i=1-4\). #### 3.1.6 Polynomial functions The final set of integrand functions are quadratic polynomials of \(d\) variables \[f_{6}(\mathbf{x})=\sum_{i=1}^{d}x_{i}(1-x_{i}). \tag{30}\] We consider \(d=18\), \(d=54\) and \(d=96\). ### Numerical Comparisons Results from integrating the six types of test functions from Section 3.1 are shown in Table 1. The first two columns list the name of the function and the dimension of the integration. The third column shows the expected true answer, while the next two columns give the MC estimates from Vegas and from CoVVVR, respectively. For the latter, we use the one control variate which gives the maximum variance reduction, as explained \begin{table} \begin{tabular}{||l|r|r||r|r||r|r||} \multicolumn{1}{||c||}{} & \multicolumn{2}{c||}{**Mean**} & \multicolumn{2}{c||}{**Normalized RMS Error**} \\ \hline Function & Dim & True Value & Vegas & CVInt & Vegas & CVInt \\ \hline \hline & 2 & 0.999186 & 0.999190 & 0.999188 & 1.0049e-04 & 9.1006e-05 \\ Gaussian & 4 & 0.998373 & 0.998379 & 0.998379 & 0.000145 & 0.000132 \\ & 8 & 0.996749 & 0.996750 & 0.996866 & 0.000202 & 0.000231 \\ & 16 & 0.993509 & 0.993507 & 0.993507 & 0.000293 & 0.000265 \\ \hline & 2 & 0.981660 & 0.981618 & 0.981617 & 0.001286 & 0.001283 \\ Camel & 4 & 0.963657 & 0.963559 & 0.963560 & 0.003180 & 0.003173 \\ & 8 & 0.928635 & 0.929091 & 0.929223 & 0.010283 & 0.010295 \\ & 16 & 0.862363 & 0.784645 & 0.784860 & 0.546183 & 0.544413 \\ \hline Entangled Circles & 2 & 0.013680 & 0.013687 & 0.013687 & 0.004349 & 0.004348 \\ \hline Annulus with Cuts & 2 & 0.127627 & 0.127644 & 0.127642 & 0.003773 & 0.003789 \\ \hline Scalar Top Loop & 3 & 1.9374e-10 & 1.9370e-10 & 1.9370e-10 & 0.000328 & 0.000322 \\ \hline & 18 & 3.000000 & 2.999998 & 2.99999 & 2.6498e-05 & 2.1906e-05 \\ Polynomial & 54 & 9.000000 & 8.999992 & 8.99995 & 4.1208e-05 & 2.9833e-05 \\ & 96 & 16.000000 & 15.999929 & 15.999954 & 7.2611e-05 & 5.1897e-05 \\ \hline \end{tabular} \end{table} Table 1: Results for the test case integrals described in Section 3.1 with \(n=50\) iterations, \(N_{\text{events}}=5,000\) events per iteration, and averaged over \(N_{\text{runs}}=1,000\) runs. The results displayed in the CVInt column were obtained with the one control variate that gives maximum variance reduction. The normalized RMS error is given by (31). 
below, and choose the optimal value of \(c^{*}\) according to (21). In the last two columns we show the normalized RMS error defined as \[\frac{1}{I}\ \sqrt{\frac{1}{N_{\text{runs}}}\sum_{i=1}^{N_{\text{runs}}}(\widehat{I }_{i}-I)^{2}}. \tag{31}\] We see that, as expected, the accuracy using a CV is comparable or improved. As implied by eq. (19), the variance reduction results from the presence of a correlation between the function ratios \(f(x)/p(x)\) and \(g(x)/p(x)\). This correlation is illustrated in Figure 1 for the case of the 16-dimensional Gaussian (left) and the 96-dimensional polynomial (right) function. The iteration used for the CV was chosen automatically. The correlation is readily visible by eye, and the correlation coefficient is \(\rho\approx 0.42\) (left) and \(\rho\approx 0.75\) (right). The effect of adding more than one CV is illustrated in Table 2 and Figure 2. We show results with one (1 CV), two (2 CV) or all 49 intermediate approximations (All CVs) from Vegas as control variates. The results are quoted in terms of the variance reduction in percentage (VRP) and the corresponding computational cost (in seconds, as well as relative to the Vegas benchmark time). In each case, we select the CV or CVs that reduce the variance the most. Table 2 confirms that, by construction, the use of CVs always improves the accuracy of the estimate. The size of the VRP effect depends on the type of function at hand, and can vary from \(\sim 1\%\) to as much as \(50\%\) for one CV and \(60\%\) for two CVs. The associated computational cost is an increase of about \(1.5-2.5\) times for one CV and \(2-4\) times for 2 CVs. Note that an increase in accuracy could also be achieved by increasing the number of events used in a standard Vegas calculation. We discuss the benefits of the CV method later. The dependence of the variance reduction on the choice of CVs is illustrated in Figure 2. The heat map in each panel depicts the variance reduction due to 1 Figure 1: Correlation between the function \(f/p\) and the control variate \(g/p\). This is shown for the 16d Gaussian (left) and the 96d polynomial (right) run for \(n=100\) iterations with, at most, \(N_{\text{events}}=10000\) evaluations per iteration. The iteration used for the control variate (CV) was chosen automatically using the parameter cv_iters=“auto1” as described in Section 4. We see that there is a correlation between the functions shown both qualitatively by the plot and quantitatively by a correlation coefficient of \(\rho\approx 0.42\) (left) and \(\rho\approx 0.75\) (right). Additionally, the expectation is approximately 1, in agreement with (22). and 2 CVs for all combinations. We show results averaged over 10 runs for a 16d Gaussian (left panel) and the scalar top loop (right panel). In each run, we perform 50 iterations with at most 25,000 evaluations per iteration. Along each axis, each panel contains a plot of the normalized variance (red line) and the correlation coefficient between the target function \(f(x)\) and the respective CV (purple line). Figure 2 shows that the optimal iteration for choosing a CV can vary significantly -- in the case of a 16d Gaussian, it is around 30, while for the scalar top loop integral, it is at the beginning. When we have the freedom to choose two CVs, some interesting patterns appear as shown in the heat maps, and the optimal choice for the Gaussian are the iterations around 25 and 40, respectively. 
Since the optimal choice of the iteration is _a priori_ unknown, we show in Table 3 the corresponding results when the iteration is decided and fixed at the very beginning. In this case, we pick the CV from the iteration which is \(1/4\) of the way from the beginning, i.e., \(i=\lfloor n/4\rfloor\). The number of events \(N_{\text{events}}\) and total number of iterations \(n\) are chosen based on reaching the precision required in Ref. [6], namely, relative uncertainty of \(10^{-4}\) for the first 11 cases and \(10^{-5}\) for the last three. We see that even when the CV is fixed rather arbitrarily, the variance reduction is still significant, and can be \(\sim 50\%\), as in the case of the polynomial functions. ## 4 CoVVVR: Control Variates & Vegas Variance Reduction The paper is accompanied with a python code, CoVVVR, short for Control variates & Vegas Variance Reduction, that can be used to reproduce our results. The source code is publicly available at [https://github.com/crumpstrr33/covvvr](https://github.com/crumpstrr33/covvvr). In this section, we provide an introduction tutorial. \begin{table} \begin{tabular}{|c|c||c|c|c||c|c|c||c|c|} \hline & \multicolumn{2}{c||}{VEGAS} & \multicolumn{2}{c||}{10V} & \multicolumn{2}{c||}{20V} & \multicolumn{2}{c||}{All Cov.} \\ \hline Function & Dim & Time (s) & VRP & Time (s) & VRP & Time (s) & VRP & Time (s) \\ \hline \hline \multirow{4}{*}{Gaussian} & 2 & 0.07 & 17.02\% & 0.09 & (1.4) & 31.40\% & 0.14 & (2.1) & 47.15\% & 1.76 (26.2) \\ & 4 & 0.10 & 15.86\% & 0.14 & (1.5) & 26.55\% & 0.22 & (2.3) & 39.78\% & 2.91 (30.1) \\ & 8 & 0.15 & 17.28\% & 0.25 & (1.7) & 23.11\% & 0.38 & (2.6) & 33.95\% & 5.74 (39.4) \\ & 16 & 0.23 & 13.95\% & 0.49 & (2.1) & 17.22\% & 0.79 & (3.4) & 23.87\% & 13.34 (56.9) \\ \hline \multirow{4}{*}{Camel} & 2 & 0.07 & 0.38\% & 0.09 & (1.4) & 0.81\% & 0.14 & (2.0) & 1.24\% & 1.75 (25.7) \\ & 4 & 0.10 & 0.12\% & 0.14 & (1.5) & 0.33\% & 0.22 & (2.2) & 0.66\% & 2.94 (30.0) \\ & 8 & 0.15 & 0.20\% & 0.25 & (1.6) & 0.25\% & 0.39 & (2.5) & 0.37\% & 5.86 (37.9) \\ & 16 & 0.26 & 0.74\% & 0.52 & (2.0) & 3.17\% & 0.81 & (3.2) & 7.72\% & 13.70 (53.4) \\ \hline Entangled Circles & 2 & 0.08 & 0.31\% & 0.10 & (1.2) & 0.36\% & 0.15 & (1.8) & 0.67\% & 1.81 (22.4) \\ \hline Annulus with Cuts & 2 & 0.05 & 1.25\% & 0.08 & (1.6) & 2.94\% & 0.12 & (2.3) & 16.93\% & 1.77 (33.1) \\ \hline Scalar Top Loop & 3 & 0.12 & 7.31\% & 0.14 & (1.2) & 49.33\% & 0.21 & (1.8) & 57.91\% & 2.30 (19.9) \\ \hline \multirow{4}{*}{Polynomial} & 18 & 0.26 & 29.50\% & 0.56 & (2.2) & 29.74\% & 0.89 & (3.5) & 51.36\% & 15.49 (59.9) \\ & 54 & 0.70 & 42.65\% & 1.65 & (2.4) & 58.97\% & 2.63 & (3.8) & 71.77\% & 47.94 (68.8) \\ \cline{1-1} & 96 & 1.33 & 49.63\% & 3.04 & (2.3) & 63.77\% & 4.79 & (3.6) & 78.18\% & 86.55 (65.3) \\ \hline \end{tabular} \end{table} Table 2: Results for the variance reduction in percent (VRP) after using one, two, or all 49 intermediate approximations from Vegas as control variates. In each case, we ran \(n=50\) iterations and at most \(N_{\text{events}}=5000\) events per iteration averaged over \(N_{\text{runs}}=100\) independent runs. The time column shows the corresponding computational cost (in seconds and relative to the Vegas benchmark time). **Installation**: CoVVVR can be installed via pip: $ python -m pip install covvvr **Usage**: The workflow involves creating a class that inherits the Function subclass and passing that to the CVIntegrator. 
The Function class contains the function to be integrated but also other information such as its name, the true value of the integration (if available) and parameters of the function. This class can be a built-in function, such as those used in this paper, or a custom-made function. The CVIntegrator class does the integration and stores the results like mean and variance.

Figure 2: The variance reduction due to 1 control variate (along the diagonal) and 2 control variates for all combinations. We show results for \(n=50\) iterations with at most \(N_{\text{events}}\)=5,000 evaluations per iteration averaged over \(N_{\text{runs}}\)=100 runs for a 16d Gaussian (left panel) and the scalar top loop (right panel).

**Using a Built-In Function**:
```
from covvvr import CVIntegrator
from covvvr.functions import NGauss

# Create 16-dimensional Gaussian
ng = NGauss(16)

# Print out all parameters of the class instance
print(ng, '\n')

# Create integrator class & use the 20th iteration as control variate
cvi = CVIntegrator(ng, evals=1000, tot_iters=50, cv_iters=20)

# Run the integration
cvi.integrate()

# Print info
cvi.compare(rounding=5)
```
This integrates a 16-dimensional Gaussian with 50 iterations and 5000 evaluations per iteration in Vegas. It uses the 20th iteration adaptation from Vegas as the control variate. The output is
```
NGauss(dimension=16, name=16D Gaussian, true_value=0.9935086032227194, mu=0.5, sigma=0.2)

         |   No CVs     |  With CVs
---------+--------------+------------
Mean     | 0.99528      | 0.99579
Variance | 4.17808e-06  | 3.54189e-06
St Dev   | 0.00204      | 0.00188
VRP      |              | 15.22696%
```
**Adding a User-Defined Function**: The make_func function allows for the definition of a user-defined function via the Function subclass. As an example, consider the 2-dimensional function \(f(x_{1},x_{2})=ax_{1}^{2}+bx_{2}\). It can be defined as
```
from covvvr import CVIntegrator, make_func

# Create function; note that it is vectorized using NumPy slicing
def f(self, x):
    return self.a * x[:, 0]**2 + self.b * x[:, 1]

# Create a class with name 'WeightedPoly' and assign values to the
# parameters in the function
wpoly = make_func(
    cname='WeightedPoly',
    dimension=2,
    function=f,
    name='Weighted Polynomial',
    a=0.3,
    b=0.6
)
# Print out parameters of the class (note 'true_value' isn't shown but it
# can be added if one wants to keep track of that)
print(wpoly, '\n')

# Create integrator class and use multiple control variates
cvi = CVIntegrator(
    function=wpoly,
    evals=1000,
    tot_iters=50,
    cv_iters=[10, 15, 20, 25, 30, 35]
)
# Run the integration
cvi.integrate()

# Print info
cvi.compare(rounding=5)
```
which outputs
```
WeightedPoly(dimension=2, name=Weighted Polynomial, a=0.3, b=0.6)

         |   No CVs     |  With CVs
---------+--------------+------------
Mean     | 0.39984      | 0.40004
Variance | 5.83781e-08  | 4.90241e-08
St Dev   | 2.41616e-04  | 2.21414e-04
VRP      |              | 16.02309%
```
Note that vectorization of the integrand greatly increases the speed of the computation. To not have to deal with classes, one can use the classic_integrate function that does the steps above and returns the CVIntegrator object. To run the previous code block, one would use
```
cvi = classic_integrate(
    function=f,
    evals=1000,
    tot_iters=50,
    bounds=[(0, 1), (0, 1)],
    cv_iters=20,
    cname='WeightedPoly',
    name='Weighted Polynomial',
    a=0.3,
    b=0.6
)
cvi.compare()
```
which outputs the same results as before. 
```
         |   No CVs     |  With CVs
---------+--------------+------------
Mean     | 0.400        | 0.400
Variance | 5.826e-08    | 5.025e-08
St Dev   | 2.414e-04    | 2.242e-04
VRP      |              | 13.746%
```
Note that in the classic_integrate function, bounds is not optional as it is needed in order to define the dimension of the integral. (In contrast, the bounds argument is optional for the CVIntegrator class, since CVIntegrator determines the dimension of the integral from the Function class being passed, and then sets the limits of the integration from 0 to 1 by default.) **Specifying Control Variate Iterations**: There are multiple valid arguments that can be passed to cv_iters for both the CVIntegrator class and classic_integrate, which we list here. * For using one control variate, pass an integer representing the iteration to use. * For multiple control variates, pass a list of integers representing the iterations to use. * The string 'all' will use every iteration. * The string all%n will use every iteration mod \(n\). So if one specifies tot_iters=15 and cv_iters='all%3', then the iterations used will be [3, 6, 9, 12]. * The previous result can be shifted by using all%n+b where \(b\) is the shift. So for tot_iters=15 and cv_iters='all%3+2', you'll get [2, 5, 8, 11, 14]. * The string auto1 will estimate the best single CV to use by sampling each possibility using auto1_neval samples specified by the user. **Manual Use of the Function Class**: To access the function call manually, use function or f. Using the second example, one can run * wpoly.f([1.2, 1.5]) # returns 1.3319999999999999 * wpoly.f([0.2, 1], [0.8, 0.8], [1, 2]) # returns array([0.612, 0.672, 1.5]) This wraps around the private, vectorized _function/_f used by Vegas. ## 5 Summary and Outlook The MC method is widely used in many disciplines, and the issue of variance reduction in MC estimates has existed as long as the method. A number of techniques have been developed to reduce the variance of such MC estimates relative to brute force sampling, including importance sampling and control variates. In this paper, we combined these two methods by leveraging the ability of the importance sampling method implemented in Vegas to automatically provide viable control variate candidates. We demonstrate and quantify the resulting variance reduction using several benchmark functions considered previously in a similar context [6]. Our new approach was able to reduce the variance by \(\mathcal{O}(1\text{-}50)\%\) using \(1\) CV and \(\mathcal{O}(1\text{-}60)\%\) using \(2\) CVs. The reduced variance comes at the cost of some computational time. In our experiments, the main source of additional computational overhead in our approach is the computation of the control variates for all the datapoints, using the intermediate Vegas distributions. The inherent theoretical advantages of our scheme should always be weighed against the possibility to improve the precision by simply increasing the number of data points. The recommended use case of our technique is in situations where the latter approach is comparatively less effective, which will be the case if computing the integrand \(f\) is sufficiently computationally expensive. Potential examples of such situations in particle physics include inference involving the slow simulation of the experimental apparatus, or the presence of additional convolutions in the definition of the integrand \(f(\mathbf{x})\) itself, e.g., due to transfer functions. There are several possible avenues for refining the technique proposed in this paper. 
The Vegas algorithm has a few parameters which control how the sampling distributions are iteratively adapted. The effect of these parameters on the performance of the Vegas-based control variates can be studied in order to construct good controls. Furthermore, the Vegas grid adaptation algorithm could potentially be modified for the specific purpose of providing good control variates. There are other possible avenues for reducing the variance of a MC estimate which leverage deep learning [22, 23] or specific domain knowledge about the integrand [24]. These ideas are left for future work. ## Acknowledgements The idea for this work arose during the Summer 2022 workshop "Interplay between Fundamental Physics and Machine Learning" at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. The authors would like to thank the Aspen Center for Physics for hospitality during the summer of 2022. Funding informationThis work is supported in parts by US DOE DE-SC0024407 and DOE DE-SC0022148. JS is supported by University of Kansas Research Excellence Initiative Award and US DOE AI-HEP grant. SM and PS are partially supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics QuantISED program under the grants "HEP Machine Learning and Optimization Go Quantum", Award Number 0000240323, and "DOE QuantiSED Consortium QCCFP-QMLQCF", Award Number DE-SC0019219. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DEAC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. ## Code and data availability The code and data that support the findings of this study are openly available at the following URL: [https://github.com/crumpstrr33/covvvr](https://github.com/crumpstrr33/covvvr). ## Appendix A Multiple Control Variates For \(N_{\text{CV}}>1\) control variates \(g_{i}(\mathbf{x})\), the integrand is modified in analogy to (11) \[f_{c}(\mathbf{x})=f(\mathbf{x})+\sum_{i=1}^{N_{\text{CV}}}c_{i}(g_{i}(\mathbf{ x})-E[g_{i}]) \tag{32}\] and its variance is: \[\begin{split}\text{Var}[f_{c}]&=\text{Var}\Bigg{[} f(\mathbf{x})+\sum_{i=1}^{N_{\text{CV}}}c_{i}(g_{i}(\mathbf{x})-E[g_{i}]) \Bigg{]}\\ &=\text{Var}[f]+2\text{Cov}\Bigg{(}f,\sum_{i=1}^{N_{\text{CV}}} c_{i}(g_{i}(\mathbf{x})-E[g_{i}])\Bigg{)}+\text{Var}\Bigg{[}\sum_{i=1}^{N_{ \text{CV}}}c_{i}(g_{i}(\mathbf{x})-E[g_{i}])\Bigg{]}\\ &=\text{Var}[f]+2\text{Cov}\Bigg{(}f,\sum_{i=1}^{N_{\text{CV}}} c_{i}g_{i}\Bigg{)}+\text{Var}\Bigg{[}\sum_{i=1}^{N_{\text{CV}}}c_{i}g_{i} \Bigg{]}\\ &=\text{Var}[f]+2\sum_{i=1}^{N_{\text{CV}}}c_{i}\text{Cov}(f,g_ {i})+\sum_{ij}^{N_{\text{CV}}}c_{i}c_{j}\text{Cov}(g_{i},g_{j})\end{split} \tag{33}\] where \(\text{Cov}(g_{i},g_{i})=\text{Var}[g_{i}]\). Taking derivatives with respect to the coefficients \(c_{j}\) gives \[\frac{\partial\text{Var}[f_{c}]}{\partial c_{j}}=2\text{Cov}(f,g_{j})+2\sum_{ i=1}^{N_{\text{CV}}}c_{i}\text{Cov}(g_{i},g_{j}). \tag{34}\] Setting that equal to zero implies \[\sum_{i=1}^{N_{\text{CV}}}c_{i}\text{Cov}(g_{i},g_{j})=-\text{Cov}(f,g_{j}). \tag{35}\] If we let \(A_{j}=-\text{Cov}(f,g_{j})\) and \(B_{ij}=\text{Cov}(g_{i},g_{j})\) then \[\sum_{i=1}^{N_{\text{CV}}}B_{ij}c_{i}=A_{j}\qquad\text{or}\qquad\mathbf{c}= \mathbf{B}^{-1}\mathbf{A}. \tag{36}\]
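For completeness, a minimal numerical sketch (not taken from CoVVVR) of Eq. (36) is given below: the optimal coefficients for several control variates are obtained by solving the linear system \(\mathbf{B}\mathbf{c}=\mathbf{A}\) built from sample covariances. The toy arrays stand in for the per-event values of \(f\) and of two control variates.
```
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

# Placeholder per-event samples (assumed data): f and two correlated control variates.
z = rng.normal(size=N)
f_vals = z ** 2 + 0.1 * rng.normal(size=N)
g_vals = np.vstack([z, z ** 2 + rng.normal(size=N)])      # shape (N_CV, N)

A = -np.array([np.cov(f_vals, g)[0, 1] for g in g_vals])  # A_j = -Cov(f, g_j)
B = np.cov(g_vals)                                         # B_ij = Cov(g_i, g_j)
c = np.linalg.solve(B, A)                                  # optimal coefficients, Eq. (36)

# Variance of the modified integrand f_c = f + sum_i c_i (g_i - E[g_i]).
f_c = f_vals + c @ (g_vals - g_vals.mean(axis=1, keepdims=True))
print("optimal c :", c)
print("Var[f]    :", f_vals.var(ddof=1))
print("Var[f_c]  :", f_c.var(ddof=1))
```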
2309.05615
Skyrmions in nanorings: a versatile platform for Skyrmionics
The dynamical properties of skyrmions can be exploited to build devices with new functionalities. Here, we first investigate a skyrmion-based ring-shaped device by means of micromagnetic simulations and Thiele equation. We subsequently show three applications scenarios: (1) a clock with tunable frequency that is biased with an electrical current having a radial spatial distribution, (2) an alternator, where the skyrmion circular motion driven by an engineered anisotropy gradient is converted into an electrical signal, and (3) an energy harvester, where the skyrmion motion driven by a thermal gradient is converted into an electrical signal, thus providing a heat recovery operation. We also show how to precisely tune the frequency and amplitude of the output electrical signals by varying material parameters, geometrical parameters, number and velocity of skyrmions, and we further prove the correct device functionality under realistic conditions given by room temperature and internal material defects. Our results open a new route for the realization of energy efficient nanoscale clocks, generators, and energy harvesters.
Dimitris Kechrakos, Vito Puliafito, Alejandro Riveros, Jiahao Liu, Wanjun Jiang, Mario Carpentieri, Riccardo Tomasello, Giovanni Finocchio
2023-09-11T17:03:00Z
http://arxiv.org/abs/2309.05615v1
# Skyrmions in nanorings: a versatile platform for Skyrmionics ###### Abstract The dynamical properties of skyrmions can be exploited to build devices with new functionalities. Here, we first investigate a skyrmion-based ring-shaped device by means of micromagnetic simulations and Thiele's equation. We subsequently show three applications scenarios: (1) a clock with tunable frequency that is biased with an electrical current having a radial spatial distribution, (2) an alternator, where the skyrmion circular motion driven by an engineered anisotropy gradient is converted into an electrical signal, and (3) an energy harvester, where the skyrmion motion driven by a thermal gradient is converted into an electrical signal, thus providing a heat recovery operation. We also show how to precisely tune the frequency and amplitude of the output electrical signals by varying material parameters, geometrical parameters, number and velocity of skyrmions, and we further prove the correct device functionality under realistic conditions given by room temperature and internal material defects. Our results open a new route for the realization of energy efficient nanoscale clocks, generators, and energy harvesters. Introduction Magnetic skyrmions are fascinating topological textures with a number of intriguing fundamental properties and several potential engineering applications. Skyrmions are characterized by an integer topological index (skyrmion number) \(Q=\frac{1}{4\pi}\int\mathbf{m}\cdot\big{(}\partial_{x}\mathbf{m}\times\partial_{y}\mathbf{m} \big{)}dxd\mathbf{y}\)[1, 2], which represents the number of times the magnetization vector wraps a surface of the unit sphere. Topologically, skyrmions transformation into a phase with a different \(Q\) is forbidden from a mathematical point of view [1, 2]. However, physically, such a "topological protection" provides an additional barrier for skyrmion annihilation/nucleation [3, 4]. In view of skyrmion-based technological applications, two crucial requirements are necessary: electrical control and room temperature stability of skyrmions. So far, skyrmion manipulation by spin-orbit torque (SOT) is the most frequently studied approach [5, 6, 7, 8, 9, 10]. However, promising alternative methods have been also proposed. Some of them are based on gradients of external magnetic field [11, 12, 13, 14], perpendicular anisotropy [15, 16], as well as thermal gradient (skyrmion-caloritronics) [17, 18, 19]. Room-temperature skyrmionic states have been observed in a wide range of materials, such as \(B_{20}\) compounds [20, 21], ferromagnetic single layer (FM) in contact with a heavy metal (HM) [10, 22, 23], HM\({}_{1}\)/FM/HM\({}_{2}\) ferromagnetic multilayers [8, 9, 24, 25, 26, 27, 28, 29], ferrimagnets [30, 31], synthetic antiferromagnets [32, 33, 34], and combination of the previous ones [35, 36, 37]. A prime application of skyrmions is the racetrack memory that has stimulated a lot of research efforts [38, 39, 40]. In those devices, the digital information can be coded in the presence/absence of a skyrmion or in two different types of skyrmions [35, 37, 41]. However, skyrmion racetrack memory suffers from a big drawback, intrinsically linked to the topological character of ferromagnetic skyrmions, which is the occurrence of a finite skyrmion Hall angle [9, 10, 42] that quantifies the undesired drag of skyrmions in a direction that is transverse to the racetrack. 
Many strategies have been proposed to reduce and ultimately suppress the skyrmion Hall angle, from engineering anisotropy in ferromagnetic tracks [15, 43, 44] to the use of ferri- [30] and antiferro-magnets [45, 46, 47, 48], including SAFs [49, 50]. Predicted conventional applications also include spin-torque oscillators (STO) [51, 52, 53, 54], microwave detectors [55], as well as logic gates [56, 57, 58]. Furthermore, skyrmions have been suggested for unconventional applications, such as in a reshuffle device for probabilistic computing [59, 60], reservoir computing [61, 62, 63], true random number generators [64], Boolean logic computing gates [56, 65, 66, 67], neuromorphic computing [68, 69], artificial spiking neurons [70, 71], memristive networks [72, 73, 74], and solution of the shortest path problem [75]. A promising geometry for skyrmion-based devices is a nanoring, where skyrmions have been previously studied experimentally with respect to their field gradient-driven motion [12], and theoretically for microwave emissions [76] and controlled nucleation [77]. Here, we expand the potential usage of such a structure by designing three applications: (1) a clock with tunable frequency, (2) a skyrmion-alternator, and (3) a skyrmion-based energy harvester. We first demonstrate these functionalities by means of massive micromagnetic simulations and analytical calculations based on the Thiele formalism. Subsequently, we propose their potential experimental implementations. The working principle of the skyrmion clock (device-1) is based on the usage of a nanoring to generate a periodic electrical signal by repeatedly passing skyrmions through a detector region of the nanoring. Skyrmion motion is induced via standard SOT originating from a spin-current with a radial distribution. Skyrmion detection can be achieved via either the topological Hall resistivity [27, 28, 78] or the tunnelling magnetoresistance (TMR) [79, 80]. These devices allow not only for frequency tunability via the number of skyrmions circulating in the nanoring, but also for precise tunability via the injected current and/or material parameters. The skyrmion alternator (device-2) is realized with a contactless detection via a Faraday coil. The time variation of the stray field, generated by the periodic passage of the skyrmion under the Faraday coil, generates an electrical ac voltage with an amplitude proportional to the skyrmion velocity. As we demonstrate in the present work, for a set of parameters corresponding to state-of-the-art skyrmionic materials, the generated voltage amplitude can reach values of the order of 1 \(\mu\)V. In such an application, to increase the energy conversion efficiency, we envisage skyrmion motion driven by an engineered magnetic anisotropy [15, 16], which can be considered as an ultralow-power driving mechanism. The skyrmion alternator is an important prediction of our work since it represents the standard approach used in today's technology for the generation of electrical energy, which can be potentially used to drive nano/micro machines. The skyrmion energy harvester (device-3) is based on the partial conversion of thermal energy due to heat dissipation into electrical energy. We envisage the presence of a heat source, for example, a microprocessor, located at the center of the nanoring. The thermal energy is dissipated towards the outer part of the nanoring, thus creating a thermal gradient. 
The latter can trigger the skyrmion motion that is converted into an output voltage when the skyrmion passes underneath a Faraday coil. Considering also the feature of zero-input energy for gradient-driven motion, the suggested skyrmion application represents a very promising path towards the on-chip conversion from skyrmion motion and dissipated heat into electricity. We prove the stable operation of device-1 also under the influence of thermal fluctuations and internal material defects, and the robustness we demonstrated is also valid for the devices-2,3 scheme. ## 2 Device and Modeling ### Device description Figure 1(a) shows the proposed device for the skyrmion clock with tunable frequency. It is composed of a FM nanoring on top of an extended HM layer, with two gold contacts (yellow in the figure), one is inside the nanoring and the other surrounds the nanoring. The nanoring has external diameter \(d_{\rm nr}\), track width \(w\) and a thickness \(t_{\rm FM}\). Typical material parameters are chosen to stabilize Neel skyrmions [39, 81]. The driving force is a non-uniform radial charge current \(j\) flowing into the HM that varies along the radial directions as \(j_{HM}(r)=jR_{0}/r\), where \(j\) is the injected current density and \(R_{0}\) the inner radius of the nanoring. The current density generates an SOT with a counterclockwise (CCW) circular spin polarization **p**[39]. The skyrmion dynamics is characterized by a circular trajectory (see red curve in Fig. 1(a)) where the radial motion due to the skyrmion Hall angle [42] is almost compensated by the confining potential [39]. For this choice, the current-induced skyrmion Hall effect leads to a weakly inward trajectory of the skyrmions that causes their eventual annihilation at the inner boundary of the nanoring at large currents. The choice of an outward trajectory is less preferable, since the skyrmions would gradually enter regions of the nanoring with lower current density that would halt their motion. The device is equipped with a detector that is based on an MTJ having a width \(w_{\delta}\)=60 nm and a length equal to the width \(w\) of the nanoring. Note that the detector can be also implemented with elliptical or circular geometry. The detector width \(w_{\delta}\) is designed to be larger than the single skyrmion size considered (skyrmion diameter \(D_{\rm{sk}}\approx\) 35-40nm) to achieve an efficient detection. The presence of a skyrmion in the detector region leads to variation of the magnetoresistive signal of the MTJ [79, 80, 82]. Moreover, this scheme allows for generating a clock signal in different positions of the nanotrack thanks to additional MTJ detectors and to the fact that the skyrmion is moving at the same velocity along the nanoring. In other words, we can achieve an intrinsic spatial synchronization of the clock frequency. Figure 1(b) and (c) show the proposed device for the skyrmion alternator and energy harvester, respectively. Geometrically, these are similar to the previous one, but here the extended HM is necessary only to provide a sufficiently large interfacial Dzyaloshinskii-Moriya interaction (iDMI). The skyrmion motion is promoted by an engineered perpendicular anisotropy radial gradient (device-2) and a thermal radial gradient (device-3). The skyrmion motion in converted into an output electrical voltage \(v_{L}\) when skyrmions pass below a Faraday coil (yellow circle in Fig. 
1 (b) and (c)) located at height \(h\) from the nanoring surface due to the well-known Faraday-Neumann-Lenz effect \(v_{L}=-\,d\Phi/dt\,\), where \(d\Phi\) represents the flux variation generated by the skyrmion movement below the coil. Table 1 summarizes the three proposed applications. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Application** & **Source of skyrmion motion** & **Detection / Conversion** \\ \hline Skyrmion clock (device-1) & SOT with a tangential spin-polarization (**p**) & Magnetoresistive effect in an MTJ \\ \hline Skyrmion alternator (device-2) & Linear gradient of the perpendicular anisotropy & Faraday coil \\ \hline Energy harvester (device-3) & Linear thermal gradient & Faraday coil \\ \hline \end{tabular} \end{table} Table 1: Summary of the versatile applications for skyrmions in nanorings. ### Micromagnetic model The device design is based on massive micromagnetic simulations performed by means of the state-of-the-art micromagnetic solver _PETASPIN_, which numerically integrates the Landau-Lifshitz-Gilbert equation, augmented by the Slonczewski SOT term [83, 84]: \[\frac{d\mathbf{m}}{d\tau}=\mathbf{m}\times\mathbf{h}_{eff}+\alpha_{G}\left(\mathbf{m}\times \frac{d\mathbf{m}}{d\tau}\right)-\frac{g\mu_{B}\theta_{SH}}{2\gamma_{0}eM_{s}^{2}t_{FM}}\left[\mathbf{m}\times\left(\mathbf{m}\times\left(\hat{\mathbf{z}}\times\mathbf{j}_{HM}\right)\right)\right] \tag{1}\] where \(\mathbf{m}=\mathbf{M}/M_{s}\) is the normalized magnetization, \(\tau=\gamma_{0}M_{s}t\) is the dimensionless time, with \(\gamma_{0}\) being the gyromagnetic ratio and \(M_{s}\) the saturation magnetization. \(\mathbf{h}_{eff}\) is the dimensionless effective field that includes the external, exchange, iDMI, magnetostatic and perpendicular anisotropy fields. \(\alpha_{G}\) is the Gilbert damping, \(g\) is the Landé factor, \(\mu_{B}\) is the Bohr magneton, \(\theta_{SH}\) is the spin-Hall angle, \(e\) is the electron charge, \(t_{FM}\) is the thickness of the ferromagnetic layer, \(\mathbf{\hat{z}}\) is the unit vector along the out-of-plane direction and \(\mathbf{j}_{HM}\) is the electrical current density flowing through the HM layer and giving rise to the SOT. The results presented here are for skyrmions in a nanoring with external diameter \(d_{\rm nr}\)=460 nm, width \(w\)=150 nm and thickness \(t_{FM}\)=1 nm. Similar qualitative results are also observed for different nanoring sizes. We discretize the ring with cell dimensions 2\(\times\)2\(\times\)1 nm\({}^{3}\). We use the following parameters, unless differently indicated [39]: \(M_{S}=600\) kA/m, exchange constant \(A=10\) pJ/m, iDMI parameter \(D_{m}=1.8\) mJ/m\({}^{2}\), perpendicular anisotropy constant \(K_{u}=0.80\) MJ/m\({}^{3}\), spin-Hall angle \(\theta_{SH}=0.33\) and \(\alpha_{G}=0.01\). 
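To make the drive concrete, the short sketch below evaluates the radial current profile \(j_{HM}(r)=jR_{0}/r\) across the track for the geometry above; the inner radius \(R_{0}=80\) nm is inferred from \(d_{\rm nr}=460\) nm and \(w=150\) nm, and \(j=20\) MA/cm\({}^{2}\) is one of the drive values considered later.
```
import numpy as np

j  = 20e6 * 1e4          # injected current density: 20 MA/cm^2 expressed in A/m^2
R0 = 80e-9               # inner radius of the nanoring (inferred from d_nr and w), m
R1 = 230e-9              # outer radius, m

r = np.linspace(R0, R1, 6)
j_r = j * R0 / r         # 1/r decay of the current density in the HM layer
for ri, ji in zip(r, j_r):
    print(f"r = {ri * 1e9:6.1f} nm   j_HM = {ji / 1e10:5.2f} MA/cm^2")
```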
### Thiele's equation The analytical description of the skyrmion dynamics in the nanoring for the three device mechanisms can be achieved by the Thiele equation [39, 85] \[\mathbf{G}\times\mathbf{v}-\alpha_{G}D\cdot\mathbf{v}+\mathbf{F}_{SOT}+\mathbf{F}_{K}+\mathbf{F}_{T}= 0 \tag{2}\] where \(\mathbf{G}=G\hat{\mathbf{z}}\) is the gyrovector, \(\mathbf{v}\) the skyrmion (core) velocity and \(D\) the dissipative tensor matrix, respectively, while \(\mathbf{F}_{SOT}\) is the force due to the SOT with tangential spin polarization (device-1), \(\mathbf{F}_{K}\) the force due to the radial gradient of perpendicular anisotropy (device-2) and \(\mathbf{F}_{T}\) the force due to the radial temperature gradient (device-3). In particular, \(\mathbf{F}_{SOT}=4\pi B\,j_{HM}\), where \(\mathbf{j}_{HM}(\mathbf{r})=(jR_{0}/r)\hat{\mathbf{r}}\) is the radially symmetric charge current density, \(B=g\mu_{B}\theta_{SH}/(2\gamma_{0}eM_{S}^{2}t_{FM})\) multiplied by a scaling factor to allow for racetrack shape effects [39], and \(R_{0}\) the internal radius of the nanoring. On the other hand, the forces due to anisotropy and thermal gradient for devices-2,3 can be calculated through derivatives of the effective potential \(V_{eff}\) associated with the effective field \(\mathbf{h}_{eff}\) as obtained within the effective field theory [19]: \[V_{eff}=2\pi\ \Big{(}\ 4A\left(\ \frac{\Delta}{R_{Sk}}+\frac{R_{Sk}}{ \Delta}\right)+4K\ \Delta-2\pi\ D_{m}R_{Sk}\Big{)}, \tag{3}\] where \(\Delta=\sqrt{\text{A/K}}\) is the domain wall width, \(R_{SK}=\Delta/\sqrt{2\ \Big{(}\ 1-\frac{\pi D_{m}}{4\sqrt{AK}}\Big{)}}\) the equilibrium skyrmion radius, \(K=K_{u}-\mu_{0}M_{S}^{2}/2\) the effective magnetic anisotropy strength within the thin-film approximation and \(D_{m},K_{u}\) are the iDMI parameter and perpendicular anisotropy strength. Note that \(V_{eff}\) is spatially uniform for device-1. Nevertheless, for device-2 it varies as \(K_{u}=a_{K}\ r+b_{K}\) leading to a radially-dependent potential \(V_{eff}=V_{eff}(\mathbf{r})\) which induces the force due to anisotropy gradient as \[\mathbf{F}_{K}=-\frac{\hat{r}}{\mu_{0}M_{S}^{2}}\,a_{K}\,\frac{\partial V_{eff}} {\partial K}=\hat{r}F_{K}(r)\, \tag{4}\] with \(a_{K}\) the anisotropy gradient. Similarly, for device-3 we assume a temperature gradient in the radial direction, \(T=a_{T}\ r+b_{T}\). We allow for temperature-dependent micromagnetic material parameters through the well-established scaling relations [19, 81], \(M_{S}\equiv M_{S}(T)=M_{S0}\big{(}1-(T/T_{lim})^{\delta}\big{)}\), \(A\equiv A(T)=A_{0}\ m(T)^{\alpha}\), \(D_{m}\equiv D_{m}(T)=D_{m0}\ m(T)^{\beta}\), and \(K_{u}\equiv K_{u}(T)=K_{u0}\ m(T)^{\gamma}\), where \(m(T)=M_{S}(T)/M_{S0}\) and \(M_{S0},A_{0},D_{m0},K_{u0}\) are the corresponding material parameters at zero temperature. The exponents are given as [19, 81] \(\alpha=\beta=\delta=3/2\) and \(\gamma=3\), and \(\text{T}_{\text{lim}}=1120\) K is the Curie temperature. Eventually, the temperature gradient leads to a temperature-dependent effective potential \(V_{eff}=V_{eff}(T)\), with a position dependent temperature \(T=T(r)\), and the resultant force due to the temperature gradient in device-3 reads: \[\mathbf{F}_{T}=-\frac{\hat{r}}{\mu_{0}M_{S}^{2}}\,a_{T}\,\frac{\partial V_{eff}}{\partial T}=\hat{r}F_{T}(r). \tag{5}\]
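A rough numerical sketch of Eqs. (3)-(4) with the nominal material parameters quoted above is given below; the anisotropy gradient \(a_{K}\) is an illustrative value, not one used in the simulations, and the derivative of \(V_{eff}\) is taken numerically rather than symbolically.
```
import numpy as np

mu0 = 4e-7 * np.pi
A_ex, Dmi, Ku0, Ms = 10e-12, 1.8e-3, 0.80e6, 600e3   # J/m, J/m^2, J/m^3, A/m

def v_eff(Ku):
    # Effective skyrmion potential of Eq. (3) for a given perpendicular anisotropy.
    K = Ku - 0.5 * mu0 * Ms**2                        # thin-film effective anisotropy
    delta = np.sqrt(A_ex / K)                         # domain-wall width
    r_sk = delta / np.sqrt(2.0 * (1.0 - np.pi * Dmi / (4.0 * np.sqrt(A_ex * K))))
    return 2.0 * np.pi * (4.0 * A_ex * (delta / r_sk + r_sk / delta)
                          + 4.0 * K * delta - 2.0 * np.pi * Dmi * r_sk)

# Force from an anisotropy gradient, Eq. (4): F_K ~ -a_K (dV_eff/dK) / (mu0 Ms^2).
a_K = 1.0e12                                          # illustrative gradient (assumption)
dK = 1.0e2
dV_dK = (v_eff(Ku0 + dK) - v_eff(Ku0 - dK)) / (2.0 * dK)
F_K = -a_K * dV_dK / (mu0 * Ms**2)

print("V_eff(Ku0) =", v_eff(Ku0))
print("F_K        =", F_K)
```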
Since the three forces \(\mathbf{F}_{SOT}\), \(\mathbf{F}_{K}\) and \(\mathbf{F}_{T}\) are all in the radial direction, it is convenient to express Eq. (2) in polar coordinates \((r,\phi)\) of the skyrmion-core position, which can then be cast in matrix notation in the polar basis as \[\begin{pmatrix}-\alpha_{G}D&-G\\ G&-\alpha_{G}D\end{pmatrix}\begin{pmatrix}\dot{r}\\ r\dot{\phi}\end{pmatrix}=-F(r)\begin{pmatrix}1\\ 0\end{pmatrix} \tag{6}\] where \(F(r)\) denotes any of the three types of forces \(F_{SOT}(r)\,,F_{K}(r),F_{T}(r)\) acting on the skyrmion. From Eq.(6) we obtain for the radial and angular components of the skyrmion velocity: \[\dot{r}=\alpha_{G}D\;F(r)/(G^{2}+\alpha_{G}^{2}D^{2})\quad\quad\text{and}\quad \quad r\dot{\phi}=G\;F(r)/(G^{2}+\alpha_{G}^{2}D^{2}), \tag{7}\] with the dot sign indicating a derivative with respect to dimensionless time \(\tau\). Elimination of time and integration of Eq. (7) provides the skyrmion core trajectory \[r(\phi)=r_{0}\,\exp\left[\alpha_{G}D(\phi-\phi_{0})/G\right], \tag{8}\] which is a logarithmic spiral for the skyrmion motion in the three devices, with \(r_{0}\), \(\phi_{0}\) the initial values (at \(\tau=0\)). Notice that the shape of the skyrmion trajectory, Eq.(8), is independent of the particular functional form of the radial force \(F(r)\). On the other hand, the time evolution of the radial and azimuthal components of the skyrmion trajectory depend explicitly on the functional form of \(F(r)\), which can be obtained by integrating Eq. (7). Analytical expressions of \(F(r)\) were obtained from Eq.(4) and Eq.(5) using symbolic algebra software. ## III Numerical results - Skyrmion clock (Device-1) We discuss first our proposal for a skyrmion clock, where the motion is driven by the SOT from the electrical current and the skyrmion detection is performed via an MTJ detector with perpendicular pinned layer in order to sense, through a change of its resistance, the variation of the out-of-plane spatially-averaged magnetization component \(<\!\!m_{\mathrm{z}}\!\!>\). In the case of a single skyrmion circulating in the nanoring, the micromagnetic results are summarized in Fig. 2. Clearly, the time evolution of \(<\!\!m_{\mathrm{z}}\!\!>\) under the MTJ is characterized by a series of periodic dips ("comb" structure, Figs. 2(a)-(d)), which occur when the skyrmion passes underneath the detector. Simultaneously with the drop of magnetization, we show that the topological charge in the detector region reaches the value \(Q=-1\), signifying the presence of a skyrmion under the MTJ. The most important feature here is an evident periodicity of the skyrmion signal for up to 4-5 laps around the nanoring in a time interval up to approximately 200 ns that points to efficient pulse generation with constant frequency. However, we observe two features linked to the interaction of the skyrmion with the internal boundary of the nanoring as it completes more laps: (_i_) a delay in the last peak of the \(<\!\!m_{\mathrm{z}}\!\!>\) signal indicating the slow-down of the skyrmion, and (_ii_) smaller \(<\!\!m_{\mathrm{z}}\!\!>\) variations below the MTJ indicating a skyrmion size reduction, while \(Q\) remains the same. From recording the time-instants corresponding to the dips of the \(<\!\!m_{\mathrm{z}}\!\!>\) signal, we show that successive skyrmion laps by the detector vary linearly with time in Fig. 2(e). To support the results from micromagnetic simulations, we also compare with the analytical computations based on the Thiele equation described in the previous section. For the clock device, \(F(r)=F_{SOT}(r)=4\pi B\,jR_{0}/r\) and Eq. 
(7) can be written as \[\dot{r}=u\,\alpha_{G}D/(Gr)\quad\mbox{and}\quad\quad r\dot{\phi}=u/r, \tag{9}\] with \(u=\ [4\pi BG/(G^{2}+\alpha_{G}^{2}D^{2})]R_{0}j\). Then the time evolution of the radial and azimuthal components read \[r(\tau)=\sqrt{r_{0}^{2}+2u\alpha_{G}D\tau/G}\qquad\mbox{and}\quad\quad\phi(\tau )=\phi_{0}+\frac{G}{2\alpha_{G}D}\ln\left(1+\frac{2u\alpha_{G}D}{Gr_{0}^{2}}\tau\right) \tag{10}\] The elapsed time \(\tau_{n}\) for a skyrmion to perform \(n\)-laps through the detector is obtained from Eq. (10) by setting \(\phi(\tau_{n})=\phi_{0}\pm 2\pi n\), as: \[\tau_{n}=\frac{G\,r_{0}^{2}}{2\alpha_{G}Du}\left(e^{\pm 4\pi n\alpha_{G}D/G}-1\right) \tag{11}\] with the upper (lower) sign in the exponential term corresponding to CCW (CW) motion of the skyrmion along the spiral trajectory. The frequency (in Hz) of the \(n\)-th full circle along the spiral is defined as \(f_{n}=\gamma_{0}M_{s}\,(\tau_{n}-\tau_{n-1})^{-1}\), where the prefactor \(\gamma_{0}M_{s}\) converts the dimensionless time back to physical units, and becomes: \[f_{n}=\gamma_{0}M_{s}\,\frac{2\alpha_{G}Du}{Gr_{0}^{2}}\,e^{\mp 4\pi n\alpha_{G}D/G}\left(1-e^{\mp 4\pi\alpha_{G}D/G}\right)^{-1} \tag{12}\] Note that the ratio of frequencies for successive laps is given by \[\frac{f_{n+1}}{f_{n}}=\ e^{\mp 4\pi\alpha_{G}D/G} \tag{13}\] which implies that the frequency of the generated electrical pulses due to sequential passages of the skyrmion through the detector region is not constant. However, for most ferromagnetic materials of interest \(\alpha_{G}D/G\ll 1\) and thus, nearly constant frequency can be achieved as indicated by Eq.(13). As shown in Fig. 2(e) and (f), the analytical and numerical outcomes are in good agreement for both the skyrmion frequency and its number of laps in the detector region for different current densities and magnetic parameters, where the number of skyrmion laps and frequency predicted by the Thiele formalism were obtained by Eqs. (11) and (12), respectively. A time delay of the laps relative to the predictions of the analytical model is attributed to the skyrmion-boundary interaction that hinders the skyrmion motion. Interestingly, the boundary-free motion under a radial current density predicts a weakly accelerated skyrmion motion (see Fig. 4 in Section B below), but the skyrmion-boundary interaction competes with this effect, eventually restoring a constant velocity (see Fig. 2(e)). An advantage of this device is its frequency tunability, which can be achieved by changing the driving current density, as indicated by the linear scaling of the frequency with current density in Fig. 2(e), and by the number of skyrmions as discussed in the next paragraph. In addition, the material parameters can be used to set the desired frequency region [39, 81] (Fig. 2(f)). We also wish to highlight that the skyrmion configuration has an intrinsic reset mechanism due to the magnetostatic interactions which re-align the skyrmion position to the center of the nanoring in the absence of applied current. The thickness of the ferromagnet can influence different parameters. The most sensitive are the interfacial perpendicular anisotropy, the iDMI and the amplitude of the SOT. All three contributions scale linearly as a function of the thickness [86, 87]. 
We have performed simulations with different values of the ferromagnetic film thickness (from 1.0 nm up to 2.0 nm), keeping a fixed set of parameters (see Section B) and a constant current density. Our simulations indicated a drop in frequency of up to 50% when the layer thickness increases from 1.0 nm to 2.0 nm (not shown here). ### Clock with multiple skyrmions in the nanoring Higher pulse frequencies can be achieved when an \(N_{\rm Sk}\)-skyrmion chain is stabilized in the nanoring. In simulations, we place multiple skyrmions in the nanoring and relax the system prior to applying the electrical current. The skyrmions are located at symmetrical points in the device. In particular, for the case of \(N_{\rm Sk}\)=2, we create the skyrmions on a circle at the half-width of the nanoring at angular positions of 0\({}^{\circ}\) and 180\({}^{\circ}\) with respect to the detector region and, for the case of \(N_{\rm Sk}\)=4, we place skyrmions at angular positions of 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\). Figures 3(a) and (b) show the TMR signal due to the passage of multiple skyrmions for a fixed \(j\)=20 MA/cm\({}^{2}\) for \(N_{\rm Sk}\)=2 and \(N_{\rm Sk}\)=4, respectively, where the decrease of the \(<\)\(m_{\rm z}\)\(>\) period with the number of skyrmions is evident. To support this finding, we depict in Fig. 3(c) the number of laps _vs._ time for different current densities, where the curve slope for \(N_{\rm Sk}\)=4 is always larger than for the \(N_{\rm Sk}\)=2 case. Finally, the previous results are confirmed in Fig. 3(d), where the frequency as a function of the current density for different \(N_{\rm Sk}\) is shown, thus underlining the role of the number of skyrmions in tuning the working frequency of the device. Thiele's equation can be extended to account for \(N_{\rm Sk}\) skyrmions. In particular, Eq. (12) is also valid in the case of a skyrmion chain composed of \(N_{\rm Sk}\) equidistant and non-interacting skyrmions circulating on the nanoring. In this case, the detected frequency corresponds to the frequency of a single skyrmion scaled by the number of skyrmions in the chain \[f_{n}(N_{\rm Sk})=N_{\rm Sk}\cdot f_{n}(1) \tag{14}\] However, for a dense skyrmion chain, namely when the core-to-core distance approaches the skyrmion diameter, the detected frequency is anticipated to deviate slightly from the predictions of Eq. (14). The physical mechanism causing these deviations is the skyrmion-skyrmion repulsion, which alters the skyrmion shape and hence the dynamics of the interacting skyrmion chain. Nonetheless, for our study, the analytical predictions are in line with the micromagnetic simulation results. Weak deviations from the constant pulse frequency can be observed for high current density and long times (Fig. 3(c)) due to the interaction of the skyrmion with the inner boundary of the nanoring. However, due to the increased number of skyrmions in the device, a larger number of pulses is generated relative to the single-skyrmion case before the linearity is perturbed. Noticeably, the overlap of different datasets in Fig. 3(c), as for example the case of 2 skyrmions under \(j\)=20 MA/cm\({}^{2}\) and the case of 4 skyrmions under \(j\)=10 MA/cm\({}^{2}\), demonstrates the equivalence of the two controlling parameters of the pulse frequency, namely the driving current density \(j\) and the number of injected skyrmions \(N_{\rm Sk}\). 
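To make the expressions above concrete, the short Python sketch below evaluates the lap times and per-lap frequencies of Eqs. (11)-(12) and the multi-skyrmion scaling of Eq. (14). All numerical values (damping, gyrocoupling, dissipative constant, drive strength, initial radius, and the conversion factor \(\gamma_{0}M_{s}\)) are illustrative placeholders, not the parameters used in the simulations.

```python
import numpy as np

# Illustrative evaluation of the Thiele-model clock frequency, Eqs. (11), (12), (14).
# All numbers below are placeholder assumptions, not the simulated device parameters.
alpha_G   = 0.05              # Gilbert damping
G         = 4 * np.pi         # gyrocoupling magnitude (|Q| = 1 skyrmion)
D         = 4 * np.pi         # dissipative-tensor component
u         = 1.0e-3            # drive strength u = [4*pi*B*G/(G^2 + alpha_G^2 D^2)] R0 j (dimensionless)
r0        = 20.0              # initial radial position of the core (dimensionless units)
gamma0_Ms = 2.21e5 * 8.0e5    # gamma_0 * M_s in 1/s, converts dimensionless time to seconds

def tau_n(n):
    """Dimensionless elapsed time for n CCW laps, Eq. (11) with the upper sign."""
    return (G * r0**2 / (2 * alpha_G * D * u)) * np.expm1(4 * np.pi * n * alpha_G * D / G)

laps = np.arange(0, 6)
f_n  = gamma0_Ms / np.diff(tau_n(laps))        # f_n = 1/(tau_n - tau_{n-1}) in Hz, cf. Eq. (12)
print("per-lap frequencies (Hz):", f_n)
print("lap-to-lap ratio:", f_n[1:] / f_n[:-1])  # ~ exp(-4*pi*alpha_G*D/G), Eq. (13): close to 1
print("with N_Sk = 4 skyrmions:", 4 * f_n[0], "Hz")  # Eq. (14): frequency scales with N_Sk
```

Because \(\alpha_{G}D/G\ll 1\) for the assumed values, the printed lap-to-lap ratio stays close to unity, illustrating the nearly constant pulse frequency discussed above.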
We wish to stress again that an important requirement for achieving a linear scale-up of the signal frequency is to have the skyrmions equidistant in the device, as non-equidistant skyrmions would generate an electrical pulse with two or more constituent frequencies. For initial states where the skyrmions are not equidistant, we observe a rearrangement to the equidistant configuration thanks to the magnetostatic field (not shown) [12]. ### Extended results of Thiele's equation Figure 4 shows the results as predicted from the solution of Thiele's equation, Eq. (11), for a larger time interval and the material parameters mentioned earlier [39]. Interestingly, the time evolution of the number of laps follows a weak logarithmic dependence, which suggests that intrinsically the frequency generated by the skyrmion rotation cannot be constant, unless a damping parameter close to zero is considered. However, for an initial time interval (\(\sim\)200 ns), the laps-time relation can be quite accurately approximated by a linear dependence, which in turn implies a constant frequency of motion, a prerequisite for clock functionality. ### Effect of sample disorder Typical FM/HM samples are polycrystalline and include internal defects, which are usually responsible for skyrmion pinning [8, 88]. To study the effects of sample defects on the skyrmion circular motion, we divide the nanoring into regions of random shape and size (grains) by implementing the Voronoi tessellation algorithm [88]. We attribute a random value to the perpendicular anisotropy \(K_{u}\) of each grain with a dispersion of 5% around the mean value. In addition to the dispersion in anisotropy values, another important parameter in polycrystalline samples is the ratio of the mean grain size \(\overline{D}_{g}\) to the skyrmion diameter \(D_{Sk}\) [50], which in our simulations is set to \(D_{Sk}=30\) nm. Figure 5 shows the results for the mean value of the number of skyrmion laps as a function of time. The average is extracted from 5 different Voronoi maps with \(\overline{D}_{g}\) equal to 10 nm and 60 nm, respectively. We fixed \(j\)=40 MA/cm\({}^{2}\), which gives the largest frequency of the output signal in the ideal sample (Fig. 2(f)). Our results demonstrate that for both small and large grain sizes the effect of the internal defects is negligible, thus proving the robustness of this application to structural disorder. ### Effect of thermal fluctuations Temperature effects are important factors introducing randomness in the dynamics of skyrmion motion [60, 82, 89]. The thermal effects are accounted for in Eq. (1) via a stochastic field \(\mathbf{h}_{\mathrm{th}}\), which is applied at each computational cell and reads \(\mathbf{h}_{th}=\frac{\mathbf{\chi}}{M_{s}}\sqrt{2\alpha_{G}k_{B}T/(\mu_{0}\gamma_{0}\Delta VM_{s}\Delta t)}\,,\) with \(k_{B}\) being the Boltzmann constant, \(\Delta V\) the volume of the computational cubic cell, \(\Delta t\) the simulation time step, \(T\) the temperature of the sample, and \(\mathbf{\chi}\) a three-dimensional white Gaussian noise with zero mean and unit variance [90, 91]. Values of the stochastic field in different cells are uncorrelated. Figure 6 compares the \(<\)\(m_{z}\)\(>\) for a single skyrmion in the nanoring at \(T\)=0 K with a typical signal at \(T\)=300 K. The latter is noisy due to random thermal fluctuations, and a running-average method with a time window of 1 ns was used to extract the smooth signal. Figure 6(c) shows the number of skyrmion laps as a function of time. 
The frequency \(\bar{f}\) at \(T\)=300 K has been averaged over 10 realizations of the thermal disorder, and the corresponding dispersion is illustrated by the horizontal error bars. The micromagnetic results predict an average frequency \(\bar{f}=49\) MHz at \(T\)=300 K, which is more than twice the frequency \(f=20\) MHz at \(T\)=0 K. Examination of the magnetization snapshots showed that, in the early stage of the motion (up to \(\sim\)20 ns), the skyrmion develops an outward radial velocity that drives it to the outer boundary, where it continues the circular motion with a higher velocity. We attribute this higher velocity to the synergy between the current-driven motion and the temperature-driven gyrotropic motion of the skyrmions that has been previously studied in nanodots [81]. We have also checked that the width of the nanoring does not modify this behavior by performing simulations in wider nanorings (\(d\)=460 nm, \(w\)=190 nm), which produce similar frequency-enhancement results. Therefore, not only can our proposed application work at room temperature, but its performance in terms of achievable frequency is also better. ## IV Numerical results - Skyrmion alternator and skyrmion energy harvester The second part of the work is dedicated to the design of a skyrmion alternator and a skyrmion energy harvester. Both applications share the same geometry (Figs. 1(b) and (c)) and working principle for reading the signal, but different driving mechanisms for the skyrmion circulating motion. In particular, here the motion is driven by gradients in the parameters (anisotropy, temperature) and not by an electrical current. ### Skyrmion alternator For the skyrmion alternator, we considered a free-energy input source related to the presence of an engineered linear perpendicular anisotropy gradient. Its effect in extended FMs has already been analyzed experimentally and theoretically [15, 16], thus making it a viable alternative to the electrical current. In particular, the nanoring-shaped FM can be designed to have a radial gradient of \(K_{\rm u}\) (see Fig. 1(b) for more details). Based on our previous results [16], where skyrmions move mainly perpendicularly to the \(K_{\rm u}\) gradient direction, we expect the skyrmion to circulate in the nanoring; every time it passes underneath a Faraday coil, the time variation of the stray-field flux \(\Phi\) is converted into a voltage pulse at the terminals of the coil due to the well-known Faraday-Neumann-Lenz law \(v_{L}=-\,d\Phi/dt\). In other words, the skyrmion rotation gives rise to an ac voltage, similarly to what occurs in the majority of today's electrical generators, where the mechanical rotation of a rotor generates an ac voltage. ### Skyrmion energy harvester The skyrmion motion in the energy harvester is instead promoted by a linear temperature gradient (Fig. 1(c)). Thermal gradients have already been proven to be a reliable motion source for skyrmions [17, 19]. In particular, previous theoretical results pointed out that thermal gradients can be taken into account via linear gradients of the magnetic parameters (\(A\), \(D_{m}\), \(K_{u}\), \(M_{s}\)) computed with proper scaling relations with temperature [19] (see also paragraph II.C before Eq. (5)). The combination of all the parameter gradients gives rise to a skyrmion motion mostly perpendicular to the gradient direction. With this in mind, our idea is to build the skyrmion-based nanoring around a thermal source, such as a microprocessor (Fig. 1(c)). 
Then, the dissipated thermal energy propagates radially towards the outer boundary of the nanoring, and we assume that the heat propagation induces a linear temperature gradient (hotter on the inside and colder on the outside), which is expected to drive the circular motion of the skyrmion. Similar to the case of the skyrmion alternator, this motion can be converted into an ac voltage. In other words, the proposed skyrmion energy harvester can partially recover dissipated heat in the form of electrical energy. Here, we consider a point source for the thermal gradient in order to have a radial distribution; the concept is also applicable to arbitrary spatial distributions of thermal sources, which, however, generate nonperiodic output signals across the coil. ### Conversion of the skyrmion motion into an electrical signal For both cases of the skyrmion alternator and the skyrmion harvester, we are only interested in the maximum skyrmion velocity for given material parameters, since it determines the value of the generated voltage pulse. Therefore, we do not consider the effect of thermal fluctuations or internal material defects, which we expect to modify our results quantitatively (see paragraphs III.C and III.D), e.g. through a smaller skyrmion velocity, but not qualitatively. To this aim, we developed a post-processing tool for the calculation of the voltage pulse \(v_{L}=-\,d\Phi/dt\) generated by the skyrmion motion. As a first step, we performed micromagnetic simulations with the parameters as in Ref. [81]. For the \(K_{u}\) gradient, we considered a minimum (maximum) value at the inner (outer) edge of the nanoring equal to 0.6 MJ/m\({}^{3}\) (0.8 MJ/m\({}^{3}\)). For the thermal gradient, we considered a minimum (maximum) value at the outer (inner) edge of the nanoring equal to 100 K (300 K). In both cases, the skyrmion rotates in the nanoring with a velocity \(v_{\rm{sk}}\approx 20\) m/s. These simulations are needed to: (_i_) confirm a steady-state size of the skyrmion while moving, and (_ii_) use that value of the skyrmion velocity as a reference for the post-processing tool in order to predict how the generated voltage changes with the velocity around that value (see Fig. 7). We computed the spatial distribution of the stray field due to the skyrmion up to 50 nm away from the nanoring surface, and considered a Faraday coil with a diameter similar to the skyrmion diameter. Figure 7(a) shows the change of the stray-field flux through the Faraday coil, located at a height \(h\)=20 nm from the surface of the nanoring, due to the passage of a skyrmion below it. The flux \(\Phi\) becomes negative as the skyrmion enters the region below the coil, and symmetrically goes back to a positive value as the skyrmion moves out of the coil region. The negative flux value is due to the skyrmion polarity, which is negative in this study (\(-z\) direction). The corresponding nV voltage pulse is illustrated in Fig. 7(b). This simple numerical experiment proves the working principle of the proposed device as an alternator and energy harvester. The amplitude of the voltage can be linearly tuned by the skyrmion velocity (Fig. 7(c)), which is directly linked to the amplitude of the anisotropy and temperature gradients, and it decreases non-linearly with the height of the coil from the nanoring surface, as shown in Fig. 7(d). To obtain a steady-state ac voltage signal, we can symmetrically deploy multiple skyrmions. Indeed, Fig. 7(e) depicts the output voltage due to the circulating motion of 10 skyrmions. 
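A toy Python sketch of this flux-to-voltage post-processing step is given below. The Gaussian stray-field profile, its amplitude, the coil radius and the coil height are illustrative assumptions introduced only to show the working principle; the actual tool uses the stray field computed from the simulated magnetization.

```python
import numpy as np

# Toy sketch of the flux/voltage post-processing: a skyrmion-like dip in B_z sweeps
# under a circular Faraday coil and v_L = -dPhi/dt is evaluated numerically.
# Field profile, coil size and velocity below are assumed values, not simulation output.
coil_R = 15e-9                      # coil radius (m), assumed comparable to the skyrmion size
v_sk   = 20.0                       # skyrmion velocity (m/s), same order as quoted in the text
B0, sigma = -5e-3, 15e-9            # peak stray field (T) and lateral width of the dip (assumed)

x = np.linspace(-coil_R, coil_R, 61)
X, Y = np.meshgrid(x, x, indexing="ij")
inside = X**2 + Y**2 <= coil_R**2   # pixels belonging to the coil aperture
dA = (x[1] - x[0])**2

t  = np.linspace(-5e-9, 5e-9, 501)  # time axis (s)
xc = v_sk * t                       # skyrmion core position relative to the coil center
Phi = np.array([np.sum(B0 * np.exp(-((X - xi)**2 + Y**2) / (2 * sigma**2)) * inside) * dA
                for xi in xc])      # stray-field flux through the coil at each instant
v_L = -np.gradient(Phi, t)          # Faraday-Neumann-Lenz voltage (V)
print(f"peak |v_L| ~ {np.max(np.abs(v_L)) * 1e9:.1f} nV")
```

With these assumed numbers the peak voltage comes out in the nV range for a single layer, consistent with the order of magnitude discussed for Fig. 7(b).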
In addition, a strategy to enhance the \(v_{L}\) amplitude is to use \((\text{HM}_{1}/\text{FM}/\text{HM}_{2})_{n}\) ferromagnetic multilayers [8, 9, 24, 25, 26, 27, 28, 29]. The asymmetric \(\text{FM}/\text{HM}_{1,2}\) interfaces enhance the iDMI, leading to increased skyrmion stability. Hybrid (i.e. thickness-dependent) or homochiral (i.e. pure Néel) skyrmions extending through all FM layers can form in such a multilayer [92, 93], as a trade-off among the anisotropy, magnetostatic and iDMI energies. A skyrmion extending through all layers produces larger changes in the magnetic flux when it crosses the Faraday coil, leading to larger values of the voltage amplitude. Therefore, magnetic multilayers, instead of a single FM/HM bilayer, could be used to enhance the amplitude of the voltage pulse. In our study, we considered the physical parameters of Ref. [81] and the same geometrical parameters as in Fig. 1 for the FM layers. Successive FM layers are separated from each other by a 1 nm non-magnetic layer. We relax a skyrmion as a function of the number of layers and we obtain a pure Néel skyrmion in all cases. We compute the stray field and, from this, the \(v_{L}\) amplitude. Figure 7(f) shows that \(v_{L}\) increases linearly with the number of layers, up to almost half a \(\mu\)V. Finally, we wish to mention that we are considering only one turn in the Faraday coil, but the output voltage increases linearly with the number of turns. For instance, with 10 turns, the voltage can easily be increased to more than 1 \(\mu\)V. We have also compared the simulated skyrmion velocities in device-2 and device-3 with the developed Thiele formalism, given by Eq. (7), with \(F(r)=F_{K}(r)\) for device-2 and \(F(r)=F_{T}(r)\) for device-3. For a skyrmion moving along the mid-circle of the circular track, \(r=R_{0}+w/2\) and the skyrmion velocity is calculated to be \(v_{\text{sk}}=37\) m/s for device-2 and \(v_{\text{sk}}=58\) m/s for device-3. These values are of the same order but higher than the simulation results (\(v_{\text{sk}}\approx 20\) m/s). We attribute the enhanced velocity in the analytical model to the absence of the repulsive force from the inner boundary of the nanoring as the skyrmion approaches it. In particular, for the parameters used, both gradient-induced forces \(F_{K}(r)\) and \(F_{T}(r)\) point in the inward radial direction, while the edge force points in the outward radial direction. To take this effect into account, we have considered a fitting factor in the radial forces as \(F(r)\rightarrow n_{f}F(r)\). The analytical calculations are in good agreement with the simulation results for \(n_{f}\approx 0.5\) in device-2 and \(n_{f}\approx 0.3\) in device-3, showing that the edge force plays a key role in the skyrmion dynamics of those devices, as already shown for racetrack memories [39]. ## V Summary and Conclusions We have demonstrated the versatile potential usages of skyrmions in nanorings by combining numerical micromagnetic simulations and analytical calculations based on Thiele's equation. In particular, we have proposed and examined three applications that rely on the conversion of the skyrmion circulating motion into an electrical signal: (1) a skyrmion clock, where the skyrmion motion driven by a radially-flowing current generates periodic voltage pulses when the skyrmion passes below an MTJ. 
The frequency of the device can be tuned via the current, the material parameters, and the number of skyrmions, reaching 200 MHz. This design allows for an intrinsic spatial synchronization of the clock frequency; (2) a skyrmion alternator, where the skyrmion motion driven by an engineered anisotropy gradient (no electrical input) generates a voltage pulse close to the \(\mu\)V order due to the variation of the stray-field flux through a Faraday coil. This idea is analogous to today's electrical generators, where the mechanical rotation of a rotor generates an electrical voltage. We anticipate that gradients of other parameters, such as a magnetic-field gradient and/or an iDMI gradient, can also be used for the development of skyrmion alternators; (3) if the skyrmion motion is driven by thermal gradients, an energy harvester can be designed to partially recover the dissipated heat from a thermal source, such as a microprocessor. We note that experimental studies of those nanoring geometries can use low-damping ferromagnetic materials, such as amorphous CoFeB [60]. Stacks hosting hybrid skyrmions can be designed in such a way that the skyrmion Hall angle is reduced, which optimizes the performance of the nanoring-based devices by increasing the skyrmion velocity. This is reflected not only in a faster clock, but also in an enhancement of the voltage generated by the skyrmion motion. In addition, we wish to stress that those skyrmion-hosting materials can be integrated with MTJ stacks, as already demonstrated experimentally, making those device configurations more feasible [80]. At this stage of the research, it is not possible to perform a quantitative performance comparison between skyrmionic devices and existing clock solutions, since the development of skyrmionic devices is still at the level of critical-function or proof-of-concept establishment (Technology Readiness Level 3). From a qualitative point of view, the skyrmion-based clock implementation can be used to reduce the clock skew, enforcing the time synchronization between different parts of the circuit. In other words, the skyrmion, being a soliton, can travel with reduced distortion and can be used locally for the generation of the clock timing. Our vision for the skyrmion alternator is the possibility of creating a new generation of nano-engines, whose potential scalability at the nanoscale would be the main advantage compared to current technology. The same idea of scalability at the nanoscale applies to the concept of energy harvesting. In this field, current technology faces several challenges, and skyrmion-based energy harvesting can open a new direction in this research field [94]. ## Acknowledgments DK acknowledges financial support from the Special Account for Research of the School of Pedagogical and Technological Education through program "_Educational and Research Infrastructure Support_" (No 52922) and hospitality by the Politecnico di Bari during the course of this work. The research has been supported by the project PRIN 2020LWPKH7 funded by the Italian Ministry of Research. RT, VP, MC and GF are with the PETASPIN team and thank the support of PETASPIN association (www.petaspin.com). AR acknowledges the kind hospitality in the University of Messina, during the course of this work and financial support from CIP2022036. The work carried out at Tsinghua University was supported by the Basic Science Center Project of National Natural Science Foundation of China (NSFC Grant No. 
52388201), National Key R&D Program of China (Grant No. 2022YFA1405100), the NSFC distinguished Young Scholar program (Grant No. 12225409), the Beijing Natural Science Foundation (Grant No. Z190009), the NSFC (Grant Nos. 52271181, 51831005), the Tsinghua University Initiative Scientific Research Program and the Beijing Advanced Innovation Center for Future Chip (ICFC).
2309.14718
Optimizing delegation between human and AI collaborative agents
In the context of humans operating with artificial or autonomous agents in a hybrid team, it is essential to accurately identify when to authorize those team members to perform actions. Given past examples where humans and autonomous systems can either succeed or fail at tasks, we seek to train a delegating manager agent to make delegation decisions with respect to these potential performance deficiencies. Additionally, we cannot always expect the various agents to operate within the same underlying model of the environment. It is possible to encounter cases where the actions and transitions would vary between agents. Therefore, our framework provides a manager model which learns through observations of team performance without restricting agents to matching dynamics. Our results show our manager learns to perform delegation decisions with teams of agents operating under differing representations of the environment, significantly outperforming alternative methods to manage the team.
Andrew Fuchs, Andrea Passarella, Marco Conti
2023-09-26T07:23:26Z
http://arxiv.org/abs/2309.14718v2
# Optimizing Delegation Between Human and AI Collaborative Agents+ ###### Abstract In the context of humans operating with artificial or autonomous agents in a hybrid team, it is essential to accurately identify when to authorize those team members to perform actions. Given past examples where humans and autonomous systems can either succeed or fail at tasks, we seek to train a delegating manager agent to make delegation decisions with respect to these potential performance deficiencies. Additionally, we cannot always expect the various agents to operate within the same underlying model of the environment. It is possible to encounter cases where the actions and transitions would vary between agents. Therefore, our framework provides a manager model which learns through observations of team performance without restricting agents to matching dynamics. Our results show our manager learns to perform delegation decisions with teams of agents operating under differing representations of the environment, significantly outperforming alternative methods to manage the team. Keywords:Reinforcement Learning Markov Decision Process Delegation Learning to defer Hybrid Decision-making ## 1 Introduction Assuming a context with humans working directly in collaboration with autonomous/artificial agents, it is essential to enable a team dynamic eliciting the best combined performance or reduce agent-specific costs. For example, think of an autonomous car, where a human driver or an AI agent can take decisions on the next driving action. It is well known [1, 19] that neither the human nor the agent is always the best choice, as either can make mistakes depending on the driving context. Therefore, it is of utmost importance to design a _delegation policy_ deciding, at any point in time, who between the human driver and the AI agent should operate the car. More specifically, a reasonable goal is to
2303.00050
Dynamic Multi-View Scene Reconstruction Using Neural Implicit Surface
Reconstructing general dynamic scenes is important for many computer vision and graphics applications. Recent works represent the dynamic scene with neural radiance fields for photorealistic view synthesis, while their surface geometry is under-constrained and noisy. Other works introduce surface constraints to the implicit neural representation to disentangle the ambiguity of geometry and appearance field for static scene reconstruction. To bridge the gap between rendering dynamic scenes and recovering static surface geometry, we propose a template-free method to reconstruct surface geometry and appearance using neural implicit representations from multi-view videos. We leverage topology-aware deformation and the signed distance field to learn complex dynamic surfaces via differentiable volume rendering without scene-specific prior knowledge like template models. Furthermore, we propose a novel mask-based ray selection strategy to significantly boost the optimization on challenging time-varying regions. Experiments on different multi-view video datasets demonstrate that our method achieves high-fidelity surface reconstruction as well as photorealistic novel view synthesis.
Decai Chen, Haofei Lu, Ingo Feldmann, Oliver Schreer, Peter Eisert
2023-02-28T19:47:30Z
http://arxiv.org/abs/2303.00050v1
# Dynamic Multi-View Scene Reconstruction Using Neural Implicit Surface ###### Abstract Reconstructing general dynamic scenes is important for many computer vision and graphics applications. Recent works represent the dynamic scene with neural radiance fields for photorealistic view synthesis, while their surface geometry is under-constrained and noisy. Other works introduce surface constraints to the implicit neural representation to disentangle the ambiguity of geometry and appearance field for static scene reconstruction. To bridge the gap between rendering dynamic scenes and recovering static surface geometry, we propose a template-free method to reconstruct surface geometry and appearance using neural implicit representations from multi-view videos. We leverage topology-aware deformation and the signed distance field to learn complex dynamic surfaces via differentiable volume rendering without scene-specific prior knowledge like template models. Furthermore, we propose a novel mask-based ray selection strategy to significantly boost the optimization on challenging time-varying regions. Experiments on different multi-view video datasets demonstrate that our method achieves high-fidelity surface reconstruction as well as photorealistic novel view synthesis. Decai Chen\({}^{1}\), Haofei Lu\({}^{1,2}\), Ingo Feldmann\({}^{1}\), Oliver Schreer\({}^{1}\), Peter Eisert\({}^{1,3}\)\({}^{1}\)Fraunhofer HHI \({}^{2}\)TU Berlin \({}^{3}\)HU Berlin Multi-view reconstruction, neural dynamic surface, ray selection ## 1 Introduction Recent success of deep learning techniques in 2D domain has sparked a surge of interest in higher dimensional problems, such as 3D computer vision tasks and their application fields in VR/AR, movie production, games etc. For instance, the neural radiance field (NeRF) [1] demonstrates that Multi-Layer Perceptron (MLP) neural networks can represent a scene by implicitly encoding the geometry and appearance into the parameter of networks. The learned model allows free-viewpoint photorealistic novel view synthesis through traditional volume rendering techniques. While NeRF was initially designed for static content, subsequent dynamic works either condition the neural field with additional temporal input [2, 3, 4], or jointly optimize a deformation field and a neural radiance network [5, 6, 7]. Nevertheless, compared to remarkable achievements in view synthesis, their performance in geometry representation is relatively unsatisfying. To address this problem, several works [8, 9, 10, 11] employ the signed distance function(SDF) for implicit surface representation. Compared to density, SDF can be more efficiently regularized, relieving the entanglement between shape and appearance. However, these methods only reconstruct static scenes instead of more commonly seen dynamic ones. Another closely related research topic is recovering dynamic humans from videos using an articulated human body model such as SMPL [12]. Although these methods [13, 14, 15, 16, 17, 18] have demonstrated impressive efficacy in view synthesis and reconstruction in terms of the human body, the requirement of human priors like the SMPL model hinders them from generalizing to other dynamic scenes. General dynamic scenes are challenging but widely used in applications, such as interaction with props and objects like balls, loose garments (skirts, cloaks), and movements of animals or robots. 
To overcome the above limitations, we propose DySurf, a neural multi-view dynamic surface reconstruction method without prior knowledge of the target shape. We map the spatial points along a camera ray from observation space to the canonical space, which is explicitly parameterized by the SE(3) field computed from a deformation network. This enables the model to share the coherence of geometry and appearance information across time. To handle challenging topology changes in dynamic scenes, we adopt the design of a hyperdimensional network [19]. For rendering, we employ the neural SDF and radiance field to represent geometry and appearance in the canonical space. To recover dynamic areas with fast motion, we propose a novel ray selection strategy that assigns a higher sampling probability to pixels of interest derived from dynamic masks. We show the performance of our method across different scenarios on a public dataset (GeneBody [20]) as well as a multi-view dataset captured by ourselves. With the focus on dynamic surface reconstruction, we also demonstrate the capability of our work on view synthesis. In summary, the main contributions of this work are as follows: 1) An end-to-end framework called DySurf for topology-aware dynamic surface reconstruction from multi-view videos without templates; 2) A novel mask-based ray selection strategy to boost the optimization by focusing more on the time-varying foreground region; 3) Extensive evaluation of high-fidelity surface reconstruction on the public GeneBody dataset as well as our captured dataset. ## 2 Method The overview of our method is demonstrated in Figure 1. Our goal is to reconstruct high-fidelity time-varying surfaces from multi-view videos. The inputs are multi-view RGB image sequences \(\{I_{i,j},S_{i,j}:i\in[1,N],j\in[1,M]\}\) containing \(N\) frames and \(M\) views, where \(I_{i,j}\) is the color image of the \(i\)-th frame from the \(j\)-th view and \(S_{i,j}\) is the corresponding object segmentation mask. Figure 1: Overview of our approach. Masks can be obtained by an off-the-shelf foreground-background segmentation framework, e.g., BackgroundMattingV2 [21]. Camera parameters including intrinsics and extrinsics are also provided. They are typically calculated from a dedicated calibration process. Given the above inputs, we aim to create a time-coherent 4D representation of the dynamic scene, from which we can recover high-quality surface geometry and synthesize realistic novel-view images. ### Neural Dynamic Surface Representation **Deformation Field**. In deformation-based dynamic scene representation, it is important to properly define the connection between the time-varying observation space, where the cameras capture the scene, and the canonical space, where the underlying geometry and appearance can be queried. Given the learnable per-frame latent deformation code \(\varphi_{i}\), we model the spatial deformation using an SE(3) field network [6] \(T:(\mathbf{x},\varphi_{i})\rightarrow(\hat{\mathbf{r}},\hat{\mathbf{t}})\), where \((\hat{\mathbf{r}},\hat{\mathbf{t}})\in\mathbb{R}^{6}\) is a 6-DOF vector in the continuous SE(3) field parameterizing the rotation and translation, respectively. Inspired by HyperNeRF [19], we utilize a hyper-coordinates network \(H:(\mathbf{x},\varphi_{i})\rightarrow\mathbf{w}\in\mathbb{R}^{m}\) to gain more flexibility for representing various topologies of dynamic scene surfaces, by extending the conventional 3D canonical space with \(m\) additional dimensions. 
To summarize, the mapping from a 3D sampled point \(\mathbf{x}\) in the observation space (also known as deformed space) to the hyper canonical-space coordinates \((\mathbf{x}^{\prime},\mathbf{w})\) is defined as: \[(T(\mathbf{x},\varphi_{i}),H(\mathbf{x},\varphi_{i}))\rightarrow(\mathbf{x}^{\prime},\mathbf{w})\in\mathbb{R}^{3+m}. \tag{1}\] **Neural SDF and Radiance Field**. In this work, we model the surface geometry using an SDF network \(F:(\mathbf{x}^{\prime},\mathbf{w})\rightarrow(d,\mathbf{z})\), where \(\mathbf{z}\in\mathbb{R}^{q}\) is the geometry feature used to condition the radiance network \(R\). In addition, we calculate the surface normal using the gradient from auto-differentiation, \(\mathbf{n}=\nabla F(\mathbf{x}^{\prime},\mathbf{w})\), to better disentangle the geometry and appearance fields. For volume rendering, we follow VolSDF [8] to transform the SDF value \(d\) to the density \(\sigma\) used in volume integration. To offset the variation of illumination and exposure across different input frames and camera views, we additionally condition the radiance network on appearance codes \(\psi_{i}\) and \(\chi_{j}\) for the \(i\)-th frame and the \(j\)-th view. Finally, the radiance network \(R\) can be formulated as: \[R(\mathbf{x}^{\prime},\mathbf{w},\mathbf{n},\mathbf{z},\mathbf{v},\psi_{i},\chi_{j})\rightarrow\mathbf{c}. \tag{2}\] To approximate the ray rendering integral via discretization, we sample \(N_{s}\) points along each ray based on an error bound for the opacity approximation [8]. Finally, the volume-rendered color for a pixel is obtained by alpha blending over all sampled colors \(\mathbf{c}\) along the ray. ### Mask-based Ray Selection Previous implicit neural reconstruction methods in both monocular [5, 6, 19] and multi-view [22, 9, 8] settings sample a certain number of pixels uniformly over the whole input image to generate a batch for training. In this case, a large proportion of selected pixels may fall into the background, which usually dominates an image but contributes little to the reconstruction result. Therefore, we develop a novel mask-based ray selection algorithm for volume rendering to assign a higher probability to the regions of interest, especially the time-varying foreground areas. The key to the ray selection strategy is to design a probability map to guide the pixel sampling. One simple solution is dividing the probability map into two constant parts according to the segmentation mask at the current frame (e.g., Fig. 2(b)), with a higher value for the foreground region. We call this solution "naive ray selection" in Section 3.3. However, the motion of foreground objects is generally not uniform. For instance, in Fig. 2(a), where a standing person is throwing up a ball, the movements of the ball and human hands are more significant than other parts of the foreground objects, and thus require more attention during optimization. Assuming the cameras are fixed over time, we propose a temporally global strategy to find the time-varying region over frames. Specifically, for each view \(j\), we calculate a foreground frequency map \(Q_{j}\) by summing up the segmentation masks over all \(N\) frames before normalization: \[Q_{j}=\frac{1}{N}\sum_{i=1}^{N}S_{i,j}, \tag{3}\] where \(S_{i,j}\in\{0,1\}^{H\times W}\) is the segmentation mask of the \(i\)-th frame and the \(j\)-th view. Then we derive a dynamic motion map \(D_{j}\) from: \[D_{j}=\mu_{max}-(\mu_{max}-\mu_{min})Q_{j}. 
\tag{4}\] Since the frequency map \(Q_{j}\) is normalized to the interval \([0,1]\), the dynamic motion map \(D_{j}\) ranges from \(\mu_{min}\) to \(\mu_{max}\), where higher values represent more significant motion (see an example in Fig. 2(c)). Similarly, we also differentiate the background area. In our method, the rays in the background area are supervised by a binary cross-entropy loss to carve the empty space along the rays. Compared to the background regions far from the foreground objects, we argue that the closer areas are of more interest. To this end, we generate a buffer region in the background by morphologically dilating the object mask with a radius of \(r_{dilate}\) and then subtracting the original mask from the dilated one. Since this buffer region deserves more attention than the rest of the background, we assign it a higher probability for ray selection. Finally, the probability map \(P_{i,j}\) of the \(i\)-th frame and the \(j\)-th view for sampling rays is defined as: \[P_{i,j}(\mathbf{p})=\begin{cases}D_{j}(\mathbf{p})&\text{if }\mathbf{p}\in\mathcal{F}_{i,j}\\ \mu_{buffer}&\text{if }\mathbf{p}\in\mathcal{B}_{i,j}\\ \mu_{rest}&\text{if }\mathbf{p}\in\mathcal{R}_{i,j}\end{cases}, \tag{5}\] where \(\mathbf{p}\) denotes a pixel, while \(\mathcal{F}_{i,j}\), \(\mathcal{B}_{i,j}\) and \(\mathcal{R}_{i,j}\) are the sets of pixels in the foreground, buffer and remaining background regions, respectively. Note that the values in the probability map serve as the pixel sampling weights, and will be further normalized over the whole map to compute the final sampling probability. Therefore, only the relative values rather than the absolute ones between \(\mu_{max}\), \(\mu_{min}\), \(\mu_{buffer}\) and \(\mu_{rest}\) affect the ray selection. Figure 2: Development of probability map for ray selection. As shown in Figure 2(d), the time-varying regions, including the ball and hands, are of higher interest inside the foreground region, while the buffer area in the background is also given more attention than the rest. ### Loss Function Each batch of training data consists of \(N_{ray}\) rays sampled from one single image \(I_{i,j}\) of the \(i\)-th frame and the \(j\)-th view. For simplicity, we drop the indices \(i\) and \(j\) in this section. We first minimize the difference between the rendered colors in the foreground and the ground-truth ones, denoted by \(\mathcal{L}_{rgb}\). Then we compute a mask loss to supervise the geometry field in canonical space: \[\mathcal{L}_{mask}=\frac{1}{N_{ray}}\sum_{\mathbf{r}\in\mathcal{R}}\text{BCE}(\hat{S}(\mathbf{r}),S(\mathbf{r})), \tag{6}\] where \(\hat{S}\) is the volume-rendered mask and BCE is the binary cross-entropy loss. Similar to the elastic regularization in [6], we regularize the local deformations by encouraging a rigid motion using \(\mathcal{L}_{rigid}\). Lastly, we adopt the Eikonal loss [23] \(\mathcal{L}_{eikonal}\) to regularize the SDF gradients (i.e., surface normals \(\mathbf{n}\)) of both the ray-sampled points \(\mathcal{X}\) and an additional set of sample points \(\mathcal{P}\) distributed uniformly in the bounding volume. Finally, the total loss function is formed as: \[\mathcal{L}_{total}=\mathcal{L}_{rgb}+\lambda_{1}\mathcal{L}_{mask}+\lambda_{2}\mathcal{L}_{rigid}+\lambda_{3}\mathcal{L}_{eikonal}, \tag{7}\] where \(\lambda_{1},\lambda_{2},\lambda_{3}\) are the weights for the respective terms. 
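As a minimal sketch of the mask-based ray selection of Section 2.2, the NumPy/SciPy snippet below assembles the sampling probability map of Eqs. (3)-(5) for one view from a stack of binary masks. The snippet is ours (not the released code); it assumes `masks` is an (N, H, W) binary array for one fixed view, and the default hyper-parameters are the values quoted in the implementation details of Section 3.1.

```python
import numpy as np
from scipy import ndimage

def ray_selection_prob(masks, frame_idx, r_dilate=80, mu_max=1.0, mu_min=0.3,
                       mu_buffer=0.1, mu_rest=0.001):
    """Pixel-sampling probability map for one frame/view, following Eqs. (3)-(5)."""
    Q = masks.mean(axis=0)                         # foreground frequency map, Eq. (3)
    D = mu_max - (mu_max - mu_min) * Q             # dynamic motion map, Eq. (4)

    S = masks[frame_idx].astype(bool)              # current-frame segmentation mask
    yy, xx = np.ogrid[-r_dilate:r_dilate + 1, -r_dilate:r_dilate + 1]
    disk = (xx**2 + yy**2) <= r_dilate**2          # disk structuring element of radius r_dilate
    buffer = ndimage.binary_dilation(S, structure=disk) & ~S   # dilated mask minus mask

    P = np.full(Q.shape, mu_rest, dtype=np.float64)  # remaining background
    P[buffer] = mu_buffer                            # buffer region
    P[S] = D[S]                                      # foreground gets the motion map, Eq. (5)
    return P / P.sum()                               # normalized sampling probability
```

A training batch can then be drawn with `np.random.choice(H * W, size=N_ray, p=P.ravel())` on the flattened map, so that pixels in fast-moving regions are selected more often.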
## 3 Experiments ### Experimental Settings **Implementation Details.** The networks \(T,H,F,R\) are MLPs containing seven, seven, nine, and five fully-connected layers, respectively. We empirically set the number of hyper-coordinates \(m\) as 2. The deformation code \(\varphi_{i}\) and appearance codes \(\psi_{i},\chi_{j}\) all have 8 dimensions. We set the frequencies of the positional encoding [1] to 6, 4, and 1 for the spatial coordinates in observation and canonical space, the view direction, and the hyper-coordinates, respectively. In each training iteration, a batch contains \(N_{ray}=512\) rays, along which \(N_{s}=128\) spatial points are sampled. The loss weights are empirically set as \(\lambda_{1}=1.0,\lambda_{2}=0.05,\lambda_{3}=0.1\). Adam [24] is adopted for training, with a learning rate of \(2.5\times 10^{-4}\) for the deformation network \(T\) and \(5\times 10^{-4}\) for the rest of the parameters. For generating probability maps for ray selection, we set \(r_{dilate}\), \(\mu_{max}\), \(\mu_{min}\), \(\mu_{buffer}\) and \(\mu_{rest}\) to 80, 1, 0.3, 0.1 and 0.001, respectively. After training, surface meshes are extracted via Marching Cubes [25] at a resolution of \(512^{3}\) voxels. We conduct the training on a single NVIDIA GTX 1080 Ti GPU. The training on a video of 100 frames takes about 300k iterations (around 70 hours) to converge. **Datasets.** To evaluate the performance of our method, we train our model on both Genebody [20] and our own dataset. Both datasets consist of human performers of various ages, intricate gestures, clothing types, and accessories. In particular, Genebody [20] is captured by 48 evenly spaced cameras with a resolution of \(2448\times 2048\), while we only take 16 out of the 48 cameras for our experiments. In addition, we collect a volumetric video dataset with 16 uniformly distributed cameras at a resolution of \(5120\times 3840\) and 25 fps, while only the down-sampled images (\(1280\times 960\)) are used for training. All sequences have a length between 100 and 150 frames. Figure 3: Dynamic surface reconstruction results on our collected dataset (the first 2 rows) and the Genebody dataset (the last 2 rows). For each scene, we show two views (horizontally aligned) and two frames (vertically aligned). ### Comparison Since most state-of-the-art multi-view dynamic reconstruction approaches are template-based, we have evaluated Neural Body [17], A-NeRF [13] and AniSDF [14] on both mentioned datasets, as they also contain human-driven dynamic scenes, following the official code and instructions. However, we found that the performance of the last two methods on the above datasets is unsatisfying, so we only compare our method with Neural Body in this paper. Figure 3 demonstrates qualitative results for dynamic surface reconstruction. While Neural Body [17] fails to recover objects beyond human bodies and loose dresses, our method is robust to complicated motions with topological changes (e.g., taking off a jacket), and reconstructs high-fidelity surface meshes even for challenging thin objects, including the violin bow and the round handheld fan. In addition, our method requires no prior knowledge of the scene objects, such as parametric templates, and is thus able to reconstruct general dynamic scenes, including loose clothing and various props. Figure 4 shows a qualitative comparison on rendering novel views (excluded from the training) on different datasets. 
The rendering results from Neural Body suffer from ghosting artifacts and missing parts of objects (e.g., the basketball). In contrast, our method synthesizes photo-realistic images from novel views with more appearance details. To measure the rendering quality on testing views not used for training, we choose three metrics: peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and LPIPS [26]. Following previous works [17, 14], we set the values of the background pixels to zero. Instead of evaluating the whole images, which leads to meaninglessly high scores, we crop images using minimal bounding boxes of the corresponding foregrounds and only calculate the metrics on the cropped regions. As illustrated in Table 1, our method outperforms Neural Body by a large margin on novel view synthesis on both our collected dataset (Volumetric Videos) and the Genebody dataset. ### Ablation Study We present an ablation study to report the effect of the important components of our method on the final reconstruction performance. Figure 5 demonstrates the reconstruction results with different choices between hyper-coordinates (HC) and ray selection (RS) strategies. Fig. 5(b) shows that ablating the hyper-coordinates network (i.e., representing the canonical space by only 3D coordinates instead of adding more dimensions) fails to fully recover both arms due to their topological changes, which result from situations where the arms and the body touch or release each other. The evaluation of the ray selection shows that uniformly sampling rays over the whole image causes imbalanced training: the model is trained well for most parts of the body that remain approximately static over time, but the geometric details of the moving hands and head are missing (see Fig. 5(c)). As shown in Fig. 5(d), the naive ray selection strategy improves the result by increasing the weights for the foreground, but it pays no particular attention to the time-varying regions. In comparison, our temporally global ray selection strategy focuses on optimizing the dynamic regions and differentiates the background area, thus faithfully recovering the challenging moving objects, such as the fingers and the face (see Fig. 5(e)). In addition to surface reconstruction, our ray selection strategy also achieves the best rendering quality, as illustrated in Table 1. ## 4 Conclusions We have introduced DySurf, a template-free method for the reconstruction of general dynamic scenes from multi-view videos using a neural implicit surface representation. We employ a deformation field to warp observed frames into a static hyper-canonical space, which is jointly optimized with an SDF network and a radiance network through volume rendering. Our novel ray selection strategy allows the time-varying regions of interest to be trained specifically, which significantly improves the reconstruction quality. Extensive experiments show that our method outperforms a state-of-the-art template-based method on different datasets, achieving high-quality geometry reconstruction as well as photorealistic novel view synthesis. ## 5 Acknowledgement This work has partly been funded by the H2020 European project Invictus under grant agreement no. 952147 as well as by the Investitionsbank Berlin with financial support from the European Regional Development Fund (EFRE) and the government of Berlin in the ProFIT research project KIVI. 
\begin{table} \begin{tabular}{c|c c c|c c c} & \multicolumn{3}{c|}{Volumetric Videos} & \multicolumn{3}{c}{Genebody} \\ & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) \\ \hline Neural Body & 26.84 & 0.894 & 0.246 & 19.87 & 0.723 & 0.234 \\ \hline No HC; no RS & 25.83 & 0.872 & 0.280 & 20.25 & 0.714 & 0.252 \\ No RS & 27.44 & 0.902 & 0.178 & 24.81 & 0.803 & 0.189 \\ Naive RS & 27.26 & 0.909 & 0.175 & 25.15 & 0.820 & 0.171 \\ Ours & **28.33** & **0.923** & **0.143** & **26.66** & **0.841** & **0.145** \\ \end{tabular} \end{table} Table 1: Quantitative results of novel view synthesis. Figure 4: Qualitative comparison of novel view synthesis. Figure 5: Ablation of hyper-coordinates (HC) and ray selection (RS).
2309.16669
Training a Large Video Model on a Single Machine in a Day
Videos are big, complex to pre-process, and slow to train on. State-of-the-art large-scale video models are trained on clusters of 32 or more GPUs for several days. As a consequence, academia largely ceded the training of large video models to industry. In this paper, we show how to still train a state-of-the-art video model on a single machine with eight consumer-grade GPUs in a day. We identify three bottlenecks, IO, CPU, and GPU computation, and optimize each. The result is a highly efficient video training pipeline. For comparable architectures, our pipeline achieves higher accuracies with $\frac{1}{8}$ of the computation compared to prior work. Code is available at https://github.com/zhaoyue-zephyrus/AVION.
Yue Zhao, Philipp Krähenbühl
2023-09-28T17:59:50Z
http://arxiv.org/abs/2309.16669v1
# Training a Large Video Model on a Single Machine in a Day ###### Abstract Videos are big, complex to pre-process, and slow to train on. State-of-the-art large-scale video models are trained on clusters of \(32\) or more GPUs for several days. As a consequence, academia largely ceded the training of large video models to industry. In this paper, we show how to still train a state-of-the-art video model on a single machine with eight consumer-grade GPUs in a day. We identify three bottlenecks, IO, CPU, and GPU computation, and optimize each. The result is a highly efficient video training pipeline. For comparable architectures, our pipeline achieves higher accuracies with \(\frac{1}{8}\) of the computation compared to prior work. Code is available at [https://github.com/zhaoyue-zephyrus/AVION](https://github.com/zhaoyue-zephyrus/AVION). ## 1 Introduction Video understanding has witnessed remarkable advances in the past decade. Much of the progress on standard benchmarks [7] is powered by higher-capacity models [3, 7, 19, 80] trained on ever larger datasets [21, 47, 59, 82]. The result is an ever increasing training cost, exacerbated by the recent shift from convolutional [7, 18, 35, 77] to Transformer architectures [3, 5, 40, 44, 80]. For much of their evolution, video models followed the success of their image-based counterparts [16, 26, 29, 43]. However, working with videos offers a series of unique challenges: Videos are highly compressed, up to an order of magnitude more than images. Video decoding consumes a sizeable fraction of the overall computation in state-of-the-art training pipelines [20]. Finally, the decompressed soup of pixels grows not just quadratically with the input resolution, but also with temporal length. This puts a strain on pre-processing pipelines and significantly increases the GPU memory that a video model uses. In this paper, we examine the training pipeline of a modern video Transformer architecture [5, 42] from three perspectives: model, video loading, and video pre-processing, which are GPU-, IO-, and CPU-bound respectively. We find that there is plenty of room for improvement in all aspects. Through careful designs we improve the training time by almost an order of magnitude. From the model perspective, we start with the plain, non-hierarchical Vision Transformer and reduce the memory bottleneck from \(O(N^{2})\) to \(O(N)\), where \(N\) is the length of the cubified video tokens. We achieve this by adopting FlashAttention [15] which decomposes the whole sequences into SRAM-friendly blocks and combines the block-wise results into the final output without explicitly storing the full attention weight matrix. This results in a reduced per-video memory cost as well as an increased training throughput. The reduced per-instance memory footprint enables training a video model with a significantly larger batch size on a single multi-GPU server. This is particularly useful for training CLIP-style models [53] for videos, which typically requires as many as \(32{\sim}64\) GPUs or TPUs [42, 48] to construct a batch of \({\sim}1K\) video instances. The increased throughput, however, introduces additional challenges to video loading and pre-processing. In Figure 1: Over the past decade training time of state-of-the-art video models increased by two orders of magnitude, despite drastic improvements in GPU hardware. State-of-the-art video models train on 6 GPU-months to 14 GPU-years of computation on cutting-edge hardware. 
We show how to train an equally performant large video model in under a day on a machine with eight workstation GPUs. (Metrics not normalized for GPU generations). our pipeline, we redesign the video loader around a series of trimmed fixed-length chunks of a long-form video. Each chunk is still compressed using a modern video codec [55]. The GPU hardware determines the length of each chunk. A chunk-based representation reduces the IO bottleneck and increases the decoding speed. We merge the commonly used RandomResizedCrop operation into the video decoding stage as a cropping filter. This ensures the video decoder executes a minimal amount of decoding operations to retrieve the required data. Furthermore, we move all other data augmentations to the GPU to make use of its massive parallelism. We evaluate our pipeline on contrastive video-language pre-training on the Ego4D video-narrative pairs [25]. Our pipeline trains a contrastive video-language model on 4M video-text pairs with a total batch size of \(2K\) clips using **one**\(8\times\) A5000 (24GB) GPU server in **18** hours. The same model used to require \(32\times\) 40GB A100 GPUs to run for 2 days [42]. Our pipeline leads to a \(6.7\times\) reduction of memory consumption, \(11.8\times\) reduction of GPU-hours, and \(15\times\) reduction in hardware cost1. With an LLM-augmented set of video narratives [87], our model is able to achieve state-of-the-art performance on Epic-Kitchens 100 Multi-Instance Retrieval in both zero-shot and fine-tuning evaluation protocols. With a comparable model (ViT-Base _vs_. a TimeSformer-Base), our model is 2.0% higher in terms of zero-shot average mAP and 1.3% better after fine-tuning. Footnote 1: We only compare GPUs’ MSRP: One A5000 costs \(\sim\)\(\$2,600\) while one A100 costs \(\sim\)\(\$10,000\). Networking and distributed filesystem are likely to cost more for multi-node setup. Our optimized pipeline as an application works beyond large video-language modeling. We show additional results on training Video Masked Auto-Encoders (MAE) where data loading is a bottleneck, our techniques reduce the data-loading overhead by \(3\times\) and overall training time by \(35\%\). ## 2 Related Work **Computationally Efficient Video Recognition.** Video models that are inflated from image models through time are computationally expensive [7]. Architectural improvements include channel-wise separable convolution [64], temporal channel shuffling [41], dilated temporal convolution [30], depth-parallel pipelining [8], and progressive expansion of multiple network axes across space, time, width and depth [18]. Some works attempt to represent motion information using compressed videos [72, 84] or temporal difference [67] to avoid the expensive computation of optical flow in the two-stream network [58]. Other works focus on reducing the spatial-temporal redundancy in videos by selecting informative frames [36, 76] or regions [50, 69], and quantization [46, 60]. Training can be sped up by applying a multigrid schedule across variable spatial-temporal resolutions [73] or curriculum strategy [4]. Our contributions are complementary and focus on the IO and preprocessing bottlenecks on standard video training pipelines. Our main architectural improvements are Transformers [65] specific. **Efficient Transformers.** The dot-product attention in the original Transformer [65] requires quadratic computation in the input sequence length. 
This becomes prohibitive for long sequences, and many works focus on reducing this computation. Representative approaches include low-rank approximation of the attention matrix [12], sparsity patterns [83], reversible transform [34], query-based cross attention via memory [54] and recurrence [13], and kernel decomposition [33]. In video understanding, Transformers are tailored to modeling video sequences by (1) computing attention across separate dimensions [5, 51], (2) building hierarchy through shifted local windows [44] and multi-scale pooling attention [17]. MemViT [74] models \(30\times\) longer temporal support with a marginal extra cost via recurrence. RevViT [45] reformulates the self-attention block with reversible transform [24]. TeSTra [85] adopts temporal-smoothing kernels to process streaming videos with both constant computation and memory per frame. In contrast, we take a brute-force approach to the problem. In video transformers, the quadratic memory consumption is a much larger problem, than the actual increase in computation. We leverage recent advances in efficient implicit computation of the attention weights [31] implemented in a computationally efficient block-wise manner in FlashAttention [15]. FlashAttention eliminates the memory bottleneck and significantly increases the computation throughput of the attention operation. The result is a ViT-base network that is as efficient as factorized representations, but uses a fraction of the memory. Keeping the original ViT-base structure also allows us to make use of image-based pre-training either contrastive [53, 56] or self-supervised [6, 28]. **Memory-Efficient Video Models.** To fit longer video clips into GPUs, many approaches resort to extracting frame-level or short-clip features and building an additional model for temporal reasoning on top [47, 79, 88]. The performance of such models is heavily constrained by the representation capability from the frame-level model. For end-to-end video models, efforts that aim to reduce memory footprint include sparse sampling frames [86], dropping out gradients [11], and skipping intermediate blocks [75]. However, most of them either focus on the inference stage or speed up training a particular family of models. In contrast, our optimization on the IO- and CPU-bound operations should be applicable to all kinds of video models. ## 3 Preliminary Let \(\mathbf{x}\in\mathbb{R}^{3\times T\times H\times W}\) be a video clip of length \(T\) and resolution \(W\times H\). The goal of a video model is to analyze this clip and produce a \(d\)-dimensional output feature \(\mathbf{y}\in\mathbb{R}^{d}\). This feature may correspond to an embedding in large vision language models [42], a classifier in action recognition [19], or a generic feature map for down-stream applications [70]. **Video Transformer.** We focus much of our modeling assumptions and improvements on the joint space-time Vision Transformer (ViT) [16]. For any video clip \(\mathbf{x}\in\mathbb{R}^{3\times T\times H\times W}\), we first divide it into \(N=\frac{T}{t}\times\frac{H}{h}\times\frac{W}{w}\) non-overlapping cubes of size \(t\times h\times w\). For each cube, the ViT learns a visual token with \(D\) channels, and a space-time positional embedding in the form of learnable parameters \(\mathrm{PE}\in\mathbb{R}^{N\times D}\). Each visual token then passes through \(L\) Transformer Encoder layers, each of which contains a multi-head self-attention (MHA) layer and a 2-layer MLP. 
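As a concrete illustration of this tokenization, the short PyTorch sketch below cubifies a clip with a strided 3D convolution and adds the learnable space-time positional embedding. The cube size (t=2, h=w=16) and width D=768 are typical ViT-Base-style choices assumed here for illustration, not necessarily the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of the video-ViT cube embedding: a (T, H, W) clip is split into
# N = (T/t)*(H/h)*(W/w) non-overlapping cubes, each projected to a D-dim token.
class CubeEmbed(nn.Module):
    def __init__(self, T=16, H=224, W=224, t=2, h=16, w=16, D=768):
        super().__init__()
        self.proj = nn.Conv3d(3, D, kernel_size=(t, h, w), stride=(t, h, w))
        N = (T // t) * (H // h) * (W // w)
        self.pos_embed = nn.Parameter(torch.zeros(1, N, D))  # learnable space-time PE

    def forward(self, x):                             # x: (B, 3, T, H, W)
        tokens = self.proj(x)                         # (B, D, T/t, H/h, W/w)
        tokens = tokens.flatten(2).transpose(1, 2)    # (B, N, D)
        return tokens + self.pos_embed

tokens = CubeEmbed()(torch.randn(2, 3, 16, 224, 224))  # -> shape (2, 1568, 768)
```

The resulting token sequence (N = 1568 for these assumed sizes) is what the L encoder layers, and hence the attention operation discussed next, operate on.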
As we will show in the next section, the ViT is an ideal candidate for large-batch training. With minor architectural improvements, the ViT is more memory-efficient than more complex baselines [5, 17, 40, 43, 44]. At the same time, it is more than capable of reaching state-of-the-art performance on large video-language tasks. **Flash Attention.** Attention [65] computes a weighted average of the input features, whose weights are determined by the dot-product similarities between the key and query elements on an input sequence. For \(N\) keys and queries, a naive implementation of attention not only requires \(O(N^{2})\) computation but also \(O(N^{2})\) memory. This memory consumption matters for two reasons: First, it limits the maximum batch size. Second, most operations in attention are memory-bound and thus limit throughput. FlashAttention [15] resolves the memory bottleneck of the attention operations. First, it computes softmax weights implicitly, shrinking the overall memory footprint to \(O(N)\). Second, it computes attention in a block-wise fashion, making use of highly efficient on-chip SRAM caches. **Video Training Pipeline.** A typical training pipeline for video models works similarly to that for image models. First, it reads a video clip as a compressed bitstream and decodes the bitstream into a sequence of frames. Next, a subset of the frames is randomly selected, grouped into a tensor over time, and passed through a set of transformations, or data augmentations. Typical augmentations include (1) cropping into a fixed target size, RandomResizedCrop at training and CenterCrop at validation, (2) geometric augmentations such as Flipping and Rotation, (3) photometric augmentations such as ColorJittering and GaussianBlurring, and (4) normalization. Finally, the transformed tensors from all videos in the same batch are collated and fed into the video model. In this pipeline, loading videos is an IO-bound operation. Both decoding and transformations are CPU-intensive, while the model is executed on the GPU side. **Video Decoder.** A video decoder takes as input a compressed video bitstream and performs decompression on it. Decoding speed is determined by various factors, including (1) the size of the bitstream, (2) an efficient frame-seeking strategy to locate the closest key-frames, and (3) slice- or frame-level multi-threading. ## 4 Method Training of large video models is bottlenecked on two fronts: memory consumption and throughput. A model's memory consumption limits the maximum batch size, which in turn reduces throughput and even impacts the convergence rate of embedding-based training [10, 27, 49, 53]. In the absence of a dedicated video storage and decoding machine, standard IO and pre-processing pipelines are not able to keep up with the GPU throughput, especially on a multi-GPU node. Fig. 2 illustrates the impact of these bottlenecks on the training throughput. We show how to reduce each of these bottlenecks and obtain a video training pipeline that performs up to \(9\times\) faster. Figure 2: **Training throughput _vs._ number of GPUs using the standard training pipeline (_left_) and ours (_right_). In the standard training pipeline, the CPU throughput only doubles from single-GPU to 8-GPU scenario as GPUs starve. Our pipeline significantly increases both CPU and GPU throughputs. For a fair comparison, we keep a constant batch size.** ### A Memory-Efficient Video ViT Fig. 3 analyzes the overall memory consumption of the video ViT model.
In a plain video ViT, the attention operator dominates the overall memory consumption with \(>60\%\) of the memory use. We completely remove this memory footprint through the use of FlashAttention [15]. We can further trade computation for memory efficiency through gradient checkpointing [9]. Due to the isotropic nature of the Vision Transformer, where the output shape after each layer is identical throughout the network, the memory complexity can be reduced from \(O(LND)\) to \(O(\sqrt{L}ND)\) for \(L\) layers of \(N\) tokens of dimension \(D\). **Discussion.** With sufficient memory optimization, the plain Video ViT is a very memory- and computation-efficient video architecture. Due to the efficient block-wise, GPU-accelerated implementation of FlashAttention, the potential cubic computational cost of attention in a Video ViT is not an issue for the workloads we experimented with. (1) Compared to anisotropic (or hierarchical) Transformer architectures, _e.g_. Swin [43, 44] or MViT [17, 40], ViT contains fewer memory-bound operations, such as window-shifting and pooling. (2) Compared to another isotropic architecture, TimeSformer [5], which reduces FLOPs by conducting spatial-only or temporal-only attention separately, ViT has a smaller per-sample memory footprint with gradient checkpointing since the model parameters and number of attention layers are halved. We illustrate this effect in Fig. 4. A memory-efficient ViT with FlashAttention achieves \(1.7\times\) the throughput of the baseline and a \(3\times\) larger batch size. Gradient checkpointing increases the batch size by \(13.8\times\), at the cost of slightly reduced throughput (\(1.4\times\)). (3) Finally, the ViT benefits from large-scale pre-trained image models on vision-language tasks [53] or self-supervised objectives [28]. Starting from pre-trained image models significantly speeds up training on videos. ### Increasing CPU Utilization in Pre-processing With a larger batch size, video pre-processing becomes a clear bottleneck. Without dedicated hardware solutions, the CPU on a single-node server is simply not able to supply eight GPUs with sufficient data, and thus largely starves the GPUs. This effect is highlighted in Fig. 5. At its peak, a video ViT is able to process \(60\)\(\sim\)\(70\) video clips per second per GPU or \(400\)\(\sim\)\(500\) clips per second on an 8-GPU node. A standard video training pipeline supplies at most \(100\)\(\sim\)\(120\) clips per second, thus utilizing GPUs at \(\sim\)\(25\%\). Increasing the number of worker threads only marginally improves the pipeline efficiency. As shown in Fig. 4(a), a standard video pipeline spends the majority of its computation on decoding and the random resized cropping (RRC) operation. It first completely decodes a larger-than-needed video clip, and subsequently crops it, both of which are CPU and CPU-memory intensive operations. To address this, we propose to merge RRC into the video decoding stage as a cropping filter. **RandomResizedCrop (RRC).** RRC [61] takes as input three tuples, namely the target size \((H_{t},W_{t})\), scale range \((s_{min},s_{max})\), and aspect ratio range \((r_{min},r_{max})\). First, it computes the area of the frame \((HW)\).
Second, it randomly samples a target area \(A\) and aspect ratio \(r\) by \(A\sim U(s_{min}HW,s_{max}HW),r\sim U(r_{min},r_{max})\) so that the cropping size is: \[W_{crop}=\lfloor\sqrt{Ar}\rceil,H_{crop}=\lfloor\sqrt{A/r}\rceil \tag{1}\] Next, it randomly samples the left edge and the top edge: \[x=\lfloor U(0,W-W_{crop})\rceil,y=\lfloor U(0,H-H_{crop})\rceil. \tag{2}\] Finally, the cropped output \(\mathbf{x}[:,:,y:y+H_{crop},x:x+W_{crop}]\) is rescaled to \(\mathbf{x}^{\prime}\in\mathbb{R}^{T\times 3\times H_{t}\times W_{t}}\). Figure 4: **Throughput and maximum batch size for a video-text Dual-Encoder model [87] using a TimeSformer-Base (TSF-B) and ViT-Base (ViT-B) architecture. We use \(4\) input frames. The numbers are measured on a single A5000 (24GB) GPU using torch.float16. All input data is kept on the GPU memory for benchmarking purposes only.** Figure 3: **Memory footprint of the Video ViT for an input clip of resolution \(224\times 224\) and \(4\) frames, and cube size \(16\times 16\times 1\) without a temporal extent. Longer clips exhibit a similar memory footprint. We consider three variants: a plain ViT baseline, a ViT with FlashAttention [15], and a ViT with FlashAttention and gradient checkpointing [9]. The ViT features three layers that consume memory: LayerNorm, Multi-Head Attention (MHA), and Multi-Layer Perceptrons (MLP).** **RandomResizedCrop as a cropping filter.** The cropping region is only conditioned on the frame size \((H,W)\) and agnostic to the frame contents. We thus first generate cropping coordinates from the meta-data, specifically the width and height, of the video without decoding it. Next, we conduct decoding and cropping simultaneously by adding a cropping filter at the video decoder. This ensures that the video decoder executes the minimal amount of decoding operations to retrieve the data needed. Fig. 8 in §A illustrates the Pythonic pseudo-code. The resulting data-loader features close-to-linear scaling as the process pool increases from 8 to 64 processes (Fig. 5c). The latency only increases from 97 to 152 ms per video per process (Fig. 5a). **Beyond RandomResizedCrop.** Fused DecodeCrop naturally extends to most cropping-based augmentation, _e.g_. SimpleRandomCropping, which was first proposed in AlexNet [37] and recently reused in DeiT III [63] to great effect. After cropping, all tensors have a fixed shape and are readily batched. We move the data to the GPU at this stage and apply other augmentations, such as photometric augmentation and normalization, on the GPU. This eliminates the CPU bottleneck in current video training pipelines. The final bottleneck is disk IO, as most video datasets are too large to fit into memory. ### Eliminating IO bottleneck for Long Videos Long-term videos have become an increasingly popular resource for multi-modal contrastive pre-training [2, 48]. The most straightforward way is to trim the long videos according to the annotated timestamps beforehand. The drawbacks are twofold: (1) Trimming may increase the storage if there are multiple annotations in one video and the annotated clips overlap. (2) Trimming ignores the large proportion of unannotated parts, which could benefit the video representation through pseudo-labeling [87]. An attractive alternative is to split each input video into multiple fixed-length chunks [42, 87]. The length of these chunks is often chosen heuristically, _e.g_. \(T=5{\sim}10\ \mathrm{min}\) long.
The trade-offs are clear: Shorter chunks reduce the IO bottleneck. Longer chunks reduce potential duplication of the input data. Ideally, one chooses the largest chunk size that reduces the IO bottleneck. Let \(B\) denote the batch size, \(\rho\) denote the average bitrate of a video, \(S_{r}\) denote the maximum read speed, and \(\Delta\) denote the time of a training step. To hide the IO bottleneck from the training, we require the video model to consume fewer bits \(B\times\rho\times T\) than the disk can afford \(S_{r}\times\Delta\): \[B\times\rho\times T\leq S_{r}\times\Delta. \tag{3}\] Note that we only control the length \(T\) of each chunk. The bitrate \(\rho\) depends on the resolution and the codec. The maximum read speed \(S_{r}\) varies significantly according to the hardware, _e.g_. \(80\ \mathrm{MB/sec}\) for HDD, \(500\ \mathrm{MB/sec}\) for SATA SSD, and \(3\ \mathrm{GB/sec}\) for NVMe SSD. In our experimental setup, typical values are \(B=1024\), \(\rho=1\ \mathrm{Mb/sec}\), \(\Delta=4\ \mathrm{sec}\) and \(S_{r}=500\ \mathrm{MB/sec}\), which leads to \(T\leq 16\ \mathrm{sec}\). We use 15-second chunks in practice to avoid GPU starvation due to fluctuations in the disk read speed. For most video tasks, the size of the video clip fed into the network is much smaller than our chunk size. The pipeline thus avoids having to read multiple consecutive chunks. ## 5 Experiments To show the effectiveness of our expedited training pipeline, we conduct video-language pre-training on the Ego4D egocentric video dataset and evaluate the performance on Epic-Kitchens 100 (EK-100). We summarize dataset statistics and evaluation protocols in §5.1. Experimental setups including the model configuration and the hardware specifications are elaborated in §5.2. After discussing the main results in §5.3 and ablation studies in §5.4, we present an application of our optimization techniques to other representative video models in §5.5. Figure 5: **CPU utilization of a standard video processing pipeline _vs._ ours.** We build an in-memory toy dataset of 1,024 15-second video clips and measure the average elapsed time of sampling \(4\) frames with a pool of \(M\) processes, where \(M\) varies across \(\{8,16,32,64\}\). We measure (a) the processing time per video (latency), (b) the throughput per process, and (c) the overall throughput of the video loader. The numbers are measured on a server with \(2{\times}\) Intel Xeon 24-Core CPU @ 2.20GHz (96 threads in total). We ignore other augmentation techniques in this experiment. ### Datasets and Evaluation Protocols **Ego4D** [25] is the largest egocentric video dataset to date, including 3,670 hours of videos with temporally dense free-form narratives. Following the training split and pairing strategy in EgoVLP [42], we get around 4M video-text pairs with an average length of 1 second. These pairs are further augmented by LaViLa [87] to boost contrastive pre-training. **EK-100** [14] is a popular and challenging egocentric video recognition benchmark with 100 hours of cooking scenarios. We focus on two tasks: Multi-Instance Retrieval (**EK-100 MIR**) and Action Recognition (**EK-100 CLS**). The MIR task requires retrieving the text given videos (V\(\rightarrow\)T) and videos given text (T\(\rightarrow\)V). It contains 67,217/9,668 video-text pairs in the training/testing split respectively.
We use two evaluation protocols: (1) _Zero-shot_, meaning that we apply the video-text encoders pre-trained on Ego4D directly on the EK-100 testing split without any additional tuning; (2) _Fine-tuned_, meaning that we take the pre-trained video-text encoder and perform end-to-end fine-tuning on the EK-100 training split. The evaluation metrics are mean Average Precision (mAP) and normalized Discounted Cumulative Gain (nDCG) of V \(\rightarrow\) T, T \(\rightarrow\) V, as well as the average of V \(\rightarrow\) T and T \(\rightarrow\) V. The CLS task requires classifying each video clip into one of 97 verb classes and one of 300 noun classes, whose combination results in 3,806 action categories. We report top-1 accuracy on verbs, nouns, and actions after finetuning the video encoder. Among the three accuracies, the action-level accuracy is emphasized. ### Experimental Setups **Video-language model architecture.** The video-language model follows CLIP [53], which is composed of a vision encoder and a text encoder. The vision encoder is a Vision Transformer Base (ViT-B) model, whose weights are initialized from CLIP [53] except that we randomly initialize the temporal position embedding \(\mathrm{PE}_{t}\in\mathbb{R}^{T\times N\times D}\) and add it to the original spatial position embedding \(\mathrm{PE}_{s}\in\mathbb{R}^{N\times D}\), _i.e_. \(\mathrm{PE}[i,:,:]=\mathrm{PE}_{t}[i,:,:]+\mathrm{PE}_{s}\). We represent each video clip by \(T=4\) frames when pre-training on Ego4D. When fine-tuning on EK-100, we increase \(T\) from 4 to 16 and linearly interpolate \(\mathrm{PE}_{t}\) along the temporal dimension. The text encoder is a 12-layer GPT-like Transformer [52, 65]. It takes as input one video narrative, tokenized with a BPE tokenizer [57] into at most 77 tokens. With memory-efficient attention, gradient checkpointing, and automatic mixed-precision training, we are able to fit \(256\) video clips on a 24GB GPU so that the total batch size is 2,048. **Hardware.** We conduct experiments on two types of hardware. One is a server with \(8\times\) NVIDIA RTX A5000 GPUs and \(2\times\) Intel Xeon Gold 5220R 24-Core CPUs @ 2.20GHz (96 threads in total); the other has \(4\times\) A5000 GPUs and \(1\times\) AMD Ryzen Threadripper PRO 5975WX 32-Core CPU (64 threads). The videos reside on an NVMe data server via a Network File System (NFS) with 10 Gbps Ethernet. Both machines are much more accessible in academia than a gigantic cluster of A100 GPUs interconnected by InfiniBand. We report the main quantitative results using the 8-GPU server and perform the stress test on data loading using the 4-GPU one unless otherwise specified. ### Main Results We present our main results from two aspects: training efficiency compared to previous works on Ego4D, and strong accuracy _vs._ prior methods on EK-100 MIR. **Pre-training efficiency on Ego4D.** We compare the compute cost for Ego4D pre-training in Table 1. With the original 4M ground-truth narratives, our model can be trained in 5 full epochs using \(8\times\) A5000 GPUs in 18 hours. In contrast, it takes 1,536 GPU-hours to train an EgoVLP [42] video-text model, which is around \(11.8\times\) ours. Thanks to the increased batch size, the zero-shot result is also better: ours is 4.7% better than EgoVLP in terms of zero-shot average mAP on EK-100 MIR. The effect of batch size on embedding losses is generally well understood, and higher batch sizes almost always lead to better performance [53].
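To make the connection between batch size and the embedding loss explicit, the following is a minimal PyTorch sketch of a symmetric InfoNCE objective over in-batch negatives, the standard loss for CLIP-style dual encoders: every other clip in the batch serves as a negative, so a larger batch supplies more (and harder) negatives per positive pair. The function name and temperature value are illustrative and not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def clip_style_infonce(video_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired video/text embeddings."""
    v = F.normalize(video_emb, dim=-1)            # (B, D)
    t = F.normalize(text_emb, dim=-1)             # (B, D)
    logits = v @ t.T / temperature                # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2t = F.cross_entropy(logits, targets)   # retrieve the paired text given a video
    loss_t2v = F.cross_entropy(logits.T, targets) # retrieve the paired video given a text
    return 0.5 * (loss_v2t + loss_t2v)

# Toy usage: a batch of 256 paired embeddings gives each positive 255 in-batch negatives.
loss = clip_style_infonce(torch.randn(256, 256), torch.randn(256, 256))
print(loss.item())
```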
Our pipeline also benefits from larger-scale video-text pairs generated by Visual Language Models [87]. We follow LaViLa [87] and extend the training schedule to cover 10 "effective" epochs. In this setting, our training pipeline achieves an mAP of 31.7% within 33 hours. This is 2.2% higher, at \(\frac{1}{5}\) of the compute cost of LaViLa. The increase in performance is again likely due to the larger batch size. **EK-100 MIR.** We evaluate our pre-trained model on EK-100 MIR in Table 2 using \(T=16\) for a fair comparison with prior methods. In the zero-shot setup, our model achieves 33.2% average mAP and 33.0% average nDCG, which is 2.3% and 1.0% higher than the previous state-of-the-art. Next, we fine-tune the video-text encoder on the EK-100 MIR train split by replacing the InfoNCE loss with the max-margin loss following Wray _et al_. [71]. We see a consistent improvement of 1.3% (51.8 _vs._ 50.5) in average mAP and 1.8% (66.8 _vs._ 65.0) in average nDCG. When we upgrade the backbone to ViT-Large, the gain is boosted to 3.6% in average mAP and 2.5% in average nDCG. **EK-100 CLS.** We fine-tune our pre-trained model on EK-100 CLS and show results in Table 3. With ViT-Base as the backbone, our model achieves 49.1% top-1 action accuracy, which is 2.2% higher than LaViLa with a similar TimeSformer-Base encoder and the same pre-training data. It is also comparable with prior methods while requiring significantly fewer pre-training videos. When we upgrade the backbone to ViT-Large, the gain is amplified: our method achieves 54.4% top-1 action accuracy, which is 3.4% higher than LaViLa with TimeSformer-Large. It also beats the best single model from M&M [78], the 2022 EPIC-Kitchens Challenge winner, which uses extra modalities (RGB+Optical Flow+Audio) and doubled resolution (432\(\times\)432 crop), by a clear margin (53.6% _vs._ 54.4%). ### Ablation Studies **Benefits of large-batch pre-training.** Next, we further study the benefits of large-batch training for video-language models. Fig. 6 summarizes the results. First, we observe that a larger corpus size benefits more from the large-batch training: With the original narratives, the gains are marginal (\(\sim\)\(0.2\%\)) with an increased batch size. However, with additional augmentation by a large language model [87], a larger batch size significantly increases the mAP. One reason might be that the current data scale is still insufficient for training a video-language model in full gear. Second, with all other settings fixed, our model with a ViT-Base backbone is consistently better than LaViLa with a TimeSformer-Base backbone. ViT-Base inherits the full topology of the pre-trained image encoder [53] with the only exception of the temporal positional embedding, making it easier to fine-tune than TimeSformer. This reveals the effectiveness of isotropic Transformer architectures compared to other variants given the same setting (_e.g_. the same batch size in our case), echoing findings on other tasks [39, 63]. Our memory optimization makes it possible to use ViT-Base as is. **Model runtime after fixing different bottlenecks.** We analyze the IO and CPU bottlenecks separately under simplified conditions in §4.3 and §4.2. Here, we measure the runtime of training the video-text dual encoder in the real world. We summarize our findings in Table 4 by starting from the LaViLa baseline.
First, shortening the chunk length from 5 minutes to 15 seconds reduces the data-loading overhead by \(6\times\) and increases the overall training speed by \(2.6\times\). \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} Method & Corpus size & Hardware & Batch size & Memory & GPU-hour & kg CO\({}_{2}\)eq. & 0-shot Avg. mAP \\ \hline \multicolumn{8}{l}{(Original narratives)} \\ \hline EgoVLP [42] & 3.8M & 32\(\times\) A100 & 16 & 22 & 1,536 & 227.33 & 23.3 \\ Ours & 4.0M & 8\(\times\) A5000 & 256 & 19 & 130 (-92\%) & 11.06 (-94\%) & 28.4 (+5.1) \\ \hline \multicolumn{8}{l}{(LLM-augmented)} \\ \hline LaViLa [87] & 35.0M & 32\(\times\) V100 & 32 & 25 & 1,824 & 202.46 & 30.9 \\ Ours & 35.0M & 8\(\times\) A5000 & 256 & 19 & 260 (-86\%) & 22.12 (-89\%) & 32.7 (+1.8) \\ \hline \end{tabular} \end{table} Table 1: **Pre-training cost and 0-shot generalization performance of large video-language models on EK-100 MIR. We compare our training pipeline to the standard training pipeline for large video-language models for two baselines: EgoVLP [42] and LaViLa [87]. Each baseline was originally trained on a multi-node cluster, while our training pipeline fits onto a single 8-GPU machine. We compare training time (GPU-hours), total carbon emission (kg CO\({}_{2}\)eq.) estimated using [1, 38], and zero-shot generalization performance on EK-100 MIR.** \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} Method & Backbone & \multicolumn{3}{c|}{mAP} & \multicolumn{3}{c}{nDCG} \\ & & V\(\rightarrow\)T & T\(\rightarrow\)V & Avg. & V\(\rightarrow\)T & T\(\rightarrow\)V & Avg. \\ \hline \multicolumn{8}{l}{(Zero-shot)} \\ \hline EgoVLP [42] & TSF-B & 19.4 & 13.9 & 16.6 & 24.1 & 22.0 & 23.1 \\ EgoVLP* [42, 87] & TSF-B & 26.0 & 20.6 & 23.3 & 28.8 & 27.0 & 27.9 \\ LaViLa [87] & TSF-B & 35.1 & 26.6 & 30.9 & 33.7 & 30.4 & 32.0 \\ Ours & ViT-B & **37.1** & **28.7** & **32.9** & **34.4** & **31.0** & **32.7** \\ LaViLa [87] & TSF-L & 40.0 & 32.2 & 36.1 & 36.1 & 33.2 & 34.6 \\ Ours & ViT-L & **41.7** & **33.5** & **37.6** & **36.8** & **33.9** & **35.3** \\ \hline \multicolumn{8}{l}{(Finetuned)} \\ \hline MME [71] & TBN & 43.0 & 34.0 & 38.5 & 50.1 & 46.9 & 48.5 \\ JPoSE [71] & TBN & 49.9 & 38.1 & 44.0 & 55.5 & 51.6 & 53.5 \\ EgoVLP [42] & TSF-B & 49.9 & 40.5 & 45.0 & 60.9 & 57.9 & 59.4 \\ LaViLa [87] & TSF-B & 55.2 & 45.7 & 50.5 & 66.5 & 63.4 & 65.0 \\ Ours & ViT-B & **55.9** & **47.8** & **51.8** & **68.2** & **65.4** & **66.8** \\ \hline LaViLa [87] & TSF-L & 54.7 & 47.1 & 50.9 & 68.1 & 64.9 & 66.5 \\ Ours & ViT-L & **57.9** & **51.1** & **54.5** & **70.4** & **67.6** & **69.0** \\ \hline \end{tabular} \end{table} Table 2: **The performance of multi-instance retrieval on EK-100. Our method outperforms previous works in both zero-shot and fine-tuned settings with similar model complexity. Specifically, our model with a ViT-Base video encoder achieves 2.3% higher zero-shot mAP than LaViLa with TimeSformer-Base. Note that this is achieved with a significantly smaller amount of compute cost, details of which are given in Table 1.
EgoVLP* indicates that we evaluate the EgoVLP checkpoint using our data format for a fair comparison.** \begin{table} \begin{tabular}{c|c|c|c|c} Method (Backbone) & Pretrain Data & \multicolumn{3}{c}{Top-1 accuracy} \\ & & Verb & Noun & Action \\ \hline IPL (I3D) [68] & K400 & 68.6 & 51.2 & 41.0 \\ ViViT-L [3] & IN-21k+K400 & 66.4 & 56.8 & 44.0 \\ MoViNet [35] & N/A & 72.2 & 57.3 & 47.7 \\ MTV [80] & WTS-60M & 69.9 & 63.9 & 50.5 \\ Omnivore (Swin-B) [22] & IN-(21k+1k)+K400+SUN & 69.5 & 61.7 & 49.9 \\ MeMViT [74] & K600 & 71.4 & 60.3 & 48.4 \\ LaViLa (TSF-B) [87] & WIT+Ego4D & 69.0 & 58.4 & 46.9 \\ **Ours** (ViT-B) & WIT+Ego4D & 70.0 & 59.8 & 49.1 \\ \hline LaViLa (TSF-L) [87] & WIT+Ego4D & 72.0 & 62.9 & 51.0 \\ **Ours** (ViT-L) & WIT+Ego4D & **73.0** & **65.4** & **54.4** \\ \hline \end{tabular} \end{table} Table 3: **The performance of action recognition on EK-100. We report top-1 accuracy on verbs, nouns, and actions. Ours outperforms all prior works in terms of action-level top-1 accuracy.** Interestingly, this also reduces the model's forward and backward times. Next, we switch to decoding and cropping simultaneously. We can see that the data-loading overhead is further reduced by 0.4 seconds per iteration and the overall training speed increases. ### Application: Expedite Training MAE in Videos The optimized CPU and GPU computation is not limited to training large video-language models. We take VideoMAE [62] as an example. VideoMAE operates on only a small subset of input tokens and masks out the others. This leads to lightweight encoder and decoder computations where data loading becomes a bottleneck [20, 23, 62]. We conduct VideoMAE pre-training on the training split of Kinetics-400 [7], which contains 241,258 videos. We follow the default setting in [62]. The encoder is a standard ViT-Base model while the decoder has 4 additional Transformer Encoder layers. Each input clip contains 16 frames with a sample stride of 4 and is split into non-overlapping \(8\times 14\times 14=1568\) cubes of size \(t\times h\times w=2\times 16\times 16\). Since the number of visible tokens at the encoder side is only 10%, the memory reduction of using memory-efficient attention is marginal. As such, we only apply memory-efficient attention to the decoder. Fig. 7 shows the improved training speed when using Fused DecodeCrop: it reduces the data-loading overhead by almost \(3\times\), _i.e._ from 0.74 to 0.25 seconds per iteration. As a result, the overall training time decreases from 2.4 to 1.55 seconds per iteration, resulting in a 35% reduction in training time. Finally, we conduct a system-level comparison between the original VideoMAE and ours in Table 5 with the same 4-GPU hardware. Under the same 800-epoch schedule, our training pipeline achieves the same level of accuracy after supervised fine-tuning while running \(1.7\times\) faster than VideoMAE. ## 6 Conclusion We study the bottlenecks of training video models from the perspectives of IO, CPU, and GPU computation. With a combination of a memory-efficient attention-based video model, a fused decode-cropping operator, and chunk-based video loading, we show the feasibility of training a state-of-the-art video model in a day on a single machine. **Acknowledgements.** This material is based upon work supported in part by the National Science Foundation under Grant No. IIS-1845485. YZ would like to thank Lingfan Yu for the helpful discussions on profiling training throughput.
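Returning to the VideoMAE application of §5.5: the encoder only ever sees the roughly 10% of tokens that remain visible after masking, which is why its cost and memory drop so sharply. The following small PyTorch sketch illustrates that step with simple per-token random masking and a gather of the visible tokens; VideoMAE's actual tube masking shares the mask across time, and all names and shapes here are illustrative.

```python
import torch

def random_token_mask(batch, num_tokens=1568, visible_ratio=0.10):
    """Return indices of the visible tokens for each clip (illustrative, not tube masking)."""
    num_visible = int(num_tokens * visible_ratio)
    noise = torch.rand(batch, num_tokens)   # one random score per token
    ids_shuffle = noise.argsort(dim=1)      # random permutation per clip
    return ids_shuffle[:, :num_visible]     # (B, num_visible)

tokens = torch.randn(4, 1568, 768)          # B clips, 8*14*14 tokens, D = 768
visible_ids = random_token_mask(4)
# Only the gathered visible tokens are fed to the encoder.
visible = torch.gather(
    tokens, 1, visible_ids.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
print(visible.shape)                        # torch.Size([4, 156, 768])
```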
\begin{table} \begin{tabular}{c|c|c|c|c|c|c} Batch & Mem.-eff. & Shorter & Merged & Data-loading & Training & Actual \\ size & Attention & Chunks & RRC & overhead & speed & Throughput \\ & (§4.1) & (§4.3) & (§4.2) & (sec/iter) & (sec/iter) & (vid/sec) \\ \hline 64 & & & & 0.5 & 3.9 & 130 \\ 64 & & & ✓ & 0.3 & 3.5 & 146 \\ 64 & & ✓ & & 0.1 & 1.84 & 278 \\ 64 & & ✓ & ✓ & 0.1 & 1.84 & 278 \\ \hline 256 & & & & (OOM) & (OOM) & N/A \\ 256 & ✓ & & & 10.1 & 20.8 & 98 \\ 256 & ✓ & & ✓ & 8.3 & 17.8 & 115 \\ 256 & ✓ & ✓ & & 1.3 & 6.5 & 315 \\ 256 & ✓ & ✓ & ✓ & 0.9 & 5.9 & 347 \\ \hline \end{tabular} \end{table} Table 4: **The effect on the runtime of improvements to the standard video training pipeline.** The original model did not fit in the GPU memory in our setup, while all other improvements significantly reduced the training time. Figure 6: **Effect of pre-training batch size.** The numbers are reported using \(T=4\) frames as input. Large-batch training, which was not possible without multi-node training, benefits the video-language contrastive models consistently, especially in the presence of larger-scale narratives. \begin{table} \begin{tabular}{c|c|c|c||c} Method & Backbone & Epochs & GPU-hours & Top-1/5 Acc. (ft.) \\ \hline VideoMAE [62] & ViT-B & 800 & 995 & 80.0/94.4 \\ Ours & ViT-B & 800 & 583 (-41\%) & 80.0/94.5 \\ \end{tabular} \end{table} Table 5: **System-level comparison of training Video MAE.** Both GPU-hours are measured on the 4-GPU hardware. Our pipeline achieves the same accuracy after fine-tuning (“ft.”) while using 41% less pre-training time than VideoMAE [62]. Figure 7: **Training speed comparison of a video MAE model** on \(4\times\) A5000 GPUs and \(1\times\) AMD 32-Core CPU (64 threads). Our Fused DecodeCrop consistently reduces the data-loading overhead and increases the overall training speed compared to baseline training pipelines. ## Appendix A Pseudo-code for Fused DecodeCrop Fig. 8 illustrates the Pythonic pseudo-code for standard RandomResizedCrop for video inputs ("Decode-then-crop") and our proposed Fused DecodeCrop. ## Appendix B Implementation Details ### Pre-training on Ego4D We pre-train on the video-narration pairs from Ego4D [25] with the training recipe inherited from LaViLa [87]. We train the model using AdamW with \((\beta_{1},\beta_{2})=(0.9,0.999)\) and a weight decay of 0.01 for 5 epochs. After Large Language Models augment the video-narration pairs, the "effective" number of epochs is doubled to 10. We use a fixed learning rate of 3e-5. The projection head after the dual encoders is a linear layer with an output dimension of 256. Our optimized pipeline enables us to fit a per-GPU batch size of 256 on a single 8-GPU machine for ViT-B, resulting in a total batch size of 2,048. For ViT-L, we fit a per-GPU batch size of 112 over 8 GPUs, resulting in a total batch size of 896, which is close to 1K. For input, we randomly sample 4 frames between the start and end time of the clip and use standard RandomResizedCrop (0.5, 1.0), which is fused at the video-decoding side, for data augmentation; the input resolution is \(224\times 224\). ### Multi-Instance Retrieval on EK-100 We fine-tune the pre-trained model on EK-100 using AdamW with \((\beta_{1},\beta_{2})=(0.9,0.999)\) and a weight decay of 0.01. We use cosine annealing with warmup, where the base learning rate starts from 1e-6, linearly increases to a peak of 3e-5 in the first epoch, and then gradually decreases to 1e-5 following a half-wave cosine schedule.
We apply the multi-instance max-margin loss [71] with a margin value of 0.2. We use a per-gpu batch size of 64 over 8 GPUs for ViT-B and a per-gpu batch size of 24 over 8 GPUs for ViT-L. We use a stochastic depth ratio of 0.1 in the backbone. For input, we represent each video clip with 16 sampled frames at both training and testing times. At training time, we scale the short side of the video to 256 pixels and then take a 224\(\times\)224 crop and use standard RandomResizedCrop (0.5, 1.0), which is fused at the video-decoding side, for data augmentation. At testing time, we scale the short side to 224 pixels and take the center 224\(\times\)224 crop. ### Action Recognition on EK-100 We fine-tune the pre-trained model on EK100 for 100 epochs using SGD with a momentum of 0.9 and weight decay of 5e-4. We use cosine annealing with warmup, where the base learning rate starts from 1e-6, linearly increases to a peak of 0.012 in the first epoch, and then gradually decreases to 1e-5 following a half-wave cosine schedule. We drop the linear projection head and attach a \(3806\)-dim head for action classification. To get the verb- and noun-level accuracy, we simply marginalize the action-level probability. We use a per-gpu batch size of 64 over 8 GPUs for ViT-B and a per-gpu batch size of 24 over 8 GPUs for ViT-L. We use a stochastic depth ratio of 0.1 in the backbone and apply a dropout of 0.5 before the classification head. We also use a label smoothing of 0.1 and a mixup of 0.8. For input, we represent each video clip with 16 sampled frames at both training and testing times. At training time, we scale the short side of the video to 256 pixels and then take a 224\(\times\)224 crop and use standard RandomResizedCrop (0.5, 1.0) and HorizontalFlip (0.5), both of which are fused at the video-decoding side, for data augmentation. At testing time, we scale the short side to 224 pixels and take the center 224\(\times\)224 crop. Figure 8: Pythonic pseudo-code for video decoding with a cropping filter (§4.2).
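Figure 8 itself is not reproduced in this text extract. As a stand-in, the following is a minimal, independent sketch of the fused decode-and-crop idea built on the ffprobe/ffmpeg command-line tools rather than the paper's actual decoder integration: crop coordinates are sampled from container metadata only, and the crop and scale filters run inside the decoding process so no full-size frames are ever materialized. The helper names, the filter string, and the example file name are assumptions.

```python
import math
import random
import subprocess
import numpy as np

def probe_size(path):
    """Read frame width/height from the container metadata only (no decoding)."""
    out = subprocess.check_output(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "csv=p=0", path])
    w, h = map(int, out.decode().strip().split(","))
    return w, h

def sample_rrc_params(w, h, scale=(0.5, 1.0), ratio=(3 / 4, 4 / 3)):
    """Sample RandomResizedCrop parameters from the frame size alone (cf. Eqs. 1-2)."""
    area = w * h * random.uniform(*scale)
    r = random.uniform(*ratio)
    cw = min(w, round(math.sqrt(area * r)))
    ch = min(h, round(math.sqrt(area / r)))
    x, y = random.randint(0, w - cw), random.randint(0, h - ch)
    return cw, ch, x, y

def decode_with_crop(path, num_frames=4, size=224):
    """Decode while cropping: only the cropped, rescaled pixels leave the decoder."""
    w, h = probe_size(path)
    cw, ch, x, y = sample_rrc_params(w, h)
    raw = subprocess.check_output(
        ["ffmpeg", "-v", "error", "-i", path,
         "-vf", f"crop={cw}:{ch}:{x}:{y},scale={size}:{size}",
         "-frames:v", str(num_frames),
         "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:1"])
    return np.frombuffer(raw, np.uint8).reshape(num_frames, size, size, 3)

# frames = decode_with_crop("clip_chunk_000.mp4")   # hypothetical chunk file name
```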
2308.16540
Time-Varying Quasi-Closed-Phase Analysis for Accurate Formant Tracking in Speech Signals
In this paper, we propose a new method for the accurate estimation and tracking of formants in speech signals using time-varying quasi-closed-phase (TVQCP) analysis. Conventional formant tracking methods typically adopt a two-stage estimate-and-track strategy wherein an initial set of formant candidates are estimated using short-time analysis (e.g., 10--50 ms), followed by a tracking stage based on dynamic programming or a linear state-space model. One of the main disadvantages of these approaches is that the tracking stage, however good it may be, cannot improve upon the formant estimation accuracy of the first stage. The proposed TVQCP method provides a single-stage formant tracking that combines the estimation and tracking stages into one. TVQCP analysis combines three approaches to improve formant estimation and tracking: (1) it uses temporally weighted quasi-closed-phase analysis to derive closed-phase estimates of the vocal tract with reduced interference from the excitation source, (2) it increases the residual sparsity by using the $L_1$ optimization and (3) it uses time-varying linear prediction analysis over long time windows (e.g., 100--200 ms) to impose a continuity constraint on the vocal tract model and hence on the formant trajectories. Formant tracking experiments with a wide variety of synthetic and natural speech signals show that the proposed TVQCP method performs better than conventional and popular formant tracking tools, such as Wavesurfer and Praat (based on dynamic programming), the KARMA algorithm (based on Kalman filtering), and DeepFormants (based on deep neural networks trained in a supervised manner). Matlab scripts for the proposed method can be found at: https://github.com/njaygowda/ftrack
Dhananjaya Gowda, Sudarsana Reddy Kadiri, Brad Story, Paavo Alku
2023-08-31T08:30:20Z
http://arxiv.org/abs/2308.16540v1
# Time-varying quasi-closed-phase analysis for accurate formant tracking in speech signals ###### Abstract In this paper, we propose a new method for the accurate estimation and tracking of formants in speech signals using time-varying quasi-closed-phase (TVQCP) analysis. Conventional formant tracking methods typically adopt a two-stage estimate-and-track strategy wherein an initial set of formant candidates are estimated using short-time analysis (e.g., 10-50 ms), followed by a tracking stage based on dynamic programming or a linear state-space model. One of the main disadvantages of these approaches is that the tracking stage, however good it may be, cannot improve upon the formant estimation accuracy of the first stage. The proposed TVQCP method provides a single-stage formant tracking that combines the estimation and tracking stages into one. TVQCP analysis combines three approaches to improve formant estimation and tracking: (1) it uses temporally weighted quasi-closed-phase analysis to derive closed-phase estimates of the vocal tract with reduced interference from the excitation source, (2) it increases the residual sparsity by using the \(L_{1}\) optimization and (3) it uses time-varying linear prediction analysis over long time windows (e.g., 100-200 ms) to impose a continuity constraint on the vocal tract model and hence on the formant trajectories. Formant tracking experiments with a wide variety of synthetic and natural speech signals show that the proposed TVQCP method performs better than conventional and popular formant tracking tools, such as Wavesurfer and Praat (based on dynamic programming), the KARMA algorithm (based on Kalman filtering), and DeepFormants (based on deep neural networks trained in a supervised manner). Matlab scripts for the proposed method can be found at: [https://github.com/njaygowda/ftrack](https://github.com/njaygowda/ftrack) Time-varying linear prediction, weighted linear prediction, quasi-closed-phase analysis, formant tracking. ## I Introduction Vocal tract resonances (VTRs), commonly referred to as _formant frequencies_, are speech parameters that are of fundamental importance in all areas of speech science and technology. The estimation and tracking of VTRs from speech signals is a challenging problem that has many applications in various areas: in acoustic and phonetic analysis [1, 2], in voice morphing [3], in speech recognition [4, 5], in speech and singing voice synthesis [6, 7], in voice activity detection [8], and in designing hearing aids [9, 10]. Many algorithms of varying complexity have been proposed in the literature for tracking formants in speech signals [11, 12, 13, 14, 15]. A dynamic programming (DP)-based tracking algorithm with a heuristic cost function on the initial formant candidates estimated using conventional linear prediction (LP) analysis was used in [11, 12]. This two-stage approach has a detection stage, where an initial estimate of the VTRs is obtained, followed by a tracking stage. An integrated approach towards tracking was adopted in [13, 14, 15] using state-space methods such as Kalman filtering (KF) and the factorial hidden Markov model (FHMM). In both approaches, analysis of the signal for the accurate estimation (or modeling) of the vocal tract system is an important and necessary computational block. However, it should be mentioned here that there are a few exceptions, such as [15], which uses a non-negative matrix factorization (NMF)-based source-filter modeling of speech signals. 
Recently, deep learning-based techniques [16, 17, 18] have also been studied as alternatives to conventional statistical signal processing-based formant estimation and tracking methods. These methods, however, are based on supervised machine learning, which calls for having annotated speech corpora with which to obtain the ground truth formant frequencies for system training. LP analysis is one of the most widely used methods for estimating VTRs from speech signals [19, 20, 21]. To improve the accuracy of LP, several variants of this all-pole spectral modeling method have been proposed [22]. Among the different modifications, autocorrelation and covariance analyses are the most popular LP methods in formant estimation and tracking [11, 12]. Covariance analysis is known to give more accurate formant estimates than autocorrelation analysis, but the stability of the resulting all-pole filter is not guaranteed in covariance analysis [23, 21]. Even though the filter instability must be avoided in applications where the signal needs to be reconstructed (such as speech synthesis and coding), the instability in itself is not a serious problem in formant tracking. Compared to covariance analysis, closed-phase analysis is known to provide even more accurate VTR estimates by avoiding the open-phase regions of the glottal cycle, which are influenced by the coupling of the vocal tract with the trachea [24, 25]. Closed-phase analysis, however, works better for utterances such as those of low-pitched male voices, which have more samples in the closed phase of the glottal cycle compared to high-pitched female and child voices that might have just a few closed-phase samples per glottal cycle. As a remedy for the lack of data samples in formant estimation, a selective prediction of speech samples can be conducted in spectral modeling. A sample-selective prediction is used in weighted linear prediction (WLP) methods by giving different temporal weighting to the prediction error at each discrete time instant [26, 27, 28, 29, 30, 31, 32, 33]. One such method, called sample selective linear prediction (SSLP) analysis, was proposed in [26] for better modeling of the vocal tract area function. In SSLP, a hard rejecting weighting function is used to eliminate outlier samples in sample selection. A more generalized WLP algorithm was developed in [27] with a continuous weighting function for the prediction residual. In [28], an iterative LP algorithm, robust linear prediction was proposed by utilizing the non-Gaussian nature of the excitation signal to derive a temporal weighting function based on the magnitude of the residual samples. To improve the robustness of linear predictive spectrum estimation, a simple non-iterative WLP method was studied in [29] based on the short-time energy (STE) weighting function. The STE weighting function is a temporal energy function that is computed, for example, in 1-2 ms frames of the speech signal waveform. The STE weighting function emphasizes the importance of the high-energy regions within a glottal cycle in computing the autocorrelation (or covariance) matrix. Therefore, this WLP method is similar to closed-phase LP analysis because the high-energy sections of voiced speech emphasized by the STE weighting correspond roughly to glottal closed-phase regions. 
Since the publication of WLP in [29], several variants of this all-pole modeling method have been developed and used, for example, in the robust feature extraction of speech [29, 31] and in glottal inverse filtering (GIF) [33, 34]. Some of these more recent WLP algorithms have also addressed the stability of the all-pole filter [30, 32]. In [32], a new weighting function, called the attenuated main excitation (AME) window, was studied to improve the accuracy of formant estimation, especially for high-pitched voices. The AME function is designed to attenuate the effect of prominent speech samples in the vicinity of the glottal closure instants (GCIs) on the autocorrelation function. This is justified because the glottal source contributes heavily to these high-energy speech samples, and its biasing effect distorts the formant estimates. As a sequel to using AME as a temporal weighting function in WLP, the quasi-closed-phase (QCP) analysis of speech signals was proposed in [33] for the estimation of glottal flow with GIF. QCP analysis uses a more generalized version of the AME weighting function, for example, with slanted edges instead of vertical ones. In addition, the weighting function of QCP analysis puts more emphasis on the closed-phase regions compared to the open-phase regions that are prone to subglottal coupling. However, the previous experiments with QCP analysis in [33] focused solely on GIF analysis of the voice source, without any evaluation of the QCP algorithm's performance in formant detection and estimation. The spectral modeling of speech is conducted using conventional LP in short-time segments (5-50 ms) by assuming speech to be a quasi-stationary process [21]. This traditional short-time analysis models the real, continuously varying human vocal tract system in a piecewise manner. In addition, the conventional methods based on short-time LP analysis typically use a two-stage detect-and-track approach in tracking formants [11, 12]. It should be noted that even those formant tracking methods that directly track formants from the cepstral coefficients use this piecewise approximation of the vocal tract system [13, 14]. In order to take into account the inherent slowness of the real human vocal tract (i.e., the system being inertial), time-varying linear prediction (TVLP) provides an attractive method that models the vocal tract over longer time spans by defining the model parameters as a function of time by using selected, low-order basis functions [35, 36, 37]. The solution to conventional LP involves minimizing the \(L_{2}\) norm of the prediction error signal, the residual, with an inherent assumption that the excitation source signal is a Gaussian process [22, 38]. Based on the theory of compressed sensing, sparsity constraints can be used to utilize the super-Gaussian nature of the excitation signal [39, 40]. This is achieved by approximating a non-convex \(L_{0}\) norm optimization problem by using a more tractable convex \(L_{1}\) norm optimization [39]. In addition, it was shown in [40] that an iterative reweighted minimization of the norm can achieve increased sparsity of the error signal, which yields a solution closer to \(L_{0}\) norm optimization. In this article, we propose a new time-varying quasi-closed-phase (TVQCP) linear prediction analysis of speech for accurate modeling and tracking of VTRs.
The proposed method aims to improve the estimation and tracking of formants by combining three different ideas: QCP analysis, increased sparsity of the error signal and time-varying filtering. To the best of our knowledge, this combination has not been studied before in formant estimation and tracking and is justified as follows. First, in order to reduce the effect of the glottal source in formant estimation, it is justified to take advantage of QCP analysis to temporally weight the prediction error, which has been shown to improve the estimation of the vocal tract in voice source analysis [33, 34]. Second, filter optimization in previous QCP studies has been conducted using the \(L_{2}\) norm which is known to result in less sparse residuals. Therefore, in order to further enhance the performance of temporal weighting, it is justified to increase the sparsity of the residual in QCP analysis by using the \(L_{1}\) norm. Third, in order to take into account the fact that the natural human vocal tract is a slowly varying physiological system, we argue that formant tracking can be further improved by implementing the proposed \(L_{1}\) norm -based QCP analysis using time-varying filtering. A preliminary investigation of TVQCP for formant tracking was published in a conference paper in [41]. In the current study, our preliminary experiments reported in [41] are expanded in many ways by, for example, including a larger number of evaluation datasets and a larger number of reference methods. In summary, the contributions of the current study are as follows. * Combining the ideas of QCP analysis, \(L_{1}\) norm optimization and TVLP analysis to create a new formant estimation and tracking method, TVQCP. * Studying the advantages of sparsity by comparing the \(L_{1}\) and \(L_{2}\) norm optimization in TVQCP. * Analysing the effects of the different parameters in TVQCP. * Studying the formant tracking performance of TVQCP using synthetic vowels of varying fundamental frequency values and phonation types, using high-pitched child speech simulated with a physical modeling approach, and using natural speech. * Comparing TVQCP with popular formant tracking methods (Wavesurfer, Praat and KARMA) and with a recently proposed deep neural network -based method (DeepFormants) that is based on supervised learning. * Studying the noise robustness of TVQCP for different noise types and signal-to-noise ratio (SNR) scenarios. In the following two sections, the optimization of the TVQCP model is described by first presenting the time-invariant (i.e., stationary) QCP analysis in Section II as background information. After this, the TVQCP (i.e., non-stationary QCP) analysis is presented in Section III. Formant tracking experiments are reported in Section IV and conclusions are drawn in Section V. ## II Quasi-closed-phase analysis QCP analysis belongs to the family of temporally weighted LP methods with a specially designed weighting function based on the knowledge of GCIs [33]. An overview of WLP and the design of the QCP weighting function is given in this section. ### _Weighted linear prediction_ In conventional LP, the current speech sample \(x[n]\) is predicted based on the past \(p\) speech samples as \[\hat{x}[n]=-\sum_{k=1}^{p}a_{k}x[n-k], \tag{1}\] where \(\{a_{k}\}_{k=0}^{p}\) with \(a_{0}=1\) denote the prediction coefficients and \(p\) is the prediction order. 
Let us denote the estimated transfer function of the vocal tract system as \(H(z)=1/A(z)\), where \(A(z)\) is the \(z\)-transform of the prediction coefficients \(\{a_{k}\}_{k=0}^{p}\). The optimal prediction coefficients minimize the overall prediction error given by the cost function \[E=\sum_{n}e^{2}[n], \tag{2}\] where \(e[n]=x[n]-\hat{x}[n]\) is the sample-wise prediction error, the residual. The optimal prediction coefficients are computed by minimizing the cost function (\(\partial E/\partial a_{i}=0,\ 1\leq i\leq p\)), which results in the following normal equations \[\sum_{k=1}^{p}r_{i,k}a_{k}=-r_{i,0},\quad 1\leq i\leq p, \tag{3}\] \[\text{where}\ \ r_{i,k}=\sum_{n}x[n-i]x[n-k]. \tag{4}\] In the above formulation, it can be seen that the prediction error is minimized in the least-square sense by having equal temporal weighting for every sample. However, in WLP, a different (positive) weighting value is imposed on each squared residual sample, resulting in the following WLP cost function \[E_{w}=\sum_{n}w[n]e^{2}[n], \tag{5}\] where \(w[n]\) denotes the weighting function on the sample-wise prediction error \(e[n]\). It should be noted that the weighting in WLP methods is on the error signal and should not be confused with the traditional short-time windowing (e.g., Hamming) of the speech signal that is used for reducing truncation effects in spectral analysis. The prediction coefficients can be computed in a similar way to that of conventional LP by minimizing the cost function (\(\partial E_{w}/\partial a_{i}=0,\ 1\leq i\leq p\)) and solving the resulting normal equations \[\sum_{k=1}^{p}b_{i,k}a_{k}=-b_{i,0},\quad 1\leq i\leq p, \tag{6}\] \[\text{where}\ \ b_{i,k}=\sum_{n}w[n]x[n-i]x[n-k]. \tag{7}\] ### _The choice of weighting function_ As mentioned earlier in Section I, several weighting functions have been proposed for WLP. STE is one of the popular weighting functions used in WLP, and it is demonstrated in Fig. 1. The figure shows an example of a vowel utterance, an electroglottography (EGG) signal, and the derivative of the EGG signal (dEGG), along with rough markings for the closed phases and open phases. The STE weighting function is computed as \[w[n]=\sum_{k=(D+1)}^{(D+M)}x^{2}[n-k], \tag{8}\] where the delay parameter \(D\) controls the peak position (or emphasis) of the weighting function within the glottal cycle and the length parameter \(M\) controls the peak width, as well as the dynamic range and smoothness of the function. Typical values for these two parameters are \(D=0\) and \(M=12\), the latter corresponding to 1.5 ms at an 8 kHz sampling rate. It can be seen that the STE function puts more weight on the high-energy closed-phase regions of the glottal cycle. However, Fig. 1 also demonstrates that the degree of suppression in both the glottal open phase and at the instant of the main excitation depends on the decay of the speech signal waveform within the glottal cycle. Therefore, the STE weighting function does not necessarily suppress these regions completely. The effect of this problem was studied in our previous work on formant estimation of high-pitched vowels [32], which indicated that changing the weighting function from STE to AME resulted in a clear improvement in formant estimation accuracy, particularly for the first formant, for which the average estimation accuracy improved by almost 10 percentage units. A weighting function based on the residual signal energy can also be used. Fig.
1 shows a residual weighting function derived by inverting and normalizing (between 0 to 1) a zero-mean residual energy signal, computed similar to the STE function. As can be seen from the figure, the residual weighting function may not suppress some weaker glottal excitations (at around 25 ms) as well as the stronger ones. This effect can be more pronounced in the vowel beginning and ending frames with a highly transient signal energy. Also, the residual weighting function may not effectively down-weight the contributions from the open-phase regions of the glottal cycle. A QCP weighting function derived from knowledge of GCIs is also shown in Fig. 1. It can be seen that this weighting function emphasizes the closed-phase region of the glottal cycle, while at the same time the function de-emphasizes the region immediately after the main excitation as well as the open-phase region. ### _Quasi-closed-phase weighting function_ An example of the QCP weighting function \(w_{n}\) is shown in Fig. 2, along with the Liljencrants-Fant (LF) glottal flow derivative waveform \(u_{n}\) for about one glottal cycle. The QCP weighting function can be expressed with three parameters: the position quotient (\(PQ=t_{p}/T_{0}\)), the duration quotient (\(DQ=t_{d}/T_{0}\)), and ramp duration \(t_{r}\), where \(T_{0}\) is the time-length of the glottal cycle. In order to avoid possible singularities in the weighted correlation matrix given in Eq. (6), a small positive value, \(d_{w}=10^{-5}\), is used (instead of zero) as the minimum value in the weighting function. The parameters of the QCP weighting function were optimized in [33] using a set of about 65000 LF-excited synthetic vowels of different phonation types and different fundamental frequency values. Rather than aiming at a generic optimal weighting function, the optimization procedure adopted in [33] was based on using a simple, pre-defined waveform depicted in Fig. 2 whose parameters were optimized in a grid search. For more details about the optimization procedure, the reader is referred to Section IV.A in [33]. The optimization procedure reported in [33] gave both fixed QCP parameters and parameters where one of the values (DQ) was pitch-adaptive. In the current study, we used the pitch-adaptive QCP parameters of [33] and the values of the two fixed parameters were as follows: \(PQ\)=0.05 and \(t_{r}\)=0.375 ms (which corresponds to \(N_{ramp}\)=3 samples using the notation of [33]). \(DQ\) was varied between 0.5 and 0.9 (as will be reported in Section IV-E.4) and was set to \(DQ\)=0.8. Using the QCP function as a temporal weighting waveform in WLP provides two distinct advantages when compared to conventional LP (i.e., giving equal weighting to all squared residual samples) or conventional WLP (i.e., weighting is given using the STE function). The first advantage is that the emphasis of the QCP weighting function is on the closed phase region, which provides more accurate modeling of the vocal tract by reducing the effect of coupling between subglottal and supraglottal cavities. The second is that the QCP weighting de-emphasizes the region immediately after the main excitation of the vocal tract, which reduces the biasing effect of the glottal source in the modeling of VTRs. De-emphasizing the main excitation can also be justified from the observation that this region typically shows large prediction errors that become increasingly dominant with short pitch periods. 
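Since the reference implementation accompanying this work is in Matlab (see the repository link in the abstract), the following is only a small NumPy sketch of the core computation discussed above: an STE-style weight following Eq. (8) and the weighted normal equations of Eqs. (6)-(7), applied to a toy synthetic vowel. The variable names and the toy signal are illustrative; a real QCP weight would instead be constructed from GCI estimates as described in this section.

```python
import numpy as np

def ste_weight(x, D=0, M=12):
    """Short-time-energy weight of Eq. (8): w[n] = sum of x^2 over a short trailing window."""
    w = np.zeros_like(x)
    for n in range(len(x)):
        seg = x[max(0, n - D - M):max(0, n - D)]
        w[n] = np.sum(seg ** 2)
    return w

def weighted_lp(x, w, p=10):
    """Solve the weighted normal equations (Eqs. 6-7) for prediction coefficients a_1..a_p."""
    N = len(x)
    # Row k of X holds x[n-k] for n = p..N-1 (covariance-style prediction region).
    X = np.stack([x[p - k:N - k] for k in range(p + 1)])
    B = (X * w[p:N]) @ X.T                      # B[i,k] = sum_n w[n] x[n-i] x[n-k]
    a = np.linalg.solve(B[1:, 1:], -B[1:, 0])   # a_1..a_p from Eq. (6)
    return np.concatenate(([1.0], a))           # A(z) = 1 + a_1 z^-1 + ... + a_p z^-p

# Toy usage: an impulse-train-excited second-order resonance near 500 Hz at 8 kHz.
fs, f0 = 8000, 120
exc = np.zeros(fs // 4)
exc[::fs // f0] = 1.0
x = np.zeros_like(exc)
for n in range(2, len(x)):
    x[n] = 1.8 * np.cos(2 * np.pi * 500 / fs) * x[n - 1] - 0.81 * x[n - 2] + exc[n]
print(np.round(weighted_lp(x, ste_weight(x), p=4), 3))
```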
QCP analysis has previously been shown to be effective in estimating the voice source with GIF [33]. Fig. 1: An illustration of different weighting functions for use in WLP: (a) the speech signal (the solid line) and the short-time energy (STE) weighting function (the dashed line); (b) the LP residual (the solid line) and the weighting function (the dashed line) derived from the residual; and (c) EGG (the solid blue line), dEGG (the solid black line), and three different weighting functions: speech signal–based STE weighting (the dashed red line), residual based weighting (the dashed pink line), and QCP weighting (the dashed black line). Fig. 2: The design of the quasi-closed-phase (QCP) weighting function \(w_{n}\) (the dotted line), along with the LF glottal flow derivative signal \(u_{n}\) (the solid line) for about one glottal cycle. ## III Time-varying quasi-closed-phase analysis The spectral estimation and tracking method proposed in this study, TVQCP analysis, combines the ideas of sample-selective prediction (i.e., the underlying idea of QCP), sparsity of the prediction error, and long-time nonstationary analysis of the vocal tract system (i.e., the underlying idea of TVLP). In the following, the normal equations of the proposed TVQCP analysis are derived by starting from conventional LP. Note that the optimization schemes in Section II all used the \(L_{2}\) norm of the error signal whereas this section uses more general optimization norms. ### _Linear prediction_ In conventional LP, the current sample \(x[n]\) is predicted according to Eq. (1) as a linear weighted sum of the past \(p\) samples. By denoting the window size as \(N\), the predictor coefficients can be estimated as a solution to the convex optimization problem of generic norm \(L_{m}\) given by \[\hat{\mathbf{a}}=\operatorname*{arg\,min}_{a}||\mathbf{x}-\mathbf{X}\mathbf{a}||_{m}^{m}, \tag{9}\] \[\text{where}\quad\mathbf{x}=[x[0],x[1],\ldots,x[N-1]]_{{}_{N\times 1}}^{T},\] (10) \[\mathbf{a}=[a_{1},a_{2},\ldots,a_{p}]_{{}_{p\times 1}}^{T},\] (11) \[\mathbf{X}=[X_{0},X_{1},\ldots,X_{N-1}]_{{}_{N\times p}}^{T},\quad\text{and}\] (12) \[X_{n}=[x[n-1],\ldots,x[n-p]]_{{}_{p\times 1}}^{T}. \tag{13}\] The minimization of the \(L_{2}\) norm of the residual leads to the least square solution of conventional LP. However, imposing a sparsity constraint on the residual provides better modeling of both the excitation and the vocal tract system. This is achieved by minimizing the \(L_{1}\) norm of the residual instead of its \(L_{2}\) norm. This change in the optimization norm is known to give a convex approximation of the solution to the \(L_{0}\) norm optimization problem, also referred to as sparse linear prediction (SLP) [39, 40]. ### _Weighted linear prediction_ WLP analysis uses sample-selective prediction and gives differential emphasis to different regions of the speech signal within a glottal cycle (as discussed earlier in Section II-A). Using a generic \(L_{m}\) norm, WLP can be expressed by minimizing the weighted error signal given by \[\hat{\mathbf{a}}=\operatorname*{arg\,min}_{a}\mathbf{W}||\mathbf{x}-\mathbf{X}\mathbf{a}||_{m}^{m}, \tag{14}\] where \(\mathbf{W}_{N\times N}\) is a diagonal matrix with its diagonal elements corresponding to a weighting function \(w_{n}\), imposed on the prediction error signal. ### _Time-varying linear prediction_ TVLP is a generalization of conventional LP where the predictor coefficients are continuous functions of time.
Therefore, TVLP can be used in the spectral analysis of nonstationary speech signals using long-time (e.g., 100-200 ms) frames. TVLP imposes a time-continuity constraint on the vocal tract system in the form of low-order basis functions. Due to this time-continuity constraint, TVLP is capable of modeling the slowly varying vocal tract system better than conventional LP, which is based on a piecewise-constant quasi-stationary approximation. In TVLP, the current speech sample is predicted using the past \(p\) samples as

\[\hat{x}[n]=\sum_{k=1}^{p}a_{k}[n]x[n-k], \tag{15}\]

where \(a_{k}[n]\) denotes the \(k^{th}\) time-varying prediction filter coefficient at time instant \(n\). The time-variant predictor coefficient \(a_{k}[n]\) can be expressed using different basis functions, such as polynomials (i.e., power series), trigonometric series, or Legendre polynomials [35]. In this study, we use the simple \(q^{th}\) order polynomial approximation given by

\[a_{k}[n]=\sum_{i=0}^{q}b_{k_{i}}n^{i}. \tag{16}\]

The TVLP coefficients are estimated by minimizing the \(L_{m}\) norm of the error signal. This can be presented as the convex optimization problem given by

\[\hat{\mathbf{b}}=\operatorname*{arg\,min}_{b}||\mathbf{x}-\mathbf{Y}\mathbf{b}||_{m}^{m}, \tag{17}\]
\[\text{where}\quad\mathbf{x}=[x[0],x[1],\ldots,x[N-1]]_{N\times 1}^{T}, \tag{18}\]
\[\mathbf{b}=[b_{1_{0}},\ldots,b_{1_{q}},\ldots,b_{p_{0}},\ldots,b_{p_{q}}]_{p(q+1)\times 1}^{T}, \tag{19}\]
\[\mathbf{Y}=[Y_{0},Y_{1},\ldots,Y_{N-1}]_{N\times p(q+1)}^{T},\quad\text{and} \tag{20}\]
\[Y_{n}=[x[n-1],nx[n-1],\ldots,n^{q}x[n-1],\ldots,x[n-p],nx[n-p],\ldots,n^{q}x[n-p]]_{p(q+1)\times 1}^{T}. \tag{21}\]

Again, the \(L_{2}\) and \(L_{1}\) norm minimizations lead to the least-squares solution and the sparse solution of the convex optimization problem, respectively [37, 39, 40]. It is to be noted that the \(L_{2}\) norm minimization can be solved in closed form, whereas convex optimization with the \(L_{1}\) norm calls for an iterative approach and therefore its computational complexity is larger. The current study uses linear programming in convex optimization for the \(L_{1}\) norm-based methods. Hence, the computational complexity of the \(L_{1}\) norm-based LP methods studied in this article is clearly higher than in the \(L_{2}\) norm-based LP methods.

### _Time-varying weighted linear prediction_

As the final step of the model optimization, let us combine WLP, the technique described in Section II-A and Section III-B, and TVLP, the approach presented in Section III-C. The combination of these two, time-varying weighted linear prediction (TVWLP) analysis, is analogous to WLP, where the predictor coefficients are estimated by minimizing the weighted error signal given by

\[\hat{\mathbf{b}}=\operatorname*{arg\,min}_{b}\mathbf{W}||\mathbf{x}-\mathbf{Y}\mathbf{b}||_{m}^{m}, \tag{22}\]

where \(\mathbf{W}_{N\times N}\) is a diagonal matrix with its diagonal elements corresponding to the weighting function \(w_{n}\), imposed on the error signal. Based on Eq. (22), in this study we propose a new TVQCP analysis of speech signals that uses the QCP weighting function (described in Section II-C) in matrix \(\mathbf{W}\) of the TVWLP framework above. By using the \(L_{1}\) norm (i.e., assigning \(m=1\) in Eq. (22)), the TVQCP analysis enables imposing a sparsity constraint on the excitation signal.
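
As a concrete illustration of Eqs. (15)-(22), the following sketch estimates the coefficients in the \(L_{2}\) case, where the weighted problem has a closed-form least-squares solution; the \(L_{1}\) variants would instead solve a linear program. The function names, the normalization of the time axis, and the use of `numpy.linalg.lstsq` are implementation assumptions for this example, not details taken from the paper.

```python
import numpy as np

def tvqcp_l2(x, w, p=8, q=3):
    """Closed-form L2 solution of the weighted time-varying LP problem.

    x : speech frame of length N (e.g. a 100-ms window)
    w : per-sample weights, e.g. a QCP weighting function (all ones -> plain TVLP)
    p : prediction order, q : polynomial order of the basis for a_k[n]
    Returns b of shape (p, q+1) with a_k[n] = sum_i b[k-1, i] * t[n]**i, where t
    is time normalized to [0, 1) to keep the polynomial basis well conditioned.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    t = np.arange(N) / float(N)
    cols = []
    for k in range(1, p + 1):
        xk = np.concatenate([np.zeros(k), x[:-k]])      # x[n-k], zeros before the frame
        for i in range(q + 1):
            cols.append((t ** i) * xk)                  # basis term t^i * x[n-k]
    Y = np.stack(cols, axis=1)                          # regressor of Eq. (21), shape (N, p*(q+1))
    sw = np.sqrt(np.asarray(w, dtype=float))            # weighted least squares via sqrt-weights
    b, *_ = np.linalg.lstsq(Y * sw[:, None], x * sw, rcond=None)
    return b.reshape(p, q + 1)

def predictor_at(b, t_norm):
    """Evaluate the time-varying coefficients a_k at normalized time t_norm."""
    powers = t_norm ** np.arange(b.shape[1])
    return b @ powers
```

Formant candidates at a given time instant can then be obtained, as in conventional LP, from the angles of the roots of \(A(z)=1-\sum_{k}a_{k}[n]z^{-k}\), evaluated with the coefficients returned by `predictor_at`.
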
## IV Formant tracking experiments

One of the main problems in evaluating the performance of a formant tracker and comparing it with other methods is the availability of absolute ground truth in formant frequency values. It is possible to have such absolute ground truth in the case of synthetic speech signals. However, there are two limitations with using synthetic speech signals. The first is that the reference formant frequencies provided by synthetic utterances can be biased towards a particular method of formant tracking if there is a strong similarity between the synthesis model and the analysis model of the tracker. The second is that formant trackers are ultimately required to process natural speech signals that do not have any reference ground truth. The problem with using natural speech signals is the need for a semi-supervised human annotation of the formant frequency values, which by itself can vary from one annotator to another [42]. Formant tracking from natural speech can also be biased by the tools and techniques used for the annotation, such as spectrographic representations and/or methods used for deriving some of the initial estimates. Also, it should be noted that the actual resonance frequencies of the vocal tract cavities need not exactly coincide with the apparent peaks in speech spectra, because these spectral peaks might also be harmonics that are a result of the glottal excitation. In order to address the above problem with reference ground truth, the performance of formant tracking with the proposed TVQCP method is studied using both synthetic and natural speech signals. Two different types of synthetic signals were used. In one type, vowels are produced with conventional source-filter modeling of the speech production apparatus using the LF glottal source model and an all-pole vocal tract filter. In the other type, utterances are generated using physical modeling of the vocal tract and glottal source [43, 44]. The latter approach is different from the LF source-filter technique because the speech signal is generated based on physical laws, rather than by a digital parametric model similar to the model assumed in LP and its variants. The physical modeling approach is used to avoid any inherent bias that the LF source-filter technique may have towards the proposed TVQCP method, owing to the fact that both use LP-based methods in vocal tract modeling.

### _Performance metrics_

The formant tracking performance of different methods is evaluated in terms of two different metrics: the formant detection rate (FDR) and the formant estimation error (FEE). Throughout this study, formants are identified by looking for the local peaks of the power spectrum. The FDR is measured in terms of the percentage of frames where a formant is hypothesized within a specified deviation from the ground truth. The FDR for the \(i^{th}\) formant over \(K\) analysis frames is computed as

\[D_{i}=\frac{1}{K}\sum_{n=1}^{K}I(\Delta F_{i,n}), \tag{23}\]
\[I(\Delta F_{i,n})=\left\{\begin{array}{ll}1&\text{if }\Delta F_{i,n}/F_{i,n}<\tau_{r}\ \ \&\ \ \Delta F_{i,n}<\tau_{a}\\ 0&\text{otherwise,}\end{array}\right. \tag{24}\]

where \(I(\cdot)\) denotes a binary formant detector function and \(\Delta F_{i,n}=|F_{i,n}-\hat{F}_{i,n}|\) is the absolute deviation of the hypothesized formant frequency \(\hat{F}_{i,n}\) for the \(i^{th}\) formant at the \(n^{th}\) frame from the reference ground truth \(F_{i,n}\).
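
For illustration, a minimal sketch of the detection-rate computation in Eqs. (23)-(24) is given below. The threshold values shown as defaults are placeholders, not the settings used in the experiments, and the function name is chosen only for this example.

```python
import numpy as np

def formant_detection_rate(f_ref, f_hyp, tau_r=0.3, tau_a=300.0):
    """FDR of Eqs. (23)-(24) for a single formant track.

    f_ref, f_hyp : reference and hypothesized frequencies (Hz) over K frames
    tau_r, tau_a : relative and absolute deviation thresholds (placeholders)
    Returns the percentage of frames in which the formant is detected.
    """
    f_ref = np.asarray(f_ref, dtype=float)
    f_hyp = np.asarray(f_hyp, dtype=float)
    dev = np.abs(f_ref - f_hyp)                       # Delta F_{i,n}
    detected = (dev / f_ref < tau_r) & (dev < tau_a)  # I(Delta F_{i,n}) of Eq. (24)
    return 100.0 * detected.mean()                    # Eq. (23), expressed in percent
```
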
The thresholds \(\tau_{r}\) and \(\tau_{a}\) denote the relative deviation and the absolute deviation, respectively. Using a single detection threshold, either a relative threshold or an absolute threshold, is problematic on a linear frequency scale. For higher formants, the relative deviation needs to be smaller than that for the lower formants. Similarly, the absolute deviation for lower formants needs to be smaller than that for the higher formants. In order to define a common detection strategy for all formants, two thresholds, one on the relative deviation and the other on the absolute deviation, must be used. The relative threshold controls the detection rates of lower formants, whereas the absolute threshold controls the detection rates of higher formants. The FEE is measured in terms of the average absolute deviation of the hypothesized formants from the ground truth. The FEE for the \(i^{th}\) formant over \(K\) analysis frames is computed as

\[R_{i}=\frac{1}{K}\sum_{n=1}^{K}\Delta F_{i,n}. \tag{25}\]

The FDR and FEE values are only computed for frames that are voiced or that belong to some particular phonetic category of interest. One problem with accumulating FEEs over all frames is that a few large error outliers can dictate the overall score. This is even more severe for the root mean square error (RMSE) criterion, which is a widely used metric for measuring formant estimation accuracy. In view of this, we propose using the mean absolute error, which is less sensitive to outliers, as a measure for FEE. Reading FEE scores in conjunction with FDR scores, which denote the number of frames detected within a fixed threshold, can give a better sense of the performance of a formant tracker.

### _The choice of window size and polynomial order_

As outlined in Section III-C, TVLP analysis involves two parameters (in addition to the prediction order \(p\)) that need to be set: the window size \(N\) and the polynomial order \(q\). Longer window sizes (e.g., 500 ms) are useful for the efficient parameterization of speech signals but would introduce longer delays. Moreover, longer window sizes require higher polynomial orders in order to model the time-varying characteristics of the vocal tract and can lead to computational problems due to the inversion of rank-deficient matrices. Therefore, moderate window sizes (e.g., 100-200 ms) are a good overall compromise that enables the efficient parameterization of the slowly time-varying characteristics of the vocal tract using low-order polynomials (e.g., \(q=3\)). In order to study the choice of the window size and polynomial order in TVLP analysis, an initial experiment was conducted on a set of synthetic utterances. The effect of these two parameters on a larger dataset of natural speech utterances will be studied later. The synthetic speech utterances were generated starting with ten (5 male, 5 female) randomly chosen natural speech utterances from the TIMIT-VTR database [42]. Each natural utterance was first inverse filtered using a high-order (\(p=18\)) short-time LP analysis (20-ms frame size, 10-ms frame shift, and a sampling rate of 8 kHz) to compute a spectrally flat residual signal that was devoid of any formant structure. This residual signal was then used to excite an \(8^{th}\) order all-pole model constructed using the first four reference formants and bandwidths available for the utterances as part of the VTR database [42].
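
The construction of the all-pole test signal described above can be sketched as follows; the helper names, the use of `scipy.signal.lfilter`, and the omission of any gain normalization or frame stitching are simplifications assumed for this illustration.

```python
import numpy as np
from scipy.signal import lfilter

def allpole_from_formants(formants_hz, bandwidths_hz, fs):
    """A(z) of an all-pole filter with one conjugate pole pair per formant."""
    poles = []
    for f, b in zip(formants_hz, bandwidths_hz):
        r = np.exp(-np.pi * b / fs)              # pole radius set by the bandwidth
        theta = 2.0 * np.pi * f / fs             # pole angle set by the formant frequency
        poles += [r * np.exp(1j * theta), r * np.exp(-1j * theta)]
    return np.real(np.poly(poles))               # denominator coefficients, a[0] = 1

def synthesize_vowel(residual, formants_hz, bandwidths_hz, fs=8000):
    """Excite the 8th-order all-pole model (four formants) with a flat LP residual."""
    a = allpole_from_formants(formants_hz, bandwidths_hz, fs)
    return lfilter([1.0], a, residual)
```
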
The results of the experiment are shown in Fig. 3, which depicts the relative deviation of the estimated formants from their ground truth, averaged over the first three formants (\(F_{1}\), \(F_{2}\), and \(F_{3}\)) for different values of the polynomial order and window size. TVLP analyses computed using the \(L_{2}\) norm are shown as blue bars and those computed with the \(L_{1}\) norm are shown as red bars. Fig. 3(a) depicts the TVLP performance using a fixed window size of 100 ms but with a varying polynomial order \(q\). It can be seen that the best performance is obtained for polynomial orders between \(q=2\) and \(q=4\), and the performance starts to deteriorate at the order of \(q=5\). Similarly, Fig. 3(b) shows the performance obtained by varying the window size at a fixed polynomial order of \(q=3\). It can be seen that the performance is good with moderate window sizes of 100 ms and 200 ms, but the performance starts to deteriorate for longer window sizes. Therefore, in the experiments that follow in the remainder of the paper, we used a window size of 100 ms and a polynomial order of \(q=3\) in the time-varying LP analyses.

Fig. 3: Relative deviation (in percentage) of the TVLP-estimated formants from their ground truth, averaged over the three first formants as a function of (a) polynomial order and (b) window size.

An example of using two different polynomial orders (\(q=0\) and \(q=3\)) for an utterance produced by a female speaker is shown in Fig. 4. The figure depicts the contours of the two lowest coefficients (\(a_{1}\) and \(a_{2}\)) computed using TVLP with the \(L_{2}\) norm. It can be seen that the filter taps computed using \(q=0\) and \(q=3\) follow a similar general trend over the entire time-span shown in the figure, but the contours computed using \(q=3\) are clearly more dynamic and their values change also during each frame.

Fig. 4: The trajectories of \(a_{k}[n]\) for q=0 and q=3 in TVLP-L2 for the first and second coefficients (\(a_{1}\) and \(a_{2}\)) are shown in (a) and (b), respectively. The word 'materials' produced by a female talker is used for the illustration.

### _Experiments on LF model-based synthetic data_

The performance of the proposed TVQCP method in formant tracking is studied next by analyzing how the method's performance is affected by variations in the glottal excitation (both in fundamental frequency and phonation type). Formant tracking provided by the TVQCP method is compared with that of TVLP using both the \(L_{1}\) norm and the \(L_{2}\) norm. In addition, a comparison with the traditional LP covariance-based method (known as the entropic signal processing system (ESPS) method [45]) used in the popular open-source tool Wavesurfer [11] (denoted by "WSURF") is also provided. The TVQCP and TVLP analyses are carried out over non-overlapping 100-ms windows using a prediction order of \(p=8\) and a polynomial order of \(q=3\). The ESPS method used in Wavesurfer adopts a short-time (25-ms Hamming window, 10-ms frame shift) \(12^{th}\) order stabilized covariance-based LP analysis followed by a dynamic programming-based tracking of formants [11].

#### IV-C1 The dataset

Four different phonation types (creaky, modal, breathy, and whispery phonation) and four different ranges of fundamental frequency (mean utterance \(F_{0}\) scaled by the factors 1.0, 1.5, 2.0, and 2.5) are considered for generating the synthetic speech test utterances. The phonation type and \(F_{0}\) range are controlled by using the LF model for the glottal source [46]. The LF source parameter values used to synthesize the different phonation types in the current study are taken from [47, 48]. The four different fundamental frequency ranges are generated by scaling the original \(F_{0}\) contour of a natural speech
utterance (3-5 sec long) by different factors before synthesizing the speech signal. A modal LF excitation is generated based on the new \(F_{0}\) contour while retaining the original rate of formants and hence keeping the speaking rate intact. Speech signals are synthesized by filtering the LF glottal flow derivative signal using an all-pole model with the first four semi-automatically derived formants and bandwidths of the natural utterance that is part of the VTR database [42]. Ten randomly selected utterances (5 male and 5 female) from the VTR database are synthesized for the four different phonation types and the four different mean \(F_{0}\) values at a sampling rate of 8 kHz.

#### IV-C2 The effect of phonation type

The performance of the TVQCP and TVLP methods is shown in Fig. 5 for the four different phonation types. The TVQCP method that minimizes the \(L_{2}\) norm (denoted by TVQCP-L2) performed best overall, marginally better than the TVQCP method that minimizes the \(L_{1}\) norm (denoted by TVQCP-L1). The \(L_{1}\) norm minimization seemed to perform better than the \(L_{2}\) norm in most cases for creaky and modal phonations, while the \(L_{2}\) norm performed better for breathy and whispery phonations, which exhibit larger open quotients and higher spectral tilts. Overall, it can be seen that the TVQCP methods performed better than their TVLP counterparts across all formants and all phonation types. Moreover, the performance of both the TVLP and TVQCP methods is clearly better than that of the popular Wavesurfer tool.

Fig. 5: The absolute deviation (FEE) of the estimated first three formants (\(F_{1}\), \(F_{2}\), and \(F_{3}\)) from their ground truth and their overall average for different phonation types of the LF model–based synthetic data.

#### IV-C3 The effect of fundamental frequency

The performance of the TVQCP and TVLP methods is shown in Fig. 6 for all four ranges of \(F_{0}\) values. It can be seen that TVQCP optimized using both the \(L_{1}\) and \(L_{2}\) norms provided consistent improvements over TVLP up to a scale factor of 2.0. The mixed performance for the scale factor 2.5 may be due to the new \(F_{0}\) values moving very close to \(F_{1}\) in the synthetic utterances. However, it has been observed that this minor aberration gets corrected if \(F_{1}\) is shifted upward by a small percentage. Also, the \(L_{1}\) norm optimization seemed to perform better than the \(L_{2}\) norm in most cases, except for TVLP for \(F_{1}\) and \(F_{2}\) at an \(F_{0}\) scale factor of 1.0. In terms of overall performance across all fundamental frequency ranges and formants, TVLP-L2, TVQCP-L2, TVLP-L1, and TVQCP-L1 showed a consistent improvement in this order. The overall performance of the formant tracking methods is given in Table I by averaging over all phonation types and \(F_{0}\) ranges. The general observation is that the FEE reduced considerably with the use of QCP analysis (TVQCP analysis vs. TVLP) and that there is a marginal reduction when using the sparsity constraint (\(L_{1}\) norm vs. \(L_{2}\) norm). Overall, both the TVLP and TVQCP methods provided large improvements over the popular Wavesurfer tool, with a 60 to 70 percentage unit reduction in the estimation error.

Fig. 6: The absolute deviation (FEE) of the estimated first three formants (\(F_{1}\), \(F_{2}\), and \(F_{3}\)) from their ground truth and their overall average for different mean \(F_{0}\) values of the LF model–based synthetic data.

### _Experiments on simulated high-pitched child speech using a physical modeling approach_

The formant estimation accuracy of the proposed TVQCP method is compared to that of TVLP using synthetic data generated by an alternative, physical modeling approach of the speech production apparatus [43]. The experiments in this section try to address two issues with the evaluation of formant estimation and tracking methods. One is the bias of the LF model-based synthetic data towards LP-based methods, and the other is the performance of these methods on speech signals at very high fundamental frequencies. An \(8^{th}\) order analysis is used for all the methods, and the original data at 44.1 kHz is downsampled to 16 kHz and passed through a pre-emphasis filter \(P(z)=1-0.97z^{-1}\) before further processing. The TVLP and TVQCP methods use a 100-ms window size and a polynomial order of \(q=3\). The final formant estimates are evaluated at a 20-ms frame shift to match the rate of the reference formants.

#### IV-D1 The dataset

The simulated data consists of eight short child speech utterances of a high pitch (as high as 600 Hz) used in [44]. The eight utterances include two steady vowels, [a] and [i], of 340 ms duration each with a constant \(F_{0}\) of 400 Hz. The six simulations of 1.03 s each are three time-varying vocal tract shapes combined with two different time-dependent \(F_{0}\) variations. The three time-varying vocal tract shapes correspond to the sequences of sounds {i.a.i.a.i.a}, {ae.u.ae.u.ae.u}, and {i.a.i}. The fundamental frequency of the utterances varies between 240 Hz and 500 Hz, one in a smooth increasing-decreasing pattern and the other in a reverse pattern over the entire length of the utterance. All the utterances have four vocal tract resonances and are stored at a 44.1 kHz sampling rate. More information on the formant and \(F_{0}\) contours used and other details of the dataset can be found in [44].

#### IV-D2 The results

FEEs computed using both the \(L_{1}\) and \(L_{2}\) norms in the TVLP and TVQCP methods are given in Table II. It is seen that the TVQCP method tends to give a consistent shift in estimating the fourth formant. This could be due to many reasons, including the pre-emphasis, the sampling rate, model limitations, and the limited synthetic data, and it needs further investigation. In view of this, further discussions in this section are limited to the first three formants. It can be seen from the table that imposing a sparsity constraint with the \(L_{1}\) norm minimization clearly improves the accuracy of TVLP and TVQCP. The continuity constraint imposed by the time-varying models (TVLP) does not seem to provide much improvement on its own. However, when combined with the QCP weighting, the continuity constraint seems to provide large improvements in the case of TVQCP-L1 and some marginal improvement in the case of TVQCP-L2. Owing to the limited availability of data, it may not be possible to draw too many inferences from this experiment. Nevertheless, it demonstrates the usefulness of combining the ideas of QCP analysis, time-varying linear predictive analysis, and the sparsity constraint for formant tracking applications.

### _Experiments on natural speech data_

One of the primary goals of this paper is to evaluate the performance of the proposed TVQCP-based formant tracker on real speech utterances. A detailed evaluation of the TVQCP method and a comparison with some of the state-of-the-art formant trackers is presented in this section.

#### IV-E1 The dataset

The performance of the different methods in formant tracking was evaluated on natural speech signals using the VTR database published in [42]. The test data of the VTR database is used for the evaluation, and this data consists of 192 utterances (8 utterances pronounced by each of 8 female and 16 male speakers). The duration of each utterance varies between 2 and 5 s. The first four reference formant frequency and bandwidth values, derived using a semi-supervised LP-based algorithm [49], are provided for every 10-ms interval. The first three reference formant frequency values have been verified and corrected manually based on spectrographic evidence. All the speech data, originally recorded at a 16 kHz sampling rate, are downsampled to 8 kHz before processing. A pre-emphasis filter of \(P(z)=1-0.97z^{-1}\) is used to preprocess the speech signals. Based on our earlier experiments on formant tracking using synthetic speech signals, we use a default window size of 100 ms, a prediction order of 8, and a polynomial order of 3 for the time-varying linear predictive methods unless otherwise mentioned. All the performance metrics presented in this section are average scores computed over vowels, diphthongs, and semivowels. These are phonetic categories whose manually corrected formant ground truths are more reliable compared to other categories.

#### IV-E2 The effect of window size, prediction order, and polynomial order

The effect of the choice of window size, prediction order, and polynomial order on the tracking performance of the TVQCP-L1 and TVQCP-L2 methods is provided in Tables III and VI, where the window size in ms is denoted as \(N_{t}\). It can be seen that the performance of the TVQCP methods is quite stable over a range of values for the window size and polynomial order. However, the performance seems to be slightly sensitive to the choice of prediction order, which needs further investigation.

#### IV-E3 The choice of weighting function

The effect of using different weighting functions within the TVWLP framework with the \(L_{1}\) norm and the \(L_{2}\) norm on formant tracking performance is given in Tables IV and VII. The different weighting functions studied include the signal energy-based STE function, the residual-based weighting function, and the QCP weighting function discussed earlier in Section II-B. It can be seen that the QCP weighting function performs best among the three compared weighting functions. Note that the TVWLP method with the QCP weighting in Table IV and Table VII corresponds to TVQCP-L1 analysis and TVQCP-L2 analysis, respectively.

#### IV-E4 Robustness to GCI detection errors and the DQ parameter

The robustness of the proposed TVQCP method to errors in GCI detection was studied by artificially inducing errors in the estimated GCI locations. Two types of errors were studied. In the first, a uniformly distributed random error (\(Rerr\)) was added to the estimated GCIs. In the second, there was a fixed error (\(Ferr\)) that gives a consistent bias to the estimated GCIs. The formant tracking results for random and fixed GCI errors are given in Tables V and VIII for TVQCP-L1 and TVQCP-L2, respectively.
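
A minimal sketch of how such GCI perturbations can be injected is given below; the sign convention of the random error, the rounding to integer sample indices, and the function name are assumptions made for this illustration.

```python
import numpy as np

def perturb_gcis(gcis, fs, random_ms=0.0, fixed_ms=0.0, seed=0):
    """Add a uniformly distributed random error (Rerr) and/or a fixed bias
    (Ferr), both given in milliseconds, to estimated GCI sample indices."""
    rng = np.random.default_rng(seed)
    rerr = rng.uniform(-random_ms, random_ms, size=len(gcis)) * fs / 1000.0
    ferr = fixed_ms * fs / 1000.0
    return np.rint(np.asarray(gcis) + rerr + ferr).astype(int)
```
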
It can be seen that the performance of the proposed TVQCP methods is robust to GCI errors in the range of 1-2 ms. Simulating a fixed GCI error is equivalent to altering the position quotient (PQ) of the QCP weighting function (described in Section II-C). The performance of TVQCP in relation to varying the duration quotient (DQ) of the QCP weighting function between 0.5 and 0.9 is given in Tables V and VIII using \(L_{1}\) and \(L_{2}\) norm minimization, respectively. It can be seen that TVQCP performed robustly over a range of DQ values, and the best performance was obtained with DQ=0.8, i.e., using a weighting function that suppresses the residual energy in 20% of the samples during the fundamental period. Therefore, this value of DQ was used in all the analyses of the study.

#### IV-E5 A comparison of time-variant linear predictive methods and other formant tracking methods for clean speech

The performance of the TVLP and TVQCP methods with different norms is compared to some of the popular formant tracking methods in Table IX. "PRAAT" denotes the Burg method of LP analysis with a 50-ms Gaussian-like window function that is used for formant tracking in Praat, a widely used speech research tool [12]. "MUST" denotes an adaptive filter-bank based method proposed by Mustafa et al. [50]. "WSURF" denotes the formant tracker part of Wavesurfer [11] that uses a stabilized covariance analysis over a 25-ms Hamming window. "KARMA" denotes the state-of-the-art KF-based formant tracking method published in [14]. "DeepF" (DeepFormants) denotes the deep-learning based formant tracking method proposed recently in [16, 18, 51]. It is worth emphasizing that DeepF is based on supervised learning and calls for an annotated speech corpus to be trained. It can be seen from Table IX that the TVLP and TVQCP methods clearly performed better (a 20-60% reduction in error across the three formants) compared to the popular formant tracking methods (Praat and Wavesurfer) that use a two-stage detect-and-track approach. The proposed TVQCP method provided an improvement in the performance (both FDRs and FEEs) of tracking the second and third formants (a reduction in the estimation error of around 30% and 50%, respectively) compared to KARMA. The KARMA method performed slightly better than the TVQCP method (with a relative improvement of around 9%) in tracking the first formant. Compared to DeepF, the proposed TVQCP method provided an improvement in FEEs of around 20%, 21% and 12% for the three formants, respectively. In terms of FDR, DeepF performed slightly better (around 1%) than TVQCP for the first formant. However, for the second and third formants, TVQCP improved the FDR by around 4% and 3%, respectively, compared to DeepF. Differences in performance within the family of time-varying methods were not as evident. However, it can be seen from the results that the use of TVQCP analysis seems to improve the performance of formant tracking. It can also be observed that TVLP-L1 is slightly better than TVLP-L2, and TVQCP-L1 is slightly better than TVQCP-L2. Between TVLP-L1 and TVQCP-L1, TVQCP-L1 is better than TVLP-L1 in both FDRs and FEEs for all three formants (a reduction in the estimation error of around 2%, 3% and 4% for F1, F2 and F3, respectively). A detailed comparison of the formant tracking performance of KARMA, DeepF, TVLP-L1 and TVQCP-L1 is given in Table X for different phonetic categories.
It can be seen that the estimation error of TVQCP-L1 is 15-40% and 25-55% smaller than that of KARMA for \(F_{2}\) and \(F_{3}\), respectively. Likewise, KARMA gave an estimation error that was 1-15% smaller than that of TVQCP-L1 for \(F_{1}\) across the different phonetic categories. In comparison to DeepF, the estimation error of TVQCP-L1 was 7-25%, 9-30% and 13-20% smaller for \(F_{1}\), \(F_{2}\) and \(F_{3}\), respectively (except in semivowels for \(F_{3}\)). The performance of DeepF for \(F_{3}\) in semivowels was better (by around 4%) than that of TVQCP-L1. It can also be observed that the performance of TVLP-L1 for \(F_{2}\) and \(F_{3}\) in semivowels was slightly better (by around 1%) than that of TVQCP-L1. On the other hand, the performance of TVQCP-L1 for \(F_{1}\), \(F_{2}\) and \(F_{3}\) was better (by around 2-6%) than that of TVLP-L1. When all the voiced sounds are considered, the performance of DeepF was better than that of the other methods, reflecting the fact that DeepF benefits from supervised learning of the formant contours in the model training. Note that the reliability of the manually corrected reference ground truth is lower for the other phonetic categories. In view of this, we can argue that the proposed TVQCP method provided the best overall formant tracking performance compared to the popular formant tracking methods.
2309.07764
TGh: A TEE/GC Hybrid Enabling Confidential FaaS Platforms
Trusted Execution Environments (TEEs) suffer from performance issues when executing certain management instructions, such as creating an enclave, context switching in and out of protected mode, and swapping cached pages. This is especially problematic for short-running, interactive functions in Function-as-a-Service (FaaS) platforms, where existing techniques to address enclave overheads are insufficient. We find FaaS functions can spend more time managing the enclave than executing application instructions. In this work, we propose a TEE/GC hybrid (TGh) protocol to enable confidential FaaS platforms. TGh moves computation out of the enclave onto the untrusted host using garbled circuits (GC), a cryptographic construction for secure function evaluation. Our approach retains the security guarantees of enclaves while avoiding the performance issues associated with enclave management instructions.
James Choncholas, Ketan Bhardwaj, Ada Gavrilovska
2023-09-14T14:51:38Z
http://arxiv.org/abs/2309.07764v1
# TGh: A TEE/GC Hybrid Enabling Confidential FaaS Platforms

###### Abstract.

Trusted Execution Environments (TEEs) suffer from performance issues when executing certain management instructions, such as creating an enclave, context switching in and out of protected mode, and swapping cached pages. This is especially problematic for short-running, interactive functions in Function-as-a-Service (FaaS) platforms, where existing techniques to address enclave overheads are insufficient. We find FaaS functions can spend more time managing the enclave than executing application instructions. In this work, we propose a TEE/GC hybrid (TGh) protocol to enable confidential FaaS platforms. TGh moves computation out of the enclave onto the untrusted host using garbled circuits (GC), a cryptographic construction for secure function evaluation. Our approach retains the security guarantees of enclaves while avoiding the performance issues associated with enclave management instructions.

## 1. Introduction

Software and data protected within a Trusted Execution Environment are isolated from a compromised OS, malicious userspace processes, and other malicious TEEs when operating as intended; this is a promise of confidential computing. To fully capture the benefits of TEEs, recent work in industry (Bahdan et al., 2017; Bahdan et al., 2017) and academia (Bahdan et al., 2017; Bahdan et al., 2017; Bahdan et al., 2017) has incorporated these hardware-based security features into systems which make them efficient, easy to consume, and easy to manage. A common challenge for these systems is to amortize the overheads of TEE-based execution. These overheads stem from managing hardware structures when creating the enclave, switching context, and accessing memory which does not fit within the enclave page cache (Shen et al., 2017). Such overheads are reasonable over the lifespan of long-running tasks, using tricks such as batching and reordering I/O, Intel's Switchless Calls, and reducing enclave memory usage (Shen et al., 2017; Shen et al., 2017). However, in the context of short-running and interactive tasks such as in Function-as-a-Service (FaaS), the overhead of trusted execution is quite high (Han et al., 2017). We observe that the BeFaaS benchmark (Bahdan et al., 2017) contains only a small handful of operations per function, a much smaller cost than the 17,000 cycles required just to perform the call to pass data into the enclave (Shen et al., 2017). Existing approaches to address TEE overhead, like HotCalls and Intel's Switchless Calls, do not fix the fundamental issue for short interactive tasks which, by definition, require frequent context switches for I/O and fast starts. In this work, we propose a rather unorthodox approach for trusted execution which we call **TEE/GC Hybrid**. The idea is to bypass TEE hardware inefficiencies by repurposing techniques from Secure Multiparty Computation (MPC). MPC is a field of cryptography used to securely evaluate functions between two mutually distrusting parties (often on physically separate machines), each wanting to compute on the aggregate of their secret data without sharing the data with each other. We re-purpose a specific MPC construction, garbled circuits (GC), to run between the enclave and host (on the same physical machine), while retaining the same security guarantees as native execution on the TEE. The key idea is that the trusted enclave sets up a function for the untrusted host to evaluate.
The host then evaluates the function in userspace without the performance penalties of context switching or the overhead of enclave page cache management, as depicted in Figure 1.

Figure 1. Comparison of confidential computing techniques.

MPC protocols have seen dramatic theoretical improvements in efficiency over the last decade; however, security is not without cost. Secure function evaluation under MPC is orders of magnitude slower than evaluating the same function natively. Intuitively, this would preclude MPC protocols from being useful compared to hardware enclaves; however, we notice that when MPC is used to offload computation in this setting, many simplifications can be made to the protocols. In general, MPC enables collaborative computation between many parties, each of which may have secret data. Computation offload, on the other hand, is a subset of this scenario where only one party (the enclave in this case) has secret data. We apply garbled circuits to this problem such that expensive cryptographic operations like Oblivious Transfer (OT) (Bahdan et al., 2017) are unnecessary, as only one party owns all the secret data. As such, GC is simpler in the configuration we propose for two reasons: the lack of OT, and the colocation of the enclave and host who evaluate the protocol with each other. Using the EMP toolkit library, we measure GC evaluation speed over a LAN to be 5 million AND gates per second, and this increases from 22 million to 35 million gates per second simply by transferring the garbled truth tables over shared memory rather than local loopback. For short-running functions this makes it possible to evaluate their GC faster than the ecall into the TEE, making a TEE/GC hybrid (**TGh**) approach an enabler for confidential FaaS. An important limitation of **TGh** is the constant cryptographic overhead of evaluating garbled circuits. Since every operation under GC is slower except for enclave management related operations like ecalls and EPC evictions, only short-running functions benefit from this approach. Furthermore, functions running under **TGh** need to be reimplemented as boolean circuits such that control flow does not depend on secret data. Not all functions are amenable to this transformation. The remainder of this paper describes our research contribution. We detail a TEE/MPC hybrid approach to trusted execution, supplemented with experiments motivating the envisioned performance properties.

## 2. Background

**TEE/GC Hybrid** consists of two fundamental technologies, Trusted Execution Environments and garbled circuits.

**TEE.** The two major processor designers each have their own implementation of a TEE: Intel with Software Guard Extensions (SGX) and ARM with TrustZone. The set of hardware features collectively called the TEE provides an elevated degree of security for applications. SGX specifically extends the x86-64 ISA to allow applications to instantiate a protected execution environment called an enclave while only trusting the hardware and not system software (hypervisor, OS, frameworks, etc.), with explicit instructions to perform the host-to-enclave switch (ecall) and vice versa (ocall). It also incorporates memory protection. When executing in enclave mode, the processor enforces additional checks on memory accesses, ensuring that only code inside the enclave can access its own enclave region. For other features and details, readers are directed to the SGX and TrustZone specifications (Bradner et al., 2016; Bradner et al., 2016).
**GC.** Garbled circuits were invented by Andrew Yao in 1986. The protocol used by **TGh** to offload computation from the enclave to the host is summarized as Protocol 1 below.

**Protocol 1.**

*Inputs.* TEE \(T\) holds input \(x\) and would like to offload computation of the function \(f(x)\), represented as a boolean circuit, to the untrusted host \(H\).

*Goal.* Host \(H\) computes \(y=f(x)\) without learning anything about \(x\) or \(y\).

*Setup phase.* Before the input to the function is known, \(T\) may begin by generating a standard garbled circuit. We present this in the notation of (Srivastava et al., 2017). (a) \(T\) associates a random mask bit \(\lambda_{\alpha}\in\{0,1\}\) with every wire of the circuit, enabling the point-and-permute technique of (Krause et al., 2017); \(T\) also associates random labels \(L_{\alpha,0}\) and \(L_{\alpha,1}=L_{\alpha,0}\oplus\Delta\) with every wire, enabling the free-XOR technique of (Krause et al., 2017). (b) \(T\) generates a garbled truth table of the form:

| \(\hat{x}\ \hat{y}\) | garbled row |
| --- | --- |
| 0 0 | \(H(L_{\alpha,0},L_{\beta,0},Y,00)\oplus(\hat{z}_{0,0},\,L_{Y,\hat{z}_{0,0}})\) |
| 0 1 | \(H(L_{\alpha,0},L_{\beta,1},Y,01)\oplus(\hat{z}_{0,1},\,L_{Y,\hat{z}_{0,1}})\) |
| 1 0 | \(H(L_{\alpha,1},L_{\beta,0},Y,10)\oplus(\hat{z}_{1,0},\,L_{Y,\hat{z}_{1,0}})\) |
| 1 1 | \(H(L_{\alpha,1},L_{\beta,1},Y,11)\oplus(\hat{z}_{1,1},\,L_{Y,\hat{z}_{1,1}})\) |

\(H()\) is a hash function modeled as a random oracle. AES is commonly used in practice. The labels may be chosen pseudorandomly using the output of a PRF. (c) \(T\) sends all garbled truth tables to \(H\) through shared unencrypted memory pages. Thus, \(H\) learns the masked bits \(\hat{x}=x\oplus\lambda_{\alpha}\) and \(\hat{y}\), as well as the garbled truth tables.

*Online phase.* (a) For every input wire of the circuit, \(T\) sends \(H\) one of the two possible wire labels per gate. Alternatively, a remote client who has deployed enclave \(T\) may send the wire labels to \(H\) directly. This is easy, as the remote client would know seed \(S\) and be able to generate the wire labels without requiring any interaction with \(T\). (b) \(H\) evaluates the garbled circuit as usual. For AND gates, the row indexed by \(\hat{x},\hat{y}\) is decrypted, yielding \(\hat{z}\) and \(L_{Y,\hat{z}}\). XOR gate labels may simply be XOR'ed together as described by (Krause et al., 2017). (c) Upon reaching output gates in the circuit, \(H\) sends the output labels to whomever must learn the output, either \(T\) or a remote client. If the receiver recognizes the labels received from \(H\) as labels assigned to output wires, it learns the plaintext result of the computation. If the receiver does not recognize the labels received, the circuit was not correctly evaluated and the computation is aborted.

In this way, the host can evaluate the function without interacting with the enclave, thereby moving the enclave out of the critical path. In **TGh**, the only purpose of the enclave is to generate the correlated randomness later used by the host to evaluate the garbled circuit. As such, only a single enclave per machine is required, as it may be shared by all clients. As long as clients are convinced of the integrity of the code running inside the enclave via attestation, a single enclave may generate the correlated randomness using a unique seed per client.
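
To make the row structure above concrete, the following is a small, self-contained sketch of garbling and evaluating a single AND gate in the masked-bit style of Protocol 1. It deliberately omits the free-XOR and row-reduction optimizations, uses SHA-256 in place of AES as the random-oracle hash, and all names and parameter choices (such as the 16-byte label length) are illustrative assumptions rather than details of the TGh implementation.

```python
import os
import hashlib

KAPPA = 16  # label length in bytes (illustrative security parameter)

def H(label_a, label_b, gate_id, idx):
    """Hash modeled as a random oracle; real garblers typically use AES."""
    return hashlib.sha256(label_a + label_b + bytes([gate_id, idx])).digest()[:KAPPA + 1]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and(gate_id, lam, lab):
    """Garble one AND gate with input wires 'a', 'b' and output wire 'c'.

    lam[w] is the mask bit of wire w; lab[w] = {0: L_w0, 1: L_w1} its labels.
    The row indexed by the masked inputs (xh, yh) hides the masked output bit
    zh and the matching output label lab['c'][zh].
    """
    rows = {}
    for xh in (0, 1):
        for yh in (0, 1):
            z = (xh ^ lam["a"]) & (yh ^ lam["b"])       # true output bit
            zh = z ^ lam["c"]                           # masked output bit
            payload = bytes([zh]) + lab["c"][zh]
            rows[(xh, yh)] = xor(H(lab["a"][xh], lab["b"][yh], gate_id, 2 * xh + yh), payload)
    return rows

def eval_and(gate_id, rows, xh, label_a, yh, label_b):
    """The evaluator decrypts only the row selected by the masked input bits."""
    payload = xor(rows[(xh, yh)], H(label_a, label_b, gate_id, 2 * xh + yh))
    return payload[0], payload[1:]                      # (zh, output label)

# Toy usage: the garbler picks masks and labels, the evaluator decrypts one row.
lam = {w: os.urandom(1)[0] & 1 for w in "abc"}
lab = {w: {0: os.urandom(KAPPA), 1: os.urandom(KAPPA)} for w in "abc"}
rows = garble_and(7, lam, lab)
x, y = 1, 1                                             # plaintext inputs known to the garbler
xh, yh = x ^ lam["a"], y ^ lam["b"]                     # masked bits handed to the evaluator
zh, out_label = eval_and(7, rows, xh, lab["a"][xh], yh, lab["b"][yh])
assert zh ^ lam["c"] == (x & y) and out_label == lab["c"][zh]
```

In the setting of Protocol 1, the garbler-side steps would run inside the enclave (or be derived from the per-client seed \(S\)), while the evaluation step is what the untrusted host executes over shared memory.
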
## 4. Preliminary Evaluation

**Security.** The security of our TEE/GC hybrid falls to the lowest common denominator of TEEs and GC. Specifically, the hybrid scheme is broken if either the garbled circuit is broken or the TEE is compromised. This is good, as it means the hybrid scheme is just as secure as a TEE. We prove this by contradiction: namely, if an attacker can break the TEE/GC hybrid, they have broken either the garbled circuit or the TEE. Say the untrusted host attacking Protocol 1 learns the plaintext value of an intermediate wire label in the computation beyond a negligible advantage (better than flipping a coin). Considering that the view of the host contains only garbled truth tables, this implies that the host has either broken the garbled circuit and can reverse the block cipher used to generate the garbled circuit with non-negligible probability, or the host has learned this information out-of-band from the TEE. TEE security assumptions, however, are a superset of those of garbled circuits. Both assume block ciphers act as random oracles; TEEs encrypt RAM with AES and garbled circuits build truth tables with it. However, TEEs have a litany of other cryptographic assumptions (Bauer et al., 2017). Since TEEs are strictly weaker than garbled circuits, the TEE/GC hybrid has similar security properties to TEEs alone, and security guarantees have not been weakened by introducing garbled circuits.

**Performance.** Evaluating a garbled circuit outside an enclave is slower than plaintext execution inside an enclave; however, the GC does not need to start an enclave, ecall, or page. Thus, the crux of the performance question we explore is the following: out of a set of instructions, how many must be related to enclave management operations to warrant offloading via **TGh**? The more management operations exist in a set of instructions, the higher the overhead of the TEE, an overhead that can be eliminated by executing those instructions as a GC outside the enclave. Enclave management operations consist of the following: performing ecalls, evicting EPC pages, and creating enclaves. In the remainder of this section we compare enclave operations to the number of AND gates which can be evaluated per second. These numbers were measured from the EMP [30] library running on an Intel Core i9 11th generation CPU, measuring the rate at which EMP can evaluate garbled truth tables. Our results focus on AND gates, as XOR gates can be evaluated without AES and thus are much faster.

_Bypassing ecall overhead:_ According to recent work [33], ecalls on Intel SGX can consume up to 17,000 extra cycles. This includes the direct effects of context switching like flushing caches and TLBs, but it also includes indirect costs of subsequent compulsory cache and TLB misses. They also measure a minimum cost of 8,600 cycles, while other work claims ecalls cost a similar 10,000 cycles [11]. The upper and lower bounds are shown in Figure 2 compared to the constant amount of time it takes to evaluate one AND gate using **TGh**, outside the enclave. The important inflection point in Figure 2 is that for every \(0.7\%\) of ecalls that a series of instructions contains, one garbled AND gate can be evaluated in the same amount of time, outside the enclave. Thus, simple functions which can be represented in a small number of AND gates can theoretically run faster as a garbled circuit, with increasing benefit as the program requires more ecalls. In practice, however, functions rarely consist of such a small number of AND gates; thus, ecall overhead alone is not enough to justify the high overhead of garbled circuits.
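
The break-even argument above can be reproduced with a few lines of arithmetic. The ecall cost and the AND-gate throughput are the figures quoted in the text; the clock frequency is an assumption made only to convert a throughput into a per-gate cycle count, and the function name is illustrative.

```python
def break_even_ecall_fraction(ecall_cycles=17_000, cpu_hz=4.2e9, and_gates_per_s=35e6):
    """Estimate the ecall fraction at which one garbled AND gate costs the same
    as the average instruction of an enclave-resident instruction stream.

    Average cycles per instruction with ecall fraction f (other instructions
    assumed to cost 1 cycle): (1 - f) + f * ecall_cycles.
    """
    and_gate_cycles = cpu_hz / and_gates_per_s            # roughly 120 cycles per AND gate
    f = (and_gate_cycles - 1.0) / (ecall_cycles - 1.0)    # solve (1 - f) + f*C = and_gate_cycles
    return f, and_gate_cycles

f, c = break_even_ecall_fraction()
print(f"~{c:.0f} cycles per garbled AND gate; break-even at ~{100 * f:.2f}% ecalls")
```

With the assumed clock rate, the break-even fraction comes out at roughly 0.7%, consistent with the inflection point quoted for Figure 2.
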
Figure 2. TEE/GC Hybrid can evaluate one AND gate in the same time as a set of instructions made up of \(1\%\) ecalls (assuming non-ecall instructions execute in \(1\) cycle).

_Bypassing EPC eviction overhead:_ Intel SGX has an enclave page cache (EPC) to store metadata about encrypted pages. When the number of pages grows beyond what this data structure can track, pages must be evicted through an expensive process that involves multiple memory accesses to encrypted data. In Intel's SGXv1 the EPC is either 128 MB or 256 MB, while in SGXv2 it can be up to 512 GiB per socket. SGXv2 is a huge step towards enabling enclaves for applications with a large working set of memory; however, in multitenant situations such as in FaaS, even the large EPC size may pale in comparison to the maximum amount of DRAM such a machine could be configured with (and legitimately need). The EPC is shared across all enclaves, raising questions of performance isolation between tenants. Thus, even with the massive increase in EPC memory size in SGXv2, we still consider the performance implications of EPC page evictions, due to the concerns with multitenancy and the fact that SGXv2 is now only available on Xeon server-grade SKUs while support has been dropped for Core series consumer-grade processors. According to Ngoc et al. [11], one EPC page eviction consumes 12,000 cycles. In Figure 3 we can see the inflection point at which the EPC page eviction cost outweighs the cost of garbled circuits. For every \(0.8\%\) of instructions which cause an EPC page eviction, one garbled AND gate can be evaluated in the same amount of time. Thus, a set of instructions causing frequent EPC page evictions runs faster using **TGh** if the function can be represented in a small number of AND gates. However, given the recent increase in EPC size from 128 MB in SGXv1 to 256 GB in SGXv2, it is unlikely that the cost of evictions alone will outweigh the overhead of executing a function as a garbled circuit. As such, EPC page evictions are complementary to more dramatic performance reasons for **TGh**, such as enclave creation.

Figure 3. TEE/GC Hybrid can evaluate one AND gate in the same time as a set of instructions with \(0.8\%\) causing EPC page evictions (assuming non-evicting instructions execute in \(1\) cycle).

_Bypassing enclave creation overhead:_ According to Gjerdrum et al. [13], it takes 300 ms to create a batch of 100 SGX enclaves. The relationship between creation time and batch size is linear; thus, we can expect the creation of a single enclave to take 3 milliseconds. In the same amount of time 111,000 AND gates can be evaluated, or AES can be evaluated under MPC 17 times. Furthermore, LibOS-based approaches to TEE programming further exacerbate startup costs, with simple no-op (return 0;) TEE calls requiring 300 ecalls, 1000 ocalls, and 1000 AEX exits measured on SGXv1 [18], and taking 370 ms measured on SGXv2 [18; 26]. Thus, removing enclave creation from the critical path leaves room to run simple functions at the higher cost of garbled circuit evaluation.

**End-to-End Performance in a FaaS benchmark:** Thus far we have presented the cost of enclave overheads individually compared to garbled circuits, but how do these overheads stack up in a real application? To give real-world context we consider the BeFaaS e-commerce application, which is based on Google's microservice demo [14]. BeFaaS is a collection of functions which implement an online shopping app, one such function allowing a user to check out. To synthetically compare this to **TGh**, we represent the checkout function as a Boolean circuit, then count the number of AND gates to
project how long it would take to execute as a GC. With 2488 AND gates, computing the checkout function is projected to take 70 µs under GC. This number is significant because it is lower than enclave startup time, faster than doing 20 ecalls, and faster than 25 EPC page evictions. Looking beyond applications, we note that the comparisons we have drawn to garbled circuits come from numbers gathered using the EMP library. Semi-honest garbled circuit evaluation in EMP (the specific implementation we are using) is single-threaded. In the GC literature, the dominant cost is network bandwidth, not CPU computation; thus there is no benefit to parallelizing circuit evaluation. In the context of **TEE/GC Hybrid**, however, the channel between the parties instead has much higher bandwidth, as it uses shared memory pages. As such, parallelizing circuit evaluation can significantly improve GC evaluation speed, which is why we refer to **TEE/GC Hybrid** as being accelerator-friendly.

## 5. Discussion

**Benefits:** One benefit of **TGh** is the simplicity of the software which runs in the enclave. Since all the enclave does is generate garbled circuits, the interface between the untrusted host and the enclave is thin, minimizing the attack surface of the enclave code. In **TGh**, the enclave is basically acting as an oblivious PRF which can be evaluated inexpensively but is subject to side-channel leakages compared to pure cryptographic approaches. Another benefit is that multiple mutually distrusting remote clients may use the same enclave, removing enclave creation time from the critical path. Since all the enclave does is generate garbled circuits, the enclave source code may be available for all to inspect, and clients who trust the attestation of enclave authenticity can be confident that sharing an enclave to generate garbled circuits will not reveal their private data. As such, the enclave may act as an orchestrator for many remote clients, evaluating their tasks on the host machine. Each client does not need to pay the cost of enclave creation, nor deal with issues of performance isolation, as all enclaves running on the same host must share the protected enclave page cache (EPC). Low-complexity software running inside the enclave makes it easier for multiple clients to audit and trust that there are no bugs in the enclave's circuit generation code. **TEE/GC Hybrid** also achieves active security without supplemental cryptographic techniques like message authentication codes, commonly used to improve the security guarantees of other protocols (Beng et al., 2016; Beng et al., 2017; Beng et al., 2018). What this means is that the enclave (or the remote client) can tell, by looking at the output from the host, if the host tried to cheat in the protocol. The only messages the host sends (and thus can cheat on) are the output wire labels. The host will obtain at most one out of two output labels, by the nature of garbled circuits, and the only way for the host to obtain that one label is to correctly evaluate all gates up until the output. Thus, if the host does not correctly evaluate the garbled circuit, it will not receive a legitimate label on the output wires. If the receiver (enclave or remote client) receives an invalid label from the host, it knows the host has not correctly evaluated the circuit.
This shows that the only opportunity the host has to cheat is to guess the output wire label it did not learn, with a probability of success of \(1/2^{\sigma}\) under the random oracle assumption, \(\sigma\) being a statistical security parameter. Lastly, we would like to note that garbled circuits are friendly to hardware acceleration. Recent works have accelerated garbled circuits using everything from GPUs (Han et al., 2017) to ASICs (Krishnan et al., 2017). Garbled circuits parallelize to the same degree as the underlying function and mostly consist of repeatedly evaluating AES.

**Limitations:** TEEs are subject to side-channel attacks, unlike MPC protocols (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). One might assume that combining TEEs and MPC may improve security beyond what each offers alone; however, this is not the case. Instead, security guarantees fall to the lowest common denominator; however, as we show, our TEE/MPC hybrid is no weaker than TEEs alone. As proposed, MPC is leveraged to improve the performance of TEE execution, not TEE security guarantees, but future TEE/MPC hybrids may extend beyond performance to security. Furthermore, **TGh** does not hide data access patterns, a goal of recent work using TEEs to build Oblivious RAM (ORAM) schemes (Beng et al., 2017). Additionally, certain MPC protocols like those based on garbled circuits are expensive for tasks with branchy control flow. While recent efforts address the cost of branches under MPC (Krishnan et al., 2017), it has historically required data-oblivious algorithms and predicated execution. Secondly, executing a function as a garbled circuit cannot easily be done with an unmodified binary, as is possible with the LibOS-based approaches discussed later. Thus **TGh** requires additional engineering effort to build functions as circuits to be executed outside the enclave. Furthermore, GC requires data-oblivious algorithms and predicated execution to prevent leaking private data through conditionals. This leads to more engineering effort and, potentially, performance degradation when building functions as circuits.

**Related work.** A popular approach to using enclaves is via a library OS, which supports running an unmodified application binary within an enclave and uses the dynamic linker to capture system calls, which are redirected to the host OS. Prior work reports running an empty enclave (return 0;) on one such system, Graphene, to perform 300 ecalls, 1000 ocalls, 1000 AEX exits, and 1M EPC evictions (Krishnan et al., 2017). This was measured on SGXv1, which does not support dynamic memory allocation; thus, the entire default-sized 4 GiB heap must be preallocated and paged out, which explains the high number of EPC evictions. While SGXv2 does support dynamic allocation and thus does not have this high EPC cost, SGXv2-based platforms still see slow enclave creation times, e.g., 370 ms (Krishnan et al., 2017). In the same amount of time, over 13 million AND gates can be evaluated, corresponding to evaluating AES under MPC 2140 times. Being able to run unmodified binaries within TEEs greatly reduces development overhead but comes at the cost of performance, an especially high cost for short-running tasks. Frequently paying such a cost, for example on function cold starts, highlights the usefulness of our approach.

**Open Questions.** **TGh** is a novel approach to confidential computing and as such it opens many new research questions at the intersection of cryptography and systems.
The most important theme among these questions is scalability and a problem we refer to as the label management problem. It is advantageous to refer to secret data in wire-label form to avoid evaluating a decryption algorithm under MPC for performance reasons, but the wire labels cannot be reused in multiple circuits, as that jeopardizes the security of the garbled circuits. When inputs to functions are sent as garbled circuit wire labels, how are the labels generated using a secret shared across many remote clients and many enclaves? This extends not only to garbled circuit generation but also to storage: how should labels be stored across many untrusted hosts to keep the secret values consistent without reusing labels in multiple circuits? Furthermore, how can functions be chained across machines when each is fed by enclaves with different PRF seeds? Can wire soldering protocols be used between circuits generated by different enclaves (Bartos et al., 2016; Bartos et al., 2017; Bartos et al., 2017; Bartos et al., 2017)?

## 6. Summary

In this work we propose a method to evaluate short-running, interactive functions associated with FaaS platforms using confidential computing. Our method moves function evaluation out of the enclave and onto the untrusted host using our TGh protocol. We motivated the need to pull the enclave out-of-band by showing that the enclave overhead for short-running tasks is often greater than the task itself. We then argued that the security guarantees of doing so are the same as TEEs alone, and lastly considered the performance implications.
2309.08457
Sim-to-Real Brush Manipulation using Behavior Cloning and Reinforcement Learning
Developing proficient brush manipulation capabilities in real-world scenarios is a complex and challenging endeavor, with wide-ranging applications in fields such as art, robotics, and digital design. In this study, we introduce an approach designed to bridge the gap between simulated environments and real-world brush manipulation. Our framework leverages behavior cloning and reinforcement learning to train a painting agent, seamlessly integrating it into both virtual and real-world environments. Additionally, we employ a real painting environment featuring a robotic arm and brush, mirroring the MyPaint virtual environment. Our results underscore the agent's effectiveness in acquiring policies for high-dimensional continuous action spaces, facilitating the smooth transfer of brush manipulation techniques from simulation to practical, real-world applications.
Biao Jia, Dinesh Manocha
2023-09-15T15:03:54Z
http://arxiv.org/abs/2309.08457v1
# Sim-to-Real Brush Manipulation using Behavior Cloning and Reinforcement Learning ###### Abstract Developing proficient brush manipulation capabilities in real-world scenarios is a complex and challenging endeavor, with wide-ranging applications in fields such as art, robotics, and digital design. In this study, we introduce an approach designed to bridge the gap between simulated environments and real-world brush manipulation. Our framework leverages behavior cloning and reinforcement learning to train a painting agent, seamlessly integrating it into both virtual and real-world environments. Additionally, we employ a real painting environment featuring a robotic arm and brush, mirroring the MyPaint virtual environment. Our results underscore the agent's effectiveness in acquiring policies for high-dimensional continuous action spaces, facilitating the smooth transfer of brush manipulation techniques from simulation to practical, real-world applications. ## I Introduction Painting, an art form rich in diversity and complexity, has been an integral part of human culture throughout history. It encompasses a wide range of styles, from delicate watercolor scenes to intricate Chinese ink landscapes and detailed oil portraits. In recent decades, there has been a concerted effort to simulate these diverse artistic styles using non-photorealistic rendering techniques, including stroke-based and painterly rendering approaches [1, 2]. While these methods have produced impressive results, they often rely on manual engineering, limiting their ability to create entirely novel styles. Recent advances in machine learning have revolutionized image recognition and synthesis, opening up new possibilities for creative tasks such as painting. Machine learning techniques have been applied to various aspects of painting, including brush modeling [3], generating brush stroke paintings in specific artist styles [4], and constructing stroke-based drawings [5]. Other approaches leverage generative adversarial networks [6] and variational autoencoders [7] to emulate artistic styles [8, 9, 10, 11, 12]. In this paper, we focus on a more general and challenging problem of training a natural media painting agent from scratch using reinforcement learning methods. Our goal is to develop an agent that is able to perform a sequence of primitive drawing actions to produce a target output. Given a reference image, our painting agent aims to reproduce the identical or transformed version of that image in the simulated and real environment. We present a novel automated painting framework that employs a painting agent trained through reinforcement learning for natural media painting. The primary objective of our painting agent is to faithfully reproduce a given reference image, either identically or in a transformed manner, in both simulated and real-world environments. In the simulated environment, our model can acquire complex painting policies through reinforcement learning. In the real environment, we have developed a method to transfer the learned policies while Fig. 
1: **Demonstration of the Learned Model’s Artistic Versatility**, showcasing a wide range of artistic styles achieved through real robot painting (**a: Robot setup with Realsense D415, UltraArm robot, and paintbrush; b: Water pot; c: Ink pot from the egocentric view from the camera mounted on the end-effector**) and digital painting outputs (**d: Painting in simulated environment; e: Painting using setup (a) with various brushes within the MyPaint [1] virtual environment**). Reference image courtesy of KanjiVG [2]. This demonstrates the effective transferability of the painting policy to a real environment, enabling the generation of various artistic styles. preserving their artistic capabilities. The contributions of our work include: * The introduction of a novel deep reinforcement learning network meticulously designed for learning natural painting media within a simulated environment. Our approach exhibits the versatility to learn with or without human supervision and excels in navigating continuous high-dimensional action spaces, enabling it to effectively handle large and intricately detailed reference images. * The development of an adaptive sim-to-real methodology tailored for deformable brushes. This methodology capitalizes on behavior cloning to initialize policies for painting tasks, facilitating the seamless transfer of learned policies from simulation to reality. * A real painting environment featuring a robotic arm and brush, which corresponds to the MyPaint virtual environment. This real-world setup allows us to undertake complex artistic endeavors, including painting various subjects. We have rigorously evaluated our results using a diverse set of reference images, spanning a wide range of artistic styles, as illustrated in Figure 1. This evaluation encompassed both simulated and real robot setups. Our virtual painting agent exhibits the capability to generate high-resolution outputs tailored to different painting media. Concurrently, our real robot adeptly replicates the subtleties of these references across a spectrum of artistic styles. Through this implementation, we aim to provide a robust and practical solution for high-degree-of-freedom end-effector manipulation tasks. Our method is meticulously designed to discern and adapt to the intricate relationships between actions and environmental changes. ## II Related Work ### _Learning-based Drawing_ There have been several attempts to address related problems in this domain. Xie et al. [3, 4, 13] proposed a series of works to simulate strokes using reinforcement learning and inverse reinforcement learning. These approaches learn a policy either from reward functions or expert demonstrations. Unlike our goal, Xie et al. [3, 4, 13] primarily focus on designing reward functions for generating oriental painting strokes, and their methods require expert demonstrations for supervision. Recently, Ha et al. [5] collected a large-scale dataset of millions of simple sketches of common objects with the corresponding recording of painting actions. Based on this dataset, a recurrent neural network model is trained in a supervised manner to encode and re-synthesize action sequences, and the trained model is shown to be capable of generating new sketches. Following [5], Zhou et al. [9] exploit reinforcement learning and imitation learning to reduce the amount of supervision needed to train such a sketch generation model. 
Distinct from [5, 9], our painting agent operates in a complex SSPE with a continuous action space involving brush width and color, and our approach learns its policy network completely without human supervision. ### _Visual Generative Methods_ Visual generative methods typically directly synthesize visual output in pixel spaces, which is fundamentally distinct from our approach. Image analogies by Hertzmann et al. [14] solve this problem by introducing a non-parametric texture model. More recent approaches, based on CNNs and using large datasets of input-output training image pairs, learn the mapping function [15]. Inspired by the idea of variational autoencoders [7], Johnson et al. [16] introduced the concept of perceptual loss to implement style transferring between paired datasets. Inspired by the idea of generative adversarial networks (GANs) [6], Zhu et al. [8] learn the mapping without paired training examples using Cycle-Consistent Adversarial Networks. These methods have been successful at generating natural images [11, 12], artistic images [17], and videos [18, 19]. In terms of the final rendering, current visual generative methods can produce results in various painting styles using a limited training dataset. However, compared to our method, these generative methods may fail to achieve high-resolution results. For the purpose of interactive artistic creation, the stroke-based approach can generate trajectories and intermediate painting states. Another advantage of the stroke-based method is that the final results are trajectories of the paintbrush, which can be deployed in different synthetic natural media painting environments and real painting environments using robot arms. ### _Reinforcement Learning-based Painting Methods_ In the development of robotic painting algorithms, various approaches have been investigated. In significant work by Lee et al. [20], a hierarchical reinforcement learning (RL) model was proposed for painting tasks, where a high-level controller learns the painting policy and a low-level manipulator adapts to the deformation of the brush. This dual-layered approach has been a critical reference point for our research. However, in our proposed method, we have prioritized efficiency and higher-dimensional control. Our model is capable of managing sophisticated control strategies, including the adjustment of pressure, stroke width, and depth. Other studies, such as those by Chen et al. [21], El et al. [22], and Vempati et al. [23], have focused on learning low-level manipulation policies to tackle challenges presented by uneven painting surfaces. We also incorporate these strategies into our method, illustrating its versatility and adaptability. A distinctive feature of our approach, compared to these studies, is that our method does not require explicit environmental modeling. Consequently, our algorithm exhibits broader applicability in real-world scenarios and a wider range of painting tasks, marking a significant contribution to the field of robotic painting algorithms. ## III Training a Painting Policy In this section, we delve into the technical details of our painting agent based on reinforcement learning. We begin by introducing the fundamental components of reinforcement learning, encompassing the action space, observation, reward, and policy network. 
Subsequently, we elucidate the intricacies of our training and runtime algorithms, along with methodologies aimed at enhancing learning efficiency, including curriculum learning, difficulty-based sampling, and self-supervised learning. ### _Policy Representation_ The policy of our painting agent encompasses the definition of actions, observations, rewards, and the architecture of the policy network. The action space characterizes the degrees of freedom of the painting agent, representing the output of the policy network. Observations capture the state of the painting process, serving as input to the policy network. The reward function quantifies the effectiveness of painting actions in achieving the desired configuration, as determined by the environment. The policy network's structure dictates the technical implementation of the machine learning approach. #### Iii-A1 Action Space To capture the essence of painting behavior, we represent actions using stroke properties, including angle, length, size, and color. Specifically, we define the action as a 6-dimensional vector, \(a_{t}=[\alpha_{t},l_{t},w_{t},c_{rt},c_{gt},c_{bt}]\in\mathbb{R}^{6}\), with each value normalized to \([0,1]\). The action space is continuous, enabling us to employ policy gradient-based reinforcement learning algorithms. Notably, when \(w=0\), the brush moves above the canvas without applying paint. #### Iii-A2 Observation Our approach extends the observation \(o_{t}\) of the painting state to encompass the reference image \(s^{*}\) as part of the observation, defined as \(o_{t}=\{s_{t},p_{t}\}\). This inclusion enables the model's generalization across different reference images. In all our experiments, both the reference image and the canvas are encoded as observations, representing the current state and the goal state of the agent. Fig. 3: _Behavior Cloning for Policy Initialization:_ We utilize a behavior cloning algorithm to train the policy, extending the action space to initialize the reinforcement learning (RL) policy within a real environment setup. The action space used in behavior cloning is a subspace of the RL action space and includes direction and on/off canvas actions. This initialization process bridges the gap between behavior cloning and RL, facilitating effective policy learning in the real environment. Fig. 2: _Overview of Training/Rollout Process:_ For each time step, the current state of the canvas and the reference image form the observation for the policy network. Based on the observation, the policy network selects an action to execute and update the canvas accordingly. 
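To make the rollout process of Fig. 2 concrete, below is a minimal sketch (an illustration added here, not the authors' implementation) of the observe-act-render loop with the 6-dimensional action vector and an egocentric crop around the brush position; `policy` and `render_stroke` are hypothetical stand-ins for the trained policy network and the MyPaint renderer:

```python
import numpy as np

def egocentric_crop(img, pos, win):
    """Crop a (win x win) window of `img` centered at the brush position (no padding)."""
    h, w = pos
    half = win // 2
    return img[h - half:h + half, w - half:w + half]

def rollout(policy, render_stroke, reference, steps=50, win=64):
    """Observe -> act -> render loop sketched after Fig. 2 (hypothetical helpers)."""
    canvas = np.ones_like(reference)            # blank canvas
    pos = np.array(reference.shape[:2]) // 2    # brush starts at the center
    for _ in range(steps):
        # Observation o_t: egocentric views of the canvas and the reference image.
        obs = np.stack([egocentric_crop(canvas, pos, win),
                        egocentric_crop(reference, pos, win)])
        # Action a_t = [angle, length, width, r, g, b], each in [0, 1];
        # width 0 means the brush moves without applying paint.
        action = policy(obs)
        canvas, pos = render_stroke(canvas, pos, action)
    return canvas
```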
\begin{table} \begin{tabular}{l l} \hline \hline Symbol & Meaning \\ \hline \(t\) & step index \\ \(s_{t}\) & current painting state of step \(t\), canvas \\ \(s^{*}\) & target painting state, reference image \\ \(s^{*}\) & reproduction of \(s^{*}\) \\ \(o_{t}\) & observation of step \(t\) \\ \(a_{t}\) & action of step \(t\), \(a_{t}=[\alpha_{t},l_{t},w_{t},c_{t}]\) \\ \(r_{t}\) & reward of step \(t\) \\ \(q_{t}\) & accumulated reward of step \(t\) \\ \(\gamma\) & discount factor for computing the reward \\ \(p_{t}\) & position of the paintbrush of step \(t\) \\ \hline \(\pi\) & painting policy, predict \(a\) by \(o\) \\ \(V_{\pi}\) & value function of the painting policy, \\ & predict \(r\) by \(o\) \\ \(R(a_{t},s_{t})\) & render function, render action to \(s_{t}\) \\ \(O(s^{*},s_{t})\) & observation function, encode the current \\ & state and the target state \\ \(L(s,s^{*})\) & loss function, measuring distance between \\ & state \(s\) and objective state \(s^{*}\) \\ \hline \(\alpha_{t}\) & angle of action \(a_{t}\) \\ \(l_{t}\) & length of action \(a_{t}\) \\ \(w_{t}\) & stroke width of action \(a_{t}\) \\ \(c_{t}\) & color descriptor of action \(a_{t}\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Notation Summary We tackle the challenge of incorporating positional information by adopting an egocentric observation strategy. In this strategy, the paintbrush remains centered on the canvas, with the canvas and reference image adjusted accordingly. This approach simplifies the action space, eliminates the need for a replay buffer, and renders training in a continuous action space and large state space feasible. The state observation \(o_{t}\) is defined in Eq. 1, where \((h_{p},w_{p})\) denote the 2D position of the paintbrush, and \((h_{o},w_{o})\) represent the size of the egocentric window. \[\begin{split} o_{t}=&\left\{s_{t}\left[h_{p}-\frac{ h_{o}}{2}:h_{p}+\frac{h_{o}}{2},w_{p}-\frac{w_{o}}{2}:w_{p}+\frac{w_{o}}{2} \right]\right.,\\ &\left.s^{*}\left[h_{p}-\frac{h_{o}}{2}:h_{p}+\frac{h_{o}}{2},w_{ p}-\frac{w_{o}}{2}:w_{p}+\frac{w_{o}}{2}\right]\right\}.\end{split} \tag{1}\] This definition of observation allows us to incorporate the paintbrush's position and enables the generalization of training data. We illustrate our rollout algorithm in Algorithm 1. #### Iii-C3 Reward In our setup, the reward for each action is determined by the difference between the canvas and the reference image. A loss function is employed to calculate the action's reward during each reinforcement learning iteration. To incentivize the painting agent to match the color and shape of the reference image precisely rather than aiming for an average color, we slightly modify the \(L_{2}\) loss into \(L_{\frac{1}{2}}\), \[L_{\frac{1}{2}}(s,s^{*})=\frac{\sum_{i=1}^{h}\sum_{j=1}^{w}\sum_{k=1}^{c}|s_{ ijk}-s_{ijk}^{*}|^{\frac{1}{2}}}{hwc}, \tag{2}\] where the image \(s\) and the reference image \(s^{*}\) are matrices with dimensions \(h\times w\times c\). Here, \(w\) and \(h\) denote the width and height of the image, while \(c\) represents the number of color channels. After defining the loss between \(I\) and \(I^{ref}\), we normalize \(r_{t}\) using Eq. 3, such that \(r_{t}\in(-\infty,1]\). \[r_{t}=\frac{L(s_{t-1},s^{*})-L(s_{t},s^{*})}{L(s_{0},s^{*})} \tag{3}\] #### Iii-C4 Policy Network The first hidden layer applies convolution with 64 \(8\times 8\) filters and a stride of 4. 
The second layer employs convolution with 64 \(4\times 4\) filters and a stride of 2, followed by the third layer using convolution with 64 \(3\times 3\) filters and a stride of 1. Subsequently, the network connects to a fully-connected layer comprising 512 neurons. All layers employ the ReLU activation function [24]. #### Iii-C5 Curriculum Learning Given the continuous action space \(a\in\mathbb{R}^{6}\), the sampling space can grow significantly as the number of time steps increases. Moreover, policy gradient-based reinforcement learning algorithms may introduce noise that overwhelms the signal. To efficiently train the model, we adopt a curriculum learning approach, wherein the number of sampled trajectories increases during training episodes. Consequently, the agent can learn policies incrementally and generate relatively long strokes compared to models trained without this technique. The agent tends to seek rewards greedily within the limited time steps. Another primary challenge arises from the bias among different samples. In conventional RL tasks, the goal is typically fixed. In our case, however, the reference image must change to prevent overfitting. To overcome this challenge, we implement difficulty-based data sampling. In reinforcement learning, the optimal policy \(\pi^{*}\) maximizes the expected long-term reward \(q_{t}\), which accumulates rewards \(r_{t}\) over a time horizon \(t_{\max}\) of steps, incorporating a discount factor \(\gamma\in\mathbb{R}\), \[q_{t}=\sum_{t=1}^{t_{\max}}r_{t}\gamma^{t}, \tag{4}\] where \(t_{\max}\in\mathbb{Z}\) represents the maximum number of steps for each trial. For a painting policy, numerous goal configurations are sparsely distributed across a high-dimensional space, posing challenges for the convergence of the agent's learning process. We adapt the horizon parameter \(t_{\max}\) by introducing a reward threshold \(r_{\text{thresh}}\) and gradually increasing it during training as: \[\hat{t}_{\max}=\operatorname*{arg\,min}_{i}(r_{i}>r_{\text{thresh}}). \tag{5}\] With this redefined horizon parameter, the policy gradient algorithm can efficiently converge when dealing with a set of complex goal configurations. This encourages the policy to seek rewards greedily within limited time steps, thus reducing the exploration space. ## IV Sim-to-Real Brush Manipulation In this section, we will provide a detailed explanation of the methods employed for sim2real transfer from the painting policy in Section III. The objective is to seamlessly transfer the painting policy learned in simulation to real-world robotic drawing tasks. This transfer is essential for achieving high-quality brush manipulation and stroke control in real-world scenarios. To effectively control the shape of strokes and ensure precise interactions between the brush and various painting media, such as ink, water, and foam, it is imperative to estimate pressure accurately. Pressure plays a pivotal role in determining the thickness and texture of strokes, significantly impacting the quality of artwork produced by the robot. Unlike traditional methods that rely on force sensors, our approach leverages advanced modeling and image analysis techniques to estimate pressure, making it suitable for a wide range of applications where force sensing may not be feasible. In our practical experiments, we adopted a hybrid approach, as outlined in Fig. 3. Initially, we utilized flexible end-effector image capture to determine the optimal pressure range. 
Subsequently, we employed a stroke image sampling technique to establish a precise mapping. Regarding the policy we have acquired, it can be deconstructed into two distinct components: the high-level and low-level policies. The high-level policy is trained through behavior cloning, enhancing the standardization of stroke order, particularly in the context of handwriting. In contrast, the low-level policy is developed using an efficient sampling-based reinforcement learning methodology. This policy functions as a mapping mechanism, translating the original reinforcement learning low-level policy into tangible actions within the real-world environment. ### _Contact Force Estimation_ Accurately estimating the contact force between the pen tip and the painting media is a crucial aspect of robotic brush manipulation. However, precise force sensors are often unavailable. Therefore, we employ image analysis methods to infer pressure values. #### Iii-A1 Observation of Stroke Images This approach involves indirectly observing environmental changes, specifically the stroke images on the paper, to infer variations in pressure. It is an intuitive method where we record the shape of strokes and the configuration of the robotic arm. We can then interpolate to obtain the desired stroke characteristics. However, finding a suitable arm configuration is not straightforward. Similar to training reinforcement learning (RL) in simulation, this method requires extensive sampling, with many instances yielding no positive rewards due to the limited deformation range of the brush. #### Iii-A2 Observation of End-Effector Images In contrast to observing stroke images, this method offers a more direct approach. It involves capturing the shape changes of the flexible end effector. While this method may be susceptible to image noise, it provides valuable information about the pressure limit of the flexible object. We utilize linear fitting to identify the point at which deformation no longer occurs, treating it as the pressure limit. ### _Mapping Actions from Simulation to Reality_ In Section III, we defined actions in a simulated environment, which may differ from the actions required in the real-world environment. Therefore, we need to map robot actions from the simulated environment's action space to the robot's configuration space in the real world. The first challenge is that the painting plane in the simulated environment differs from the real robot environment. Therefore, we need to find a 2D plane in the 3D configuration space to serve as the painting space. The action mapping formula is computed similarly to the camera's extrinsic calibration. The second challenge arises because certain actions cannot be directly translated into robot movements but still have a limited visual effect. These include: 1. Stroke thickness, which can only be adjusted by changing the brush's contact force. 2. Color, which, in our setup, is limited to monochrome. Color changes are achieved through interactions with the environment, such as dipping in ink, water, or interacting with a sponge. 3. Tilt, which our 3-DoF robot cannot directly achieve due to limited kinematics. To approximate these effects, we employ the following methods: #### Iii-A1 Gaussian Modeling of Strokes The key to achieving artistic font treatment is to emulate the stroke characteristics of human artists. To accomplish this, we use Gaussian modeling for each stroke. 
This model captures the distribution of the stroke's centroid and pressure, allowing us to generate artistic fonts with various styles. Fine-tuning these parameters enables us to create different types and styles of strokes, achieving font diversity. #### Iii-A2 2D to 3D Action Projection To match the actions from the simulated environment to the real robot's configuration space, we need to project 2D actions into a 3D configuration space. This projection can be defined using the following equation, which is similar to a camera's extrinsic calibration projection: \[\begin{bmatrix}x_{\text{robot}}\\ y_{\text{robot}}\\ z_{\text{robot}}\end{bmatrix}=\begin{bmatrix}R&T\\ 0&1\end{bmatrix}\begin{bmatrix}x_{\text{painting}}\\ y_{\text{painting}}\\ 1\end{bmatrix}\] Here, \(x_{\text{robot}}\), \(y_{\text{robot}}\) and \(z_{\text{robot}}\) represent the robot's coordinates. \(x_{\text{painting}}\) and \(y_{\text{painting}}\) are the desired painting coordinates in 2D space. The transformation matrix \(\begin{bmatrix}R&T\\ 0&1\end{bmatrix}\) maps the 2D Fig. 4: _Effect of Gaussian Stroke Model on Stylization:_ We model a long stroke composed of segments using a Gaussian distribution. These correspond to the first column in Fig. 1, where variations in artistic style are achieved by adjusting the Gaussian parameters. painting coordinates to the 3D robot configuration, allowing us to generate actions that correspond to the desired painting locations and orientations in the real world. ## V Behavior Cloning Behavior cloning leverages a paired dataset comprising observations and corresponding actions to train a policy to mimic expert trajectories or behaviors. In our context, the expert trajectory is encoded in the paired dataset \(\{o_{(t)},a_{(t)}\}\). We employ behavior cloning to initialize the policy network for reinforcement learning, using the supervised policy trained with the paired data. The paired dataset can be generated by a human expert or an optimal algorithm with global knowledge, which our painting agent lacks. Once we obtain the paired dataset \(\{o_{(t)},a_{(t)}\}\), one common approach is to apply supervised learning based on regression or classification to train the policy. The training process can be formulated as an optimization problem: \[\pi^{*}=\arg\min\sum_{t}^{N}||\pi(o_{t})-a_{t}||. \tag{6}\] Generating an expert dataset for our painting application can be challenging due to the significant variation in reference images and painting actions. However, we can create a paired dataset by rolling out a policy during the RL training process. Additionally, there are existing datasets like KanjiVG and Google's Quick, Draw! that provide paired supervised data [25, 26]. ## VI Experiment ### _Setup_ For our simulated painting setup, we created an environment that allows the painting agent to explore a high-dimensional action space and observation space based on MyPaint [27]. For the real brush manipulation experiment, we implement our approach using an UltraArm, which features 3 DoFs for movement as shown in Fig. 1. The primary experimental setup includes a water pot and foam, allowing the robot to manipulate a paintbrush by absorbing water, squeezing it, or using the object to reshape it. This setup serves to demonstrate that our method can effectively learn the complexity of high DoF end-effector manipulation tasks in a practical and realistic scenario. 
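To connect the action mapping of Sec. IV-B with this physical setup, here is a minimal sketch (an illustration added here, with an assumed, uncalibrated plane pose `R`, `T` rather than the real UltraArm calibration) of projecting a 2D painting coordinate into the robot's 3D workspace:

```python
import numpy as np

# Assumed plane pose for illustration only: the columns of R are the painting
# plane's x/y axes expressed in the robot frame, T is the plane origin (meters).
R = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
T = np.array([0.20, 0.00, 0.05])

def paint_to_robot(x_paint, y_paint):
    """Apply the affine map [R T; 0 1] to (x_paint, y_paint, 1) as in Sec. IV-B."""
    return R @ np.array([x_paint, y_paint]) + T

print(paint_to_robot(0.10, 0.05))  # -> [0.30 0.05 0.05]
```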
By incorporating the water pot and foam into the experimental setup, we introduce additional challenges that the robot must learn to overcome. These include controlling the amount of water absorbed by the paintbrush, adjusting the pressure applied when squeezing or reshaping the brush, and maintaining a stable grip on the brush throughout the manipulation process. These added complexities showcase the adaptability and effectiveness of our approach in handling diverse manipulation tasks involving deformable materials and intricate interactions with the environment. ### _Data Preparation_ For our real-robot experiments, we selected the KanjiVG dataset [25] for training. This dataset provides detailed stroke information for approximately 2,000 distinct characters, and every character is complemented with associated painting actions, depicted in Fig. 5, columns 1 and 3. Because the data was collected from human participants, it is a natural choice for behavior cloning in robotic calligraphy. For our reinforcement learning (RL) strategy, we used the CelebA dataset [28] to train our painting agent. Our rollout algorithm was built on MyPaint [27] so that the results mirror the characteristics of natural media; the nuances of the painting model are distilled implicitly from the knowledge embedded in the environment model. The versatility and robustness of our algorithm are showcased in Fig. 1. ### _Evaluation_ We demonstrated the advantages of our approach by measuring quantitative performance and comparing visual results. We designed three experiments to evaluate the performance of our algorithms. For the first experiment, we computed the learning curve of the baseline model and the model with curriculum learning (Sec. III-A5), as shown in Fig. 6. Both models converged within \(78,000\) episodes. The y-axis denotes the average reward of the trained model on a validation dataset, and the x-axis denotes the training episodes. As training proceeded, the average reward grew, showing that curriculum learning helps reinforcement learning converge to a better policy. For the second experiment, we evaluated performance on high-resolution reference images. We computed the \(L_{2}\) loss and cumulative rewards and compared our approach with behavior cloning, reinforcement learning, and a combined approach. We drew \(1000\) patches of size \(400\times 400\) from 10 reference images to construct the benchmark. Moreover, we iteratively applied
Fig. 5: Illustration of KanjiVG's labeled data used for behavior cloning (Column 1,3) and a result generated by a 3-DoF robot (Column 2,4).
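For completeness, a minimal sketch (an illustration added here, not the authors' code) of the \(L_{\frac{1}{2}}\) loss of Eq. (2) and the normalized reward of Eq. (3) used in these comparisons, with random arrays standing in for the canvas and reference image:

```python
import numpy as np

def l_half(s, s_star):
    """L_{1/2} loss of Eq. (2): mean of |s - s*|^(1/2) over pixels and channels."""
    return np.mean(np.sqrt(np.abs(s - s_star)))

def step_reward(s_prev, s_curr, s_init, s_star, loss=l_half):
    """Normalized reward of Eq. (3): loss improvement relative to the initial loss."""
    return (loss(s_prev, s_star) - loss(s_curr, s_star)) / loss(s_init, s_star)

rng = np.random.default_rng(0)
s_star = rng.random((64, 64, 3))        # stand-in reference image
s0 = np.ones_like(s_star)               # blank canvas
s1 = 0.5 * s0 + 0.5 * s_star            # canvas after a hypothetical stroke
print(step_reward(s0, s1, s0, s_star))  # positive: the canvas moved toward s*
```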
2309.09991
Proof of the Collatz Conjecture by Collatz Graph
The 3n+1 problem, or Collatz problem, is extremely simple to state yet extremely hard to solve. A number of Collatz graphs have been presented to visualize the Collatz sequences. The Collatz graph is grown bottom-up using the inverse relation: if n is the Collatz functional value of m, then n is connected to m. The concept is simple; the tree-based graphs indeed provide a path from a given seed n down to the root, the number 1, and demonstrate that the generated Collatz sequences eventually converge to 1. However, in the general case, due to their irregular structure, no one has yet proved the completeness of the Collatz graphs. By completeness we mean that the Collatz graph contains all positive integers n. This paper proves the Collatz conjecture by constructing a Collatz graph with a regular structure. The developed Collatz graph consists of Collatz nodes located at various levels of the graph. In the developed graph, each node consists of all positive integers m which have the functional value n. A set of simple, yet efficient connection rules is also developed to construct the graph. Results show that the developed Collatz graph generates the Collatz trajectories for all positive integers and that the sequences converge to 1. This proves the completeness of the developed Collatz graph and the Collatz conjecture.
Chin-Long Wey
2023-09-15T19:13:06Z
http://arxiv.org/abs/2309.09991v2
# Proof of Collatz Conjecture ###### Abstract The \(3n+1\) problem is an extremely simple to state, extremely hard to solve, problem. An interconnection dynamical system model, Component Connection Model, is presented to model the \(3n+1\) problem. The model consists of a set of equations describing the component dynamics and component interconnections. This paper attempts to prove both Collatz conjecture and Syracuse conjecture. Syracuse conjecture is a _2N+1_-version of Collatz conjecture, where _2N+1_ is the positive odd integers. _2N+1_ is partitioned into 4 disjoint sets yielding that the Syracuse sequences contain the terms either with the values of \(6t+1\) or \(6t+5\), and the seed of the sequence can be any positive odd integers. Two incoming term matrices are developed to describe the system components, while the component interconnections are described by a connection tree. The connection tree shows that all Syracuse sequences and Collatz sequences can eventually reach the number of 1. This proves that both conjectures are true. _Key words:_ Collatz Conjecture, Syracuse Conjecture, Component Connection Model, Dynamical System, Component Dynamics, Component Interconnection. ## 1 Introduction Let \(N\) = \(\{0,1,2,\cdots\}\) and _N+1_ = \(\{1,2,\cdots\}\) denote the natural numbers and the positive integers, respectively, while _2N+1_ = \(\{1,3,5,\cdots\}\) and _2N+2_ = \(\{2,4,6,\cdots\}\) are the positive odd and even integers, respectively. The _3n+1 problem_, or _Collatz problem_, is one the hardest math problems, yet still unsolved. The Collatz sequence is \(COL(n)\)=\(\{n=n_{0},n_{1},\cdots,n_{c}\}\), for all \(n_{i}\in\emph{2N+1}\). \(n\) is called the _seed_ and \(n_{i}\) is called the _term_, where the Collatz function is defined as \(n_{i+1}=Col(n_{i})=3n_{i}+1\), if \(n_{i}\) is odd, and \(n_{i+1}=Col(n_{i})=n_{i}/2\), if \(n_{i}\) is even, \(i=0,1,\cdots,c\). The unsolved 3n+1 problem is to prove or disprove that the sequences always eventually reach the number of 1. Syracuse conjecture is a (_2N+1_)-version of Collatz conjecture. The Syracuse sequence \(SYR(n)=\{n=n_{0},n_{1},\cdots,n_{s}\}\), \(n_{i}\in\emph{2N+1}\), and \(n_{i+1}=Syr(n_{i})=(3n_{i}+1)/2^{d}\), \(d\in\emph{2N+1}\). The extensive surveys and historical discussion of Collatz and Syracuse conjectures can refer [1-5]: In a Collatz sequence, if \(m\notin COL(n)\) and \(n=Col(m)\), m is called the _incoming term_ of n. \(COL(m)=\{m,n=n_{0},n_{1},n_{2},\cdots,n_{c}\}\) is also a Collatz sequence and COL(n) is a sub-sequence of COL(m). _The 3n+1 problem is an extremely simple to state, extremely hard to solve, problem [1]_. This paper presents a system model, namely, _Component Connection Model (CCM)_, to describe the 3n+1 problem. The CCM of a dynamic system consists of a set of equations describing the component dynamics and component interconnections [6-8]. The system input vector u yields the system output response vector y, where \(a_{i}\) and \(b_{i}\) are the component input and output vectors, respectively. For the 3n+1 problem, the input u is any positive integer n, and the output is y=1 if the 3n+1 problem is true. If \(a_{i}\) is the _present term_, then \(b_{i}\) is the _next term_, and \(b_{i}=Col(a_{i})\) or \(b_{i}=Syr(a_{i})\). The term \(b_{i}\) is linked to the next \(a_{i+1}\) though the connection block. Note that the components in the CCM can be either functions, modules, or subsystems. A simple component model may cause very complicated component interconnections. 
Conversely, are used as the components. There exists a design trade-off between component and connection complexities. Most of research works on \(3n+1\) problem focus on modelling the component by a simple Syracuse or Collatz function causing extremely complicated component connections. In this paper, from the observation of \(Syr(1)=1,Syr(5)=1,Syr(21)=1,\cdots\), the numbers \(1,5,21,\cdots\), are called the _incoming terms_ of 1 and they have the same functional value of 1. Let \(m=1,V(m)=4m+1\), and \(Syr(V^{p}(m))=1,p\in\textbf{{N}}\). If a component is modelled by a module which consists of \(Syr(m),m,V(m),\cdots,V^{p}(m)\), the complexity of the connection model can be reduced significantly. In the next section, the properties of the incoming term matrices are presented. The construction of the incoming term tree is discussed in Section 3. The Syracuse sequences generated from the tree is also proven to be free of non-trivial cycles and also bounded. In addition, all components will find a path to reach the component connected with the trivial cycle. This asserts the Syracuse conjecture to be true. Section 4 proves the Collatz conjecture. Finally, summary and concluding remarks are given in Section 5. ## 2 Incoming Term Matrices for Positive Odd Integers This section presents the component model and connection model of Syracuse sequences. Consider a Syracuse sequence \(SYR(n)=\{n=n_{0},n_{1},n_{2},\cdots,n_{s}\}\), the seed \(n=n_{0}\in\textbf{{2N+1}}\), and the values of the remaining terms are either \(6t+1\) or \(6t+5\), where \(t\in\textbf{{N}}\). ### Component Model The positive odd integers _2N+1_ is partitioned into 4 disjoint sets \(\{8q+d\}\), d=1,3,5,7, and \(q\in N\), i.e., \(\{8d+1\}\cup\{8d+3\}\cup\{8d+5\}\cup\{8d+7\}=\textbf{{2N+1}}\). Because of \(\{8q+3\}\cup\{8q+7\}=\{4q+3\}\) and \(\{12q+5\}\cup\{12q+11\}=\{6q+5\}\), their Syracuse functional values are \(Syr(4q+3)=(3(4q+3)+1)/2=6q+5\), \(Syr(8q+1)=(3(8q+1)+1)/4=6q+1\), and \(Syr(8t+5)=(3n+1)/2^{d}\), as shown in Figure 2(a). **Lemma 2.1**.: _Let \(S_{a}(t)=Syr(8t+a)\), a=1, 3, 5, 7. \(S_{5}(4t)=S_{1}(t)\); \(S_{5}(4t+1)=S_{3}(t)\); \(S_{5}(4t+2)=S_{5}(t)\); and \(S_{5}(4t+3)=S_{7}(t)\)._ Proof.: \(S_{5}(t)=Syr(8t+5)=(3(8t+5)+1)/2^{r}=(24t+16)/2^{r}\)=\((3t+2)/2^{r-3}\in 2N+1\) only if (i) \(r=3\) and t is odd, or (ii) \(r=4\) and t is even. Cases (i) and (ii) yield \(S_{5}(t)=3t+2\) and \(S_{5}(t)=3t/2+1\), respectively. Thus, \(S_{5}(4t)=3(4t)/2+1=6t+1=S_{1}(t)\); \(S_{5}(4t+1)=3(4t+1)+2=12t+5=S_{3}(t)\); and \(S_{5}(4t+3)=12t+11=S_{7}(t)\). For \(S_{5}(4t+2)=(3(4t+2)+2)/2^{r-3}\)=\((3t+2)/2^{r-5}\in 2N+1\) only if (c) \(r=5\) and t is odd, or (d) \(r=6\) and t is even. In both cases, \(S_{5}(4t+2)=S_{5}(t)\). **Remark 2.2**.: _By Lemma 2.1, \(\{S_{5}(t)\}\subseteq\{S_{1}(t)\}\cup\{S_{3}(t)\}\cup\{S_{7}(t)\}\), i.e., the values of \(S_{5}(t)\) are nothing but just the duplicated values of \(S_{a}(t)\), a=1,3,7. This leads to develop two incoming matrices \(\{I_{a}(p,q)\}\), a=1,5. First, \(I_{1}(0,q)=8q+1\) and \(I_{5}(0,q)=4q+3\), then \(I_{a}(p+1,q)=4I_{a}(p,q)+1\), \(p\in\textbf{{N}}\)._ Figure 1: System mode: Component Connection Model. **Remark 2.3**.: _Table A (In Appendix) shows the sample describing \(\{S_{5}(t)\}\subseteq\{S_{1}(t)\}\cup\{S_{3}(t)\}\cup\{S_{7}(t)\},\) for interesting readers._ **Lemma 2.4**.: _Let V(m)=4m+1, U(m)=(m-1)/4, Q(m)=4m+2_ 1. \(V^{p}(m)=(m+1/3)4^{p}-1/3\) _and_ \(U^{p}(m)=(m+1/3)/4^{p}-1/3\)_;_ 2. 
\(Q^{p}(t)=(t+2/3)4^{p}-2/3.\)__ Proof.: (1) \(V(m)=4m+1=(m+1/3)*4-1/3,\) and \(V^{2}(m)=4((m+1/3)*4-1/3)+1=(m+1/3)*4^{2}-1/3\). Assuming that \(V^{p-1}(m)=(m+1/3)4^{p-1}-1/3\), \(V^{p}(m)=V(V^{p-1}(m))=(m+1/3)*4^{p}-4/3+1=(m+1/3)*4^{p}-1/3\); \(U(m)=(m-1)/4\), \(V(U(m))=m\), \(V^{p}(U^{p}(m))=m=(U^{p}(m)+1/3)4^{p}-1/3\), i.e., \(U^{p}(m)=(m+1/3)/4^{p}-1/3\); (2) \(Q(m)=4m+2=(m+2/3)*4-2/3\), similar to (1), \(Q^{p}(m)=(m+2/3)4^{p}-2/3\); **Lemma 2.5**.: _Let \(Q(t)=4t+2\), if \(m=8t+5\), \(Syr(m)=S_{5}(t)=S_{5}(Q^{p}(t))\)._ Proof.: In the proof of Lemma 2.1, \(S_{5}(4t+2)=(3t+2)/2^{r}-5\in\)_2N+1_ only if (i) \(r=5\) and t is odd, or (ii) \(r=6\) and t is even. In both cases, \(S_{5}(4t+2)=S_{5}(t)\), i.e., \(S_{5}(Q(t))=S_{5}(t)\). Assuming that \(S_{5}(Q^{p-1}(t))=S_{5}(t)\). Therefore, \(S_{5}(Q^{p}(t))=S_{5}(Q^{p-1}(Q(t)))=S_{5}(Q(t))=S_{5}(t)\). Thus, \(Syr(m)=S_{5}(t)=S_{5}(Q^{p}(t))\). **Lemma 2.6**.: _Let \(Q(t)=4t+2\), if \(m=8t+5\), \(Syr(m)=S_{5}(t)=S_{5}(Q^{p}(t))\)._ Proof.: In the proof of Lemma 2.1, \(S_{5}(4t+2)=(3t+2)/2^{r}-5\in\)_2N+1_ only if (i) \(r=5\) and t is odd, or (ii) \(r=6\) and t is even. In both cases, \(S_{5}(4t+2)=S_{5}(t)\), i.e., \(S_{5}(Q(t))=S_{5}(t)\). Assuming that \(S_{5}(Q^{p-1}(t))=S_{5}(t)\). Therefore, \(S_{5}(Q^{p}(t))=S_{5}(Q^{p-1}(Q(t)))=S_{5}(Q(t))=S_{5}(t)\). Thus, \(Syr(m)=S_{5}(t)=S_{5}(Q^{p}(t))\). **Theorem 2.7**.: _If m is incoming term of n, \(n=Syr(m)\), and \(m=8t+5\), then \(n=Syr(m)=Syr(V^{p}(m))\), for \(p\in N\), where \(V(m)=4m+1\)._ Proof.: \(V(m)=4m+1\), \(V^{p}(m)=(m+1/3)4^{p}-1/3=(8t+5+1/3)4^{p}-1/3=8(t+2/3)4^{p}-1/3=8[(t+2/3)4^{p} -2/3]+5\), by Lemma 2.4(2), \(V^{p}(m)=8Q^{p}(t)+5\), where \(Q(t)=4t+2\). Thus, \(Syr(V^{p}(m))=Syr(8Q^{p}(t)+5)\). By Lemma 2.5, if \(m=8t+5\), \(Syr(V^{p}(m))=S_{5}(Q^{p}(t))=S_{5}(t)=Syr(m)\). Figure 2: Syracuse sequences: (a) Partitioned sets; Incoming term matrices, \(I_{a}(p,q)\), a=1, 5; and (c) Contents of incoming term matrices \(I_{a}(p,q)\). **Remark 2.8**.: _In \(\{I_{a}(p,q)\}\), let \(m=I_{a}(0,q)\), \(V(m)=4m+1=4I_{a}(0,q)+1=I_{a}(p+1,q)\). By Lemma 2.7, \(I_{1}(p,q)=V^{p}(8q+1)=[(6q+1)4^{p+1}-1]/3\) and \(I_{5}(p,q)=V^{p}(4q+3)=[(6q+5)4^{p+1}-2]/6\). This yields that \(I_{a}(p,q)=5(mod8)\), for \(p\leq 1\). By Theorem 2.6, \(Syr(I_{1}(p,q))=Syr(V^{p}(8q+1))=Syr(8q+1)=6q+1\), and \(Syr(I_{5}(p,q))=Syr(V^{p}(4q+3))=6q+5\) for any \(p\in\textbf{N}\)._ **Theorem 2.9**.: _Consider the matrices \(\{I_{a}(p,q)\},\) a=1 and 5,_ 1. \(\{I_{1}(p,q)\}\cup\{I_{5}(p,q)\}\) _=_ 2N+1_, for_ \(p,q\in\textbf{N}\)_, and_ \(\{I_{1}(p,q)\}\cap\{I_{5}(p,q)\}=\phi\)_;_ 2. \(\{I_{1}(p,q)\}\cup\{I_{5}(p,q)\}=\{8t+5\},p\in\textbf{N+1}\)_._ Proof.: (1) Let \(R(p,q)=\{I_{1}(p,q)\}\cup\{I_{5}(p,q)\}\). For p=0, \(R(0,q)=\{8q+1\}\cup\{4q+3\}=\{2q+1\}-\{8q+5\},R(1,q)=\{32q+5\}\cup\{16q+13\}=\{8 q+5\}-\{32q+21\}\), thus \(R(0,q)\cup R(1,q)=\{2q+1\}-\{32q+21\}\). Let \(m=8t+5\), by Lemma 2.4(1), \(V(m)=4m+1\), \(V(8t+5)=32q+21\). i.e., \(R(0,q)\cup R(1,q)=\{2q+1\}-\{V(m)\}\). Similarly, \(R(2,q)=\{128q+21\}\cup\{64q+53\}=\{32q+21\}-\{128q+85\}\), and \(R(0,q)\cup R(1,q)\cup R(2,q)=\{2q+1\}-\{V^{2}(m)\}.\) For any r, \(R(0,q)\cup R(1,q)\cup\cdots\cup R(r,q)=\{2q+1\}-\{V^{r}(m)\}\) approaches to \(\{2q+1\}\) as r goes to infinity. Therefore, \(R(p,q)=\{I_{1}(p,q)\}\cup\{I_{5}(p,q)\}=\{2q+1\}=\textbf{2N+1}\). \(\{Syr(I_{a}(p,q))\}=\{6q+a\},a=1,5,\{6q+1\}\cap\{6q+5\}=\phi\) implies that \(\{I_{1}(p,q)\}\cap\{I_{5}(p,q)\}=\phi\). 
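As an informal numerical check of Theorem 2.7 (added here for illustration; it is not part of the proof), the following sketch computes \(Syr(m)\) directly from the definition and verifies that \(Syr(V^{p}(m))=Syr(m)\) for several incoming terms \(m=8t+5\):

```python
def syr(n):
    """Syracuse map: compute 3n+1 and divide out all factors of 2, returning an odd number."""
    assert n % 2 == 1
    n = 3 * n + 1
    while n % 2 == 0:
        n //= 2
    return n

def V(m):
    """V(m) = 4m + 1, the next incoming term in the same column of I_a(p, q)."""
    return 4 * m + 1

for t in range(5):
    m = 8 * t + 5
    target = syr(m)
    x = m
    for p in range(6):
        assert syr(x) == target       # Syr(V^p(m)) == Syr(m)
        x = V(x)
    print(f"m = {m:3d}: Syr(V^p(m)) = {target} for p = 0..5")
```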
(2) \(R(p,q)-R(0,q)=(\textbf{2N+1})-R(0,q)=(\textbf{2N+1})-(\{4q+3\}\cup\{8q+1\}= \{8q+5\})\), i.e., \(R(p,q)=\{8t+5\}\) Figure 2(c) shows the two incoming term matrices \(\{I_{a}(p,q)\},a=1,5\). Each column of \(\{I_{a}(p,q)\}\) represents a component which includes \(I_{a}(0,q)=m,I_{a}(1,q)=V(m),I_{a}(r,q)=V^{r}(m)\), and so on. All \(I_{a}(p,q)\) are the incoming terms of \(c_{x}\), where \(Syr(V^{p}(m))=c_{x}=6t+a\), for \(p\in\textbf{N}\). By Theorem 2.9, \(\{I_{1}(p,q)\}\cup\{I_{5}(p,q)\}=\textbf{2N+1}\), and \(\{I_{1}(p,q)\}\cup\{I_{5}(p,q)\}=\{6t+1\}\cup\{6t+3\}\cup\{6t+5\}\). This implies that the entries in both \(\{I_{1}(p,q)\}\) and \(\{I_{5}(p,q)\}\) are with the value of \(6t+1\), or \(6t+3\), or \(6t+5\). As shown in Figure 2(c), the entries with the value in grey are \(6t+5\), those in boldface are \(6t+1\), and the remaining entries are with \(6t+3\). For any \(n\in\textbf{2N+1}\), there exists one and only one entry (p,q) such that \(6t+a=I_{b}(p,q)\), for \(a,b\in\{1,5\}\). Figure 3(a) shows the component model. The component represents a column of \(\{I_{a}(p,q)\}\). All \(I_{a}(p,q)\) are the incoming terms of \(c_{x}\). By Theorem 2.7, \(Syr(V^{p}(m))=c_{x}=6t+a\), for \(p\in\textbf{N}\). The node \(c_{x}\) connects to the internal input node \(c_{y}\) of the next component, where \(c_{x}=6t+1=c_{y}=I_{b}(p,q),b\in\{1,5\}\). Figure 3(b) presents the component \(\{I_{1}(p,0)\}\), the column q=0 of \(\{I_{1}(p,q)\}\), its node \(c_{x}\) connects a trivial cycle \(\{1,1,\cdots\}\). Figure 3(c) describes the component \(I_{5}(p,0)\) which connects to node with the value of 5 in the component \(I_{1}(p,0)\). The component connects will discussed in the following subsection. ### Connection Model This subsection presents the connection model, including the connection rule, construction of the connection tree, and the algorithm of finding a convergent path. The connection rule is summarized in Theorem 2.10. Figure 3: Component description: (a) a Component; (b) Interconnection of component corresponding to \(I_{1}(p,0)\) in Level 0; and (c) Interconnection of component corresponding to \(I_{5}(p,0)\) in Level 1. **Theorem 2.10**.: _The internal input nodes of the component corresponding to Ia(p,q), a=1, 5, are connected as follows:_ 1. _If_ \(I_{a}(p,q)=3\pmod{6}\) _it has no external connections;_ 2. _If_ \(I_{a}(p,q)=1\pmod{6}\)_, it is connected by_ \(I_{1}(p,t_{1})\)_, where_ \(t_{1}=(n-1)/6\)_;_ 3. _If_ \(I_{a}(p,q)=5\pmod{6}\)_, it is connected by_ \(I_{5}(p,t_{5})\)_, where_ \(t_{5}=(n-5)/6\)_;_ Proof.: If \(I_{a}(p,q)=3\pmod{6}\), i.e., \(I_{a}(p,q)\) is a multiple of 3 serving as the seed and has no external connection. If \(n=I_{a}(p,q)=1\pmod{6}\), i.e., \(n=6t_{1}+1\), or \(t_{1}=(n-1)/6\), then \(I_{a}(p,q)\) is connected by the component corresponding to \(I_{1}(p,t_{1})\). Similarly, \(n=I_{a}(p,q)=5\pmod{6}\), i.e., \(n=6t_{5}+5\), or \(t_{5}=(n-5)/6\), then \(I_{a}(p,q)\) is connected by the component corresponding to \(I_{5}(p,t_{5})\). The component connection tree is constructed starting from the component corresponding \(I_{1}(p,0)\), at Level \(\sharp 0\), which connects to the trivial cycle, as shown in Figure 3(b), where \(I_{1}(p,0)=1,5,21,85\), for \(p=0,1,2,3\), and \(Syr(I_{1}(p,0))=1\) for \(p\in\boldsymbol{N}\). Because of \(I_{1}(1,0)=5=5\pmod{6},t_{5}=(n-5)/6=0\). 
By Theorem 2.10(3), \(I_{1}(1,0)\) is connected by \(I_{5}(p,0)\), as shown in Figure 3(b); Because of \(I_{1}(2,0)=21\) and \(I_{1}(5,0)=1365\) are multiples of 3, they are marked by black nodes, and have no connection; \(I_{1}(3,0)=85=1\pmod{6},t_{1}=(n-1)/6=14\), is connected by \(I_{1}(p,14)\). Similarly, Figure 3(c) presents the component \(I_{5}(p,0)\). \(I_{5}(1,0)=13\) is connected by \(I_{1}(p,2)\); \(I_{5}(2,0)=53\) is connected by \(I_{5}(p,8)\); \(I_{5}(4,0)=853\) is connected by \(I_{1}(p,142)\); and so on. The component \(I_{1}(p,0)\) located at Level \(\sharp 0\) is connected by \(I_{5}(p,0),I_{1}(p,14),I_{5}(p,56)\), and many more in Level \(\sharp 1\). The component \(I_{5}(p,0)\) is connected by \(I_{1}(p,2),I_{5}(p,8),I_{1}(p,142)\), \(\cdots\) ; the component \(I_{5}(p,14)\) is connected by \(I_{5}(p,18),I_{1}(p,302),I_{5}(p,1208)\), \(\cdots\) ; and the component \(I_{1}(p,142)\) is connected by \(I_{5}(p,37),I_{5}(p,3637),I_{5}(p,2424)\), \(\cdots\), and many more. These components are located at Level \(\sharp 2\). After constructing the components in Level \(\sharp 2\), the components in Level \(\sharp 3\) are constructed in a similar way. The construction process can be extended as many levels as wished. Note that the numbers under the components are the connection point \(c_{x}=6t+a\), or a term of a Syracuse sequence. Figure 4: Component connection tree. Based on the connection rules, the components at Level \(\sharp r\) is connected only by the components at Level \(\sharp(r+1)\) and will never accept any connections from other levels. Each component at Level \(\sharp r\) can be connected by many components at Level \(\sharp(r+1)\), however, each input of a component at \(\sharp r\) is connected only one component from Level \(\sharp(r+1)\). As mentioned, the numbers under those components in the connection tree are the terms of a Syracuse sequence. Thus, the connection tree can be used to generate the Syracuse sequences. Let \(SYR(n)=\{n=n_{0},n_{1},n_{2},\cdots,n_{s}\}\) be a Syracuse sequence. If the present term is \(n_{i}\), the next term \(n_{i+1}\) is generated as follows, The operation of \(m=(n_{i}+1/3)/4^{p}-1/3\), or \(m=U^{p}(n_{i})\), solve for m and p, can be easily executed by repeatedly computing \((m-1)/4\) and increment p by 1 at each cycle before \(m\notin 2\textbf{N+1}\). Based on Algorithm \(\mathrm{SyrGen}\), in Figure 5, the Syracuse sequence \(SYR(35)=\{35,53,5,1\}\) is generated as an example, and the path is \(35\to I_{5}(p,8)\to I_{5}(p,0)\to I_{1}(p,0)\to\) trivial cycle, as shown in Figure 4, highlighted by boldface lines. _Step 1_. \((35\to I_{5}(p,8))\) Given \(n=35,x=3,a=5,p=0\), and \(m=n=35.x=3?5,q=(35-3)/4=8,n=35\) connects to \(I_{a}(p,q)\), i.e., \(I_{5}(p,8)\), and \(n_{1}=6q+a=6*8+5=53\); _Step 2_: \((53\to I_{5}(p,0))\) Because of \(m\neq 1,q=(m-3)/4=0\), this yields that 53 connects to \(I_{5}(p,0)\) and \(n_{2}=6q+5=5\); _Step 3_: \((5\to I_{1}(p,0)\to\)trivial cycle) \(n_{2}=5,x=5,a=5,p>0,(5-1)/4=1,p=1\), and \(m=1\). Because of \(x=5\) and \(m=1,n_{2}=5\) connects to \(I_{1}(p,0),n_{3}=1\) is followed by the trivial cycle. ### Completeness of Connection Tree This subsection is to prove the completeness of the connection tree, i.e., all components in the incoming term matrices \(\{I_{a}(p,q)\}\), are included by the connection tree, where \(\{I_{1}(p,q)\}\cup\{I_{5}(p,q)\}=2\textbf{N+1}\). 
The completeness of the connection tree will Let \(m=J_{ab}(x,y),a,b\in\{1,5\}\), denote the component \(I_{a}(p,m)\) connects to \(I_{b}(p,y)\) at p=x, where \(J_{ab}(x,y)=(I_{a}(x,y)-b)/6\). Table 1 shows the matrices \(\{J_{a}(x,y)\},a=1,5\), where the boldface numbers indicate the entries with the values of \(6t+1\), while the numbers in grey are with \(6t+5\). The empty entries are those with \(6t+3\). Therefore, \(J_{a1}(x,y)\) means the entries with the values of \(6t+a\) in \(\{J_{1}(x,y)\},a=1,5\), and \(J_{a5}(x,y)\) indicates the entries with the values of \(6t+a\) Figure 5: Algorithm \(\mathrm{SyrGen}\). in \(\{J_{5}(x,y)\}\). For example, in level \(\sharp 1\), the component \(I_{1}(p,14)\) connects to \(I_{1}(3,0)\), by definition, \(m=14,a=1,b=1,p=x=3\), and \(y=0\), i.e., \(J_{11}(3,0)=(I_{1}(0,0)-1)/6=(85-1)/6=14\). The entry \(J_{11}(3,0)=14\), or \(J_{1}(3,0)=14\) with the number in boldface. The closed-form of four matrices \(\{J_{ab}(x,y)\}\) are shown in Table 1. **Theorem 2.11**.: _The closed forms of \(J_{ab}(x,y)\) are_ 1. \(J_{11}(p,q)=4^{p+1}k+2[(6y+1)*4^{p}-1]/9\)_; and_ \(J_{51}(p,q)=4^{p+1}k+2[(6y+1)*4^{p}-4]/9\)_;_ 2. \(J_{15}(p,q)=2*4^{p}k+[(6y+5)*4^{p}-2]/9;\) _and_ \(J_{55}(p,q)=2*4^{p}k+[(6y+5)*4^{p}-8]/9\)_._ Proof.: \(I_{1}(p,q)=[(6q+1)*4^{p+1}-1]/3,q=y\pmod{3}\), i.e., \(q=3k+y\); and \(I_{5}(p,q)=[(6q+5)*4^{p+1}-2]/6\), \(I_{1}(p,q)=I_{1}(p,3k+y)=[(6(3k+y)+1)4^{p+1}-1]/3=(6k+2y)4^{p+1}+(4^{p+1}-1)/3\); \(J_{11}(p,q)=(I_{1}(p,q)-1)/6=[6k*4^{p+1}+2y*4^{p+1}+(4^{p+1}-1)/3-1]/6=k*4^{p+1 }+[6y*4^{p+1}+4^{p+1}-1-3]/18=4^{p+1}k+2[(6y+1)4^{p}-1]/9\); \(J_{51}(p,q)=((I_{1}(p,q)-1)-5)/6=[6k*4^{p+1}+2y*4^{p+1}+(4^{p+1}-1)/3-5]/6=k*4^{ p+1}+[6y*4^{p+1}+4^{p+1}-1-15]/18=4^{p+1}k+2[(6y+1)*4^{p}-4]/9\); \(I_{5}(p,q)=I_{5}(p,3k+y)=[(6(3k+y)+5)*4^{p+1}-2]/6=3*4^{p+1}k+[(6y+5)*4^{p+1}- 2]/6;\) \(J_{1}5(p,q)=[(I_{5}(p,q)-1)]/6=3*4^{p+1}k+[(6y+5)*4^{p+1}-2]/6-1/6=2*4^{p}k+4[(6 y+5)*4^{p}-2]/36=2*4^{p}k+[(6y+5)*4^{p}-2]/9\); \(J_{5}5(p,q)=[(I_{5}(p,q)-5)]/6=3*4^{p+1}k+[(6y+5)*4^{p+1}-2]/6-5/6=2*4^{p}k+4[(6 y+5)*4^{p}-8]/36=2*4^{p}k+[(6y+5)*4^{p}-8]/9\). For proving the completeness of the connection tree, one must assert that both \(\{J_{11}(p,q)\}\cup\{J_{15}(p,q)\}=\textbf{N}\) and \(\{J_{51}(p,q)\}\cup\{J_{55}(p,q)\}=\textbf{N}\) are true. The former indicates that all components with the values of \(6t+1\) are included by the tree, while the latter one implies that all components with \(6t+5\) are also included. Including the components with \(6t+3\), all components \(\{6t+1\}\cup\{6t+3\}\cup\{6t+5\}=\textbf{2N+1}\), i.e., all positive odd integers are included. Table B (in Appendix) lists the \(\{J_{ab}(p,q)\}\) matrices with \(q=0\sim 15\) for the interesting readers **Theorem 2.12**.: \(J_{11}(p,q)\cup J_{15}(p,q)=\textbf{N};and\ J_{51}(p,q)\cup J_{55}(p,q)= \textbf{N}.\)__ Proof.: Let \(X(p)=\{J_{11}(p,q)\}\) and \(Y(p)=\{J_{15}(p,q)\}\). \(\{2k\}=\{4k\}\cup\{4k+2\}=X(0)\cup\{4k+2\}\), and \(\{4k+2\}=\{8k+2\}\cup\{8k+6\}=Y(1)\cup\{8k+6\}\), i.e., \(\{2k\}=X(0)\cup Y(1)\cup\{8k+6\}.\{8k+6\}=\{16k+6\}\cup\{16k+14\}=X(1)\cup\{16k+14\},\{2k\}=X(0)\cup Y(1)\cup X(1)\cup\{16k+14\}\). The operation is repeatedly processed, \(\{2k\}=X(0)\cup(X(1)\cup Y(1))\cup\cdots\cup(X(r)\cup Y(r))\cup\cdots=\{J_{11}(p,q)\}\cup\{J_{15}(p,q)\}-Y(0)\), i.e., \(\{J_{11}(p,q)\}\cup\{J_{15}(p,q)\}=\{2k\}\cup\{2k+1\}=\textbf{N}\). 
Similarly, let \(X(p)=\{J_{51}(p,q)\}\) and \(Y(p)=\{J_{55}(p,q).\{2k\}=\{4k\}\cup\{4k+2\}=X(0)\cup\{4k\}=X(0)\cup Y(1)\cup X (1)\cup\{16k+8\}=\cdots=\{J_{51}(p,q)\}\cup\{J_{55}(p,q)\}-Y(0)\), i.e., \(J_{51}(p,q)\cup J_{55}(p,q)=\textbf{N}\). ### Non-trivial Cycle-free and boundedness of Syracuse Sequences To prove that the Syracuse conjecture is true, the issues of non-trivial cycle-free and boundedness of Syracuse sequences are also very import to assert the convergence of Syracuse sequences. **Lemma 2.13**.: _Let \(n_{r}=I_{a}(p_{r},q)\) and \(n_{t}=I_{a}(p_{t},q)\) be located at the same column q of \(\{I_{a}(p,q)\}\). If \(n_{t}\) is a term of \(SYR(n)\), then \(n_{r}\) will never be a term of \(SYR(n)\)._ Proof.: Suppose that \(n_{r}\) is a term of \(SYR(n)\), without loss of generality, let \(t=r+k\) and \(p_{r}=p_{t}+d,d\in\textbf{N+1}\). By Lemma 2.4, \(n_{t}=U^{d}(n_{r})=((n_{r}+1/3)/4^{d}-1/3)\), thus, \(n_{t}=((n_{r}+1/3)/4^{d}-1/3)=Syr^{k}(n_{r})\approx(3/4)^{k}n_{r}\). Let k=d, i.e., \(n_{r}+1/3=3^{d}n_{r}\), this implies that \(n_{r}\notin\textbf{2N+1}\), i.e., \(n_{r}\notin SYR(n)\). **Lemma 2.14**.: _Let \(n_{t}=I_{1}(0,0)=1\) and \(n_{r}=I_{1}(p_{r},0)\), both are located at the same column \(\{I_{1}(p,0)\}\), a Syracuse sequence may include both terms \(n_{r}\) and \(n_{t}\)._ Proof.: Let \(p_{r}=p_{t}+d,d\in\textbf{N+1}\), by Lemma 2.4, \(n_{t}=U^{d}(n_{r})=(n_{r}+1/3)/4^{d}-1/3)=1\), i.e., \(n_{r}+1/3=(4/3)4^{d}\), or \(n_{r}=(4^{d+1}-1)/3\). Thus, \(Syr(n_{r})=(3n_{r}+1)/4^{pr+1}=(3((4^{d+1}-1)/3)+1)/4^{pr+1}=1\), where \(d=p_{r}\), i.e., \(Syr(n_{r})=n_{t}\). The Syracuse sequence \(SYR(n)=\{n=n_{0},n_{1},\cdots,n_{r},n_{t}=1,1,\cdots\}\) contains a trivial cycle. **Theorem 2.15**.: _Syracuse sequences are free of non-trivial cycles._ Proof.: By Lemma 2.13, if both \(n_{r}=I_{a}(p_{r},q)\) and \(n_{t}=I_{a}(p_{t},q)\) are located at the same column q of \(\{I_{a}(p,q)\}\), and if \(n_{t}\in SYR(n)\), then \(n_{r}\notin SYR(n)\). Any Syracuse sequence never has two terms included by the same column of the incoming term matrices \(\{I_{b}(p,q)\}\), except \(I_{1}(p,0)\). By Lemma 2.14, A Syracuse sequence may contain two to incoming term from the same column of \(\{I_{1}(p,0)\}\), which generates the trivial cycle, not non-trivial cycle. Note that a sequence may end with \(\{\cdots,5,1\}\) or \(\{\cdots,21,1\}\), where (5,1) and (21,1) are located at \(I_{1}(p,0)\), the cycles exist. By Lemma 2.14, these cycles are trivial cycles, not non-trivial cycles. Based on the connection rules, the components at Level \(\sharp r\) is connected only by the components at Level \(\sharp(r+1)\) and will never accept any connections from other levels. It is virtually impossible for a component to be enabled again to produce another duplicated output for the sequence. By Theorem 2.15, the Syracuse sequences contain no non-trivial cycles. Regarding the boundedness of the Syracuse sequences, based on the connection rules and the connection tree, for any finite integer \(n\in\textbf{2N+1}\), there exists one and only component, say component A, locate at Level \(\sharp r\). Since the tree does not contain the non-trivial cycles, by the connection rules, Component A connects to a component located at Level \(\sharp(r-1)\). Further, component generates an output (another term of sequence) and connects to the component in Level \(\sharp(r-2)\). 
The procedure is repeatedly processed until the component located at Level \(\sharp 0\) is reached, the Syracuse sequence is generated and converged to the number of 1. In other words, for any finite positive odd integer n, there exists a component in the connection tree and a path starting from the component to the component located at Level \(\sharp 0\) and the generated sequences converges to 1. At each path, the terms of the generated sequence may be up and down, but the terms eventually converge to 1. Thus, the generated Syracuse sequences are bounded. ### Proof of Syracuse Conjecture **Theorem 2.16**.: _The Syracuse sequence SYR(n) converges to 1, for all \(n\in\textbf{2N+1}\)._ Proof.: By Theorem 2.12, the completeness of the connection tree indicates that, for any finite positive odd integer n, there exists a component located at Level \(\sharp r\) for n. The connection tree will find a path for n from Level \(\sharp r\) down to the component at Level \(\sharp 0\). This implies the generated Syracuse sequence converges to the number of 1. This proves the Syracuse conjecture. ## 3 The Proofs of Collatz Conjecture The previous section has proven the Syracuse conjecture, i.e., all Syracuse sequences are non-trivial cycle-free and bounded sequences and they are converged to the number of 1. In this section, the success of Syracuse conjecture using the component connection model is extended to prove the Collatz conjecture. ### Component Connection Model of Collatz Sequence The components in Collatz sequences are modelled as shown in Figure 5. The component has the odd-numbered inputs/output similar to the component model in Figure 3(a) for Syracuse sequences. The odd-number inputs are multiplied by 3 and plus one, i.e., \(Col(n)=3n+1\). The result \(m=Col(n)\), even number, perform \(Col(m)=m/2\). If \(Col(m)\) is odd, it goes to node \(c_{x}\), otherwise, the division operation is repeatedly processed, and the even-numbered quotients are sequentially sent to the Even-numbered Output (ENO) node. The component also allows to apply all positive even integers as the seeds of the sequences. The positive even integer is applied to the Even-number Input (EVI) node. The EVI node feeds the number to the divider to derive the outputs to ENO node. Figure 5(b) shows the component description of \(I_{5}(p,0)\), the odd-number inputs/output are exactly the same as that in Figure 3(c). If the input \(I_{1}(1,2)=13\) of the component for \(I_{5}(p,0)\) is enabled. \(3n+1=40\) is simultaneously sent to the divider and the ENO node., i.e., the next term of 13 is 40. The procedure, in turns, generates 20, 10 and 5. Since 5 is odd, it is sent to \(c_{x}\) node and to complete the sequence generation in this component model, where the sequence is \(13\to 40\to 20\to 10\to 5\). On the other hand, if n=40 as an input (seed), even integer, ENI node is enabled and simultaneously sends n to the divider and the ENO. Then, \(m=n/2=40/2=20\), the following procedure is exactly the same as described above. Consider the Syracuse sequence \(SYR(35)=\{35,53,5,1\}\), the Collatz sequence COL(35) is generated as follows, \(n=35=I_{5}(0,8),m=3n+1=106\), and \(n_{1}=53=I_{5}(2,0)\); \(m_{1}=3n_{1}+1=160,m_{2}=80,m_{3}=40,m_{4}=20,m_{5}=10\), and \(n_{2}=5=I_{1}(1,0),m_{6}=16,m_{7}=8,m_{8}=4,m_{9}=2\), and \(n_{3}=1\). 
The Collatz sequence is \[COL(35)=\{35,106,53,160,80,40,20,10,5,16,8,4,2,1\}.\] The connection tree for Syracuse sequences has shown that, for a given positive odd integer as a seed of the sequence, there exists a component located at Level \(\sharp r\) and the connection model will guide a path down to the component located at Level \(\sharp 0\). The path generates a Syracuse sequence which converges to 1. Similarly, the component connection model for Collatz sequences uses the same connection tree. Given a positive even integer as a seed of a Collatz sequence, there exists a component located at Level \(\sharp r\), whose even-numbered input (ENI) is enabled to take the seed and produces an odd-numbered output which connects to the component in Level \(\sharp(r-1)\). Similarly, the connection model will also guide the path down to the component at Level \(\sharp 0\) and cause the Collatz sequence to converge to 1. The next step is to prove the completeness of the connection tree, i.e., whether or not all positive even integers are included.

Figure 6: Component model for Collatz sequences: (a) Component model; and (b) Component model for \(I_{5}(p,q)\).

### The Proof of the Collatz Conjecture

To prove the completeness, it is necessary to assert that COL(n) converges to 1 for all \(n\in\textbf{2N+2}\) and for all \(n\in\textbf{2N+1}\); this yields \(n\in(\textbf{2N+2})\cup(\textbf{2N+1})=\textbf{N+1}\), which proves the Collatz conjecture.

**Lemma 3.1**.: _Let \(m=Col(n)\). Then \(COL(m)\) converges to 1 if and only if \(COL(n)\) converges to 1._

Proof.: Let \(COL(n)=\{n=n_{0},n_{1},n_{2},\cdots,1\}\). If \(m=Col(n)\), then \(COL(m)\) is also a Collatz sequence and converges to 1. On the other hand, \(COL(n)\) is a sub-sequence of \(COL(m)\); if \(COL(m)\) converges to 1, so does \(COL(n)\).

**Lemma 3.2**.: _The convergence relationship between SYR(m) and COL(m):_

1. \(SYR(m)=\{m,1\}\) _converges to 1 and so does_ \(COL(m)\)_;_
2. \(SYR(m)=\{m,m_{1},1\}\) _converges to 1 and so does_ \(COL(m)\)_;_
3. _If_ \(SYR(m)=\{m,m_{1},m_{2},\cdots,m_{s}\}\) _converges to 1, so does_ \(COL(m)\)_._

Proof.: (1) For \(SYR(m)=\{m,1\}\), where \(m\in\textbf{2N+1}\), there exist even integers \(e_{i}\) such that \(e_{1}=3m+1,e_{i+1}=e_{i}/2,i=1,2,\cdots,r-1\), and \(e_{r}=1\). Thus, \(COL(m)=\{m,e_{1},e_{2},\cdots,e_{r},1\}\) is a Collatz sequence and converges to 1. (2) For \(SYR(m)=\{m,m_{1},1\}\), where \(m,m_{1}\in\textbf{2N+1}\), there exist even integers \(e_{i}\) and \(d_{j}\) such that \(e_{1}=3m_{1}+1,e_{i+1}=e_{i}/2,i=1,2,\cdots,r-1\), and \(e_{r}=1\), and \(d_{1}=3m+1,d_{i+1}=d_{i}/2,i=1,2,\cdots,x-1\), and \(d_{x+1}=m_{1}\). Thus, \(COL(m)=\{m,d_{1},d_{2},\cdots,d_{x},m_{1},e_{1},e_{2},\cdots,e_{r},1\}\) is a Collatz sequence and converges to 1. (3) If \(SYR(m)=\{m,m_{1},m_{2},\cdots,m_{s}\}\) converges to 1, the sequence will yield \(m_{s}=1\); similar to the proof of (2), \(COL(m)\) also converges to 1.

**Lemma 3.3**.: \(\{4t+2\}\cup\{8t+4\}\cup\cdots\cup\{2^{r}(2t+1)\}=\{2t\}\) _as \(r\rightarrow\infty\)._

Proof.: \(\{2t\}=\{4t\}\cup\{4t+2\}\) and \(\{4t\}\cap\{4t+2\}=\phi\), thus \(\{4t+2\}=\{2t\}-\{4t\}\). Similarly, \(\{4t+2\}\cup\{8t+4\}=\{8t+2\}\cup\{8t+6\}\cup\{8t+4\}=\{2t\}-\{8t\}\), and \(\{4t+2\}\cup\{8t+4\}\cup\cdots\cup\{2^{r}(2t+1)\}=\{2t\}-\{2^{r+1}t\}\approx\{2t\}\) as \(r\rightarrow\infty\).

**Theorem 3.4**.: \(COL(n)\) _converges to 1 for all \(n\in\textbf{2N+2}\)._

Proof.: Let \(n=2t+1\), i.e., \(n\in\textbf{2N+1}\); \(m_{1}=2(2t+1)=4t+2\) is even, and \(Col(m_{1})=n\).
By Lemma 3.1, if \(COL(n)\) converges to 1, so does \(COL(m_{1})\). Let \(m_{2}=2^{2}(2t+1)=8t+4\), which is even, and \(Col(m_{2})=m_{1}\). By Lemma 3.1 again, \(COL(m_{2})\) also converges to 1. Suppose that \(COL(m_{r})\) converges to 1, where \(m_{r}=2^{r}(2t+1)\); then \(Col(m_{r+1})=m_{r}\), so \(COL(m_{r+1})\) is a Collatz sequence and converges to 1. In other words, \(COL(m)\) converges to 1 for all \(m=2^{r}(2t+1)\) and \(t\in\textbf{N}\). By Lemma 3.3, \(\{4t+2\}\cup\{8t+4\}\cup\cdots\cup\{2^{r}(2t+1)\}=\{2t\}\) as \(r\rightarrow\infty\), i.e., \(COL(m)\) converges to 1 for all even m, i.e., \(m\in\textbf{2N+2}\), since \(m\neq 0\).

**Theorem 3.5**.: \(COL(n)\) _converges to 1 for all \(n\in\textbf{2N+1}\)._

Proof.: By Theorem 2.7, \(SYR(n)\) converges to 1 for all \(n\in\textbf{2N+1}\). By Lemma 3.2 (3), \(COL(n)\) is a Collatz sequence and converges to 1.

**Theorem 3.6**.: _(Collatz Conjecture) \(COL(n)\) converges to 1 for all \(n\in\textbf{N+1}\)._

Proof.: By Theorem 3.5, \(COL(n)\) converges to 1 for all \(n\in\textbf{2N+1}\), and by Theorem 3.4, \(COL(n)\) converges to 1 for all \(n\in\textbf{2N+2}\); this covers \(n\in(\textbf{2N+1})\cup(\textbf{2N+2})=\textbf{N+1}\). The Collatz conjecture is proven.

## 4 Summary and Conclusions

This paper employs a system model, the Component Connection Model (CCM), to describe the Syracuse sequences. The CCM of a dynamical system consists of a set of equations describing the component dynamics and component interconnections [6]. Each component is described by a column of the incoming term matrix \(I_{a}(p,q),a=1,5\). Basically, **2N+1** is partitioned into 4 disjoint sets: \(\{8t+a\},a=1,3,5,7\). Let \(S_{a}(t)=Syr(8t+a)\); by Lemma 2.1, \(\{S_{5}(t)\}=\{S_{1}(t)\}\cup\{S_{3}(t)\}\cup\{S_{7}(t)\}\). This property yields the following contributions for proving both the Collatz and Syracuse conjectures: (1) The terms of any Syracuse sequence are either \(6t+1\) or \(6t+5\), and a term of the form \(6t+3\) serves only as the seed of a sequence; and (2) Two incoming term matrices \(\{I_{a}(p,q)\},a=1,5\), were constructed from \(\{6q+a\}\), resulting in \(\{8t+5\}=\{I_{1}(p,q)\}\cup\{I_{5}(p,q)\},p\in\textbf{N+1}\), and \(\{I_{1}(p,q)\}\cup\{I_{5}(p,q)\}=\textbf{2N+1}\), for \(p,q\in\textbf{N}\); all positive odd integers are included. Based on the incoming term matrices, a component connection tree is constructed and, by Theorem 2.5, all components find a path to reach the component in Level \(\sharp 0\), which connects to a trivial cycle. These contributions show that all components, covering all positive odd integers, can eventually reach the component \(I_{1}(p,0)\) in Level \(\sharp 0\); thus, all Syracuse sequences converge to the number 1 and the Syracuse conjecture is proven to be true. The results for Syracuse sequences are then extended to show the convergence of all Collatz sequences and thereby prove that the Collatz conjecture is also true.

## 5 Appendix

### Table A. \(n=8q+a\) and \(S_{a}(n)\), a=1,3,5,7, \(q=0\sim 15\)

Table A demonstrates that \(\{S_{5}(t)\}=\{S_{1}(t)\}\cup\{S_{3}(t)\}\cup\{S_{7}(t)\}\), where \(S_{a}(t)=Syr(8t+a)\). The values in row \(S_{5}(n)\), in boldface, are the same as those in each column (in boldface): for q=0, \((S_{1}(n),S_{3}(n),S_{5}(n),S_{7}(n))=(1,5,1,11)\); (7,17,5,23) for q=1, (13,29,1,35) for q=2, (19,41,11,47) for q=3, and so on.

| q | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|
| n=8q+1 | 1 | 9 | 17 | 25 | 33 | 41 | 49 | 57 | 65 | 73 | 81 | 89 | 97 | 105 | 113 | 121 |
| \(S_{1}(n)\) | **1** | **7** | **13** | **19** | 25 | 31 | 37 | 43 | 49 | 55 | 61 | 67 | 73 | 79 | 85 | 91 |
| n=8q+3 | 3 | 11 | 19 | 27 | 35 | 43 | 51 | 59 | 67 | 75 | 83 | 91 | 99 | 107 | 115 | 123 |
| \(S_{3}(n)\) | **5** | **17** | **29** | **41** | 53 | 65 | 77 | 89 | 101 | 113 | 125 | 137 | 149 | 161 | 173 | 185 |
| n=8q+5 | 5 | 13 | 21 | 29 | 37 | 45 | 53 | 61 | 69 | 77 | 85 | 93 | 101 | 109 | 117 | 125 |
| \(S_{5}(n)\) | **1** | **5** | **1** | **11** | **7** | **17** | **5** | **23** | **13** | **29** | **1** | **35** | **19** | **41** | **11** | **47** |
| n=8q+7 | 7 | 15 | 23 | 31 | 39 | 47 | 55 | 63 | 71 | 79 | 87 | 95 | 103 | 111 | 119 | 127 |
| \(S_{7}(n)\) | **11** | **23** | **35** | **47** | 59 | 71 | 83 | 95 | 107 | 119 | 131 | 143 | 155 | 167 | 179 | 191 |

Here \(S_{1}(n)=(3n+1)/4\), \(S_{3}(n)=(3n+1)/2\), and \(S_{7}(n)=(3n+1)/2\).

### Table B. Interconnection of Components

Table B shows the interconnection of components: \(m=J_{1b}(x,y)=(I_{1}(x,y)-b)/6\), and the component \(I_{b}(p,m)\) connects to \(I_{1}(p,y)\) at \(p=x\). For example, \(I_{1}(2,1)=149=5\pmod{6}\) and \(m=J_{15}(2,1)=(149-5)/6=24\), i.e., x=2, y=1, b=5, and m=24; the component \(I_{5}(p,24)\) connects to \(I_{1}(p,1)\) at p=2. Similarly, \(I_{5}(0,4)=19=1\pmod{6},m=(19-1)/6=3\), i.e., x=0, y=4, b=1, and m=3; the component \(I_{1}(p,3)\) connects to \(I_{5}(p,4)\) at p=0. Finally, \(I_{5}(1,4)=77=5\pmod{6},m=(77-5)/6=12\), i.e., x=1, y=4, b=5, m=12; the component \(I_{5}(p,12)\) connects to \(I_{5}(p,4)\) at p=1.
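As a quick numerical companion to the appendix (a minimal sketch of ours, not part of the original paper), the following Python snippet implements the Syracuse map \(Syr(n)\) and the Collatz map, regenerates the worked examples quoted above (\(SYR(35)=\{35,53,5,1\}\) and \(COL(35)=\{35,106,53,160,80,40,20,10,5,16,8,4,2,1\}\)), and checks the closed forms \(S_{1}(n)=(3n+1)/4\) and \(S_{3}(n)=S_{7}(n)=(3n+1)/2\) that underlie Table A.

```python
# Minimal sketch (ours, not from the paper): the Syracuse and Collatz maps,
# the sequences SYR(n) and COL(n), and a check of the Table A rows.

def syr(n: int) -> int:
    """Syracuse map: for odd n, divide 3n+1 by 2 until the result is odd."""
    assert n % 2 == 1
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

def SYR(n: int) -> list:
    """Syracuse sequence of an odd seed n, stopping at the first 1."""
    seq = [n]
    while seq[-1] != 1:
        seq.append(syr(seq[-1]))
    return seq

def COL(n: int) -> list:
    """Collatz sequence of a seed n (odd or even), stopping at the first 1."""
    seq = [n]
    while seq[-1] != 1:
        m = seq[-1]
        seq.append(3 * m + 1 if m % 2 else m // 2)
    return seq

assert SYR(35) == [35, 53, 5, 1]
assert COL(35) == [35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1]

# Table A: S_a(q) = Syr(8q + a); the a = 1, 3, 7 rows have simple closed forms.
for q in range(16):
    assert syr(8 * q + 1) == (3 * (8 * q + 1) + 1) // 4   # S_1(n) = (3n+1)/4
    assert syr(8 * q + 3) == (3 * (8 * q + 3) + 1) // 2   # S_3(n) = (3n+1)/2
    assert syr(8 * q + 7) == (3 * (8 * q + 7) + 1) // 2   # S_7(n) = (3n+1)/2
```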
2302.14318
Canonical Purification and the Quantum Extremal Shock
We study the canonical purification (with respect to one of the parties) of pure, bi-partite states obtained by turning on sources in the Euclidean path integral. In holographic conformal field theories, the Lorentzian bulk dual of the canonical purification consists of the corresponding entanglement wedge glued to its CPT image at the quantum extremal surface. However, the mismatch in the classical expansions at the QES due to quantum corrections needs to be supported by a shock in the bulk matter stress tensor in order for the bulk to satisfy Einstein's equations. Working perturbatively to first order in double-trace sources around the thermofield double state, we demonstrate that the state of the bulk matter in the dual to the canonically purified boundary CFT state precisely has this quantum extremal shock in the bulk stress tensor. We interpret our results as the emergence of gravitational physics from the CFT entanglement structure in a context where bulk quantum corrections are important.
Onkar Parrikar, Vivek Singh
2023-02-28T05:20:24Z
http://arxiv.org/abs/2302.14318v1
# Canonical Purification and the Quantum Extremal Shock ###### Abstract We study the canonical purification of pure, bi-partite states (with respect to one of the parties) obtained by turning on sources in the Euclidean path integral. In holographic conformal field theories, the Lorentzian bulk dual of the canonical purification consists of the corresponding entanglement wedge glued to its CPT image at the quantum extremal surface. However, the mismatch in the classical expansions at the QES due to quantum corrections needs to be supported by a shock in the bulk matter stress tensor in order for the bulk to satisfy Einstein's equations. Working perturbatively to first order in double-trace sources around the thermofield double state, we demonstrate that the state of the bulk matter in the dual to the canonically purified boundary CFT state precisely has this quantum extremal shock in the bulk stress tensor. We interpret our results as the emergence of gravitational physics from the CFT entanglement structure in a context where bulk quantum corrections are important. ## 1 Introduction Consider a general, full-rank, bi-partite state \(\Psi\) in the (for the moment, finite dimensional) tensor product Hilbert space \(\mathcal{H}_{L}\otimes\mathcal{H}_{R}\). Any such state can always be written in the form: \[|\Psi\rangle=\sum_{n}\sqrt{p_{n}}\,|\tilde{\chi}_{n}\rangle_{L}\otimes|\chi_{n} \rangle_{R}, \tag{1}\] where \(p_{n}\) are the eigenvalues of the reduced density matrices on \(L\) and \(R\), \(\chi_{n}\) form an orthonormal set of eigenstates of the reduced density matrix on the right, and \(\tilde{\chi}_{n}\) form an orthonormal set of eigenstates of the reduced density matrix on the left. While the eigenvalues are common to both the parties, the eigenstates are not. If we only had access to, say, the left factor, then we could write down a purification for the density matrix as follows: \[|\Psi^{\star}\rangle=\sum_{n}\sqrt{p_{n}}\,|\tilde{\chi}_{n}\rangle_{L}\otimes |\tilde{\chi}_{n}^{\star}\rangle_{L^{\star}}. \tag{2}\] Here \[|\tilde{\chi}_{n}^{\star}\rangle=\Theta|\tilde{\chi}_{n}\rangle, \tag{3}\] where \(\Theta\) is an anti-unitary operator on \(L\).1 This new state is called the _canonical purification_ of \(\Psi\) with respect to the left side [2].2 Note that \(\Psi^{\star}\) resembles the thermofield double state. Physically, if one only had access to the left party in \(\Psi\) and not to the right, then we can think of \(\Psi^{\star}\) as the "simplest" purification that one could build from this information. Since \(\Psi\) and \(\Psi^{\star}\) are two different purifications of the same reduced density matrix on \(L\), these two states are related by a unitary transformation on the right:3 Footnote 1: In quantum field theory, we could take it to be CPT in even dimensions or CRT in odd dimensions, with R being reflection along one spatial direction. [1] Footnote 2: Equivalently, the canonical purification of a density matrix \(\rho\) is defined as the state \(|\sqrt{\rho}\rangle\) viewed as a vector in the Hilbert space \(\mathrm{End}(\mathcal{H})=\mathcal{H}\otimes\mathcal{H}^{\star}\). Footnote 3: Here, we are using the full-rank condition. More generally, the two purifications would be related by an isometry. \[|\Psi^{\star}\rangle:=\mathcal{R}_{\Psi}|\Psi\rangle, \tag{4}\] where \[\mathcal{R}_{\Psi}:\mathcal{H}_{R}\rightarrow\mathcal{H}_{L^{\star}},\;\;\; \mathcal{R}_{\Psi}:=\sum_{n}|\tilde{\chi}^{\star}_{n}\rangle_{L^{\star}} \langle\chi_{n}|_{R}. 
\tag{5}\] The operator \(\mathcal{R}_{\Psi}\) quantifies how "complex" it is to reconstruct the original state \(\Psi\) from \(\Psi^{\star}\). We should emphasize that entanglement or Renyi entropies between the left and the right parties are blind to \(\mathcal{R}_{\Psi}\). It therefore seems interesting to study aspects of this operator \(\mathcal{R}_{\Psi}\), which goes beyond entanglement in quantifying properties of the state \(\Psi\). We will call this operator \(\mathcal{R}_{\Psi}\) which maps \(\Psi\) to its canonical purification with respect to \(L\) the _reflection operator_ with respect to \(L\). When \(\Psi\) is full-rank, the reflection operator is uniquely specified by the condition that it maps \(\Psi\) to its canonical purification. There are several motivations to study the reflection operator in holographic conformal field theories: one important motivation comes from the quantum error correction perspective on the bulk to boundary map in AdS/CFT [3; 4; 5; 6]. It was shown by Harlow [6] that for a bulk degree of freedom (say, a qudit) within the entanglement wedge of a boundary subregion \(A\), the encoding map \(V\) into the dual CFT takes the general form: \[|\psi_{i}\rangle_{\mathrm{CFT}}=V|i\rangle_{\mathrm{bulk}}=U_{A}|i\rangle_{A _{1}}\otimes|\chi\rangle_{A_{2},\bar{A}}, \tag{6}\] where \(\{|i\rangle\}\) forms a basis of states for the bulk qudit, and \(\mathcal{H}_{A}=\mathcal{H}_{A_{1}}\otimes\mathcal{H}_{A_{2}}\oplus\mathcal{ H}_{A_{3}}\), with \(\mathcal{H}_{A_{1}}\) being the same dimension as that of the code subspace. Harlow's structure theorem is a general consequence of the Ryu-Takayanagi formula [7; 8; 9]. Importantly, we can think of the unitary \(U_{A}\) appearing in Harlow's structure theorem as a reflection operator: introduce an auxiliary reference system "ref" which has the same dimension as that of the code subspace, and consider the maximally entangled state: \[|\Psi\rangle=\frac{1}{\sqrt{d_{\mathrm{code}}}}\sum_{i}|i\rangle_{\mathrm{ref} }\otimes|\psi_{i}\rangle_{\mathrm{CFT}}. \tag{7}\] Then, the unitary \(U_{A}\) is precisely the adjoint of the reflection operator for this state \(\Psi\) with respect to \(\text{ref}\cup\bar{A}\). Furthermore, this particular reflection operator gives a simple recipe for bulk reconstruction: we can represent any bulk operator \(\phi\) on the code subspace as a boundary operator on \(A\) via the formula \[O_{A}=U_{A}\phi_{A_{1}}U_{A}^{\dagger}. \tag{8}\] Thus, the reflection operator finds a natural role in formulating the bulk-to-boundary map in AdS/CFT as a quantum error correcting code. A second (perhaps much more direct) motivation, comes from the fact that the reflection operator is closely linked with the canonical purification, which finds several interesting applications in holography. For instance, the canonical purification is crucially used in identifying the area of the outermost extremal surface as the simple entropy [10; 11]. Along similar lines, the reflected entropy [2] for a mixed two-party state \(\rho_{AB}\) is also defined in terms of the canonical purification \(\Psi^{\star}_{ABA^{\star}B^{\star}}\) as the entanglement entropy of \(AA^{\star}\). The reflected entropy is an interesting information theoretic quantity [12; 13; 14; 15; 16; 17], one which finds a natural bulk dual in terms of the cross section area of the entanglement wedge of \(AB\). 
Finally, it was recently argued in [18] that for a black hole evaporating into a non-gravitational bath, the canonical purification of the total state with respect to the black hole side is dual to a connected wormhole, thus realizing the ER=EPR idea in the context of an evaporating black hole (see also [19; 20] for other approaches). While the original state of the radiation plus the evaporating black hole does not appear to have a wormhole in it, the state after the action of the corresponding reflection operator does; in this way, the reflection operator in this case acts to "geometrize" the entanglement in the originally complex and non-geometric state. For holographic theories, it was proposed by Engelhardt and Wall [10] that the classical, Lorentzian bulk geometry dual to the canonical purification with respect to a boundary subregion \(A\) is obtained by taking the entanglement wedge of \(A\) and gluing it to its CPT image at the RT or HRRT surface [7; 8] dual to \(A\) (see figure 1). A replica trick argument for this proposal was later given by Dutta and Faulkner [2] (see also [21]). In gluing together portions of solutions of Einstein equations to obtain new solutions, one must impose junction conditions at the gluing surface in order to ensure that the resulting geometry also satisfies Einstein's equations. In the case at hand, the fact that the co-dimension two surface we are gluing across is a classically extremal surface implies that these junction conditions are trivially satisfied (see section 2 for more details). The resulting geometry contains an entire Cauchy surface, and one can obtain the full solution by evolving the data on this surface with the Einstein equations. Upon including quantum corrections, the gluing must be done across the quantum extremal surface (QES) [22]. However, due to quantum corrections, the QES is not generically classically extremal, and now the junction conditions imply that the bulk matter must be in a state whose stress tensor has a delta function "shock"[23] proportional to the first shape-derivative of the bulk entanglement entropy, in order for Einstein's equations to be satisfied. Our goal in this paper will be to study the reflection operator in a perturbative setup. The main application we have in mind is to verify the above prediction of general relativity for the bulk stress tensor shock in the context of the Engelhardt-Wall construction. We will consider a family of states \(\Psi_{\lambda}\) labelled by some parameter \(\lambda\). We will first derive a differential equation for \({\cal R}_{\lambda}\equiv{\cal R}_{\Psi_{\lambda}}\) along the flow parametrized by \(\lambda\); this equation will involve more familiar quantities such as the modular Hamiltonian and modular flow. In order to be concrete, we will then apply this general equation to the thermofield double (TFD) state perturbed by turning on a source (with a small amplitude) in the Euclidean path integral. In a holographic quantum system, we will then use this to compute the bulk stress tensor one-point function (to first order in the deformation) in the bulk dual to the canonically purified state and show that it has the quantum extremal shock contribution required for the Engelhardt-Wall construction to work. 
While we will explicitly demonstrate the existence of this shock to first order in perturbation theory around the TFD state, we expect that with some mild assumptions, our calculation can be extended beyond perturbation theory (i.e., at finite deformation parameter \(\lambda\)). Since the shock is a prediction of Einstein's equations from the bulk point of view, we are seeing here the emergence of bulk gravitational physics from the CFT entanglement structure [24; 25; 26; 27; 28; 29], but in a context where quantum corrections in the bulk are important (see also [30; 31] for related previous work).

Figure 1: (Left) A portion of the bulk geometry dual to some holographic state \(\Psi\). The entanglement wedge of the left party is shown in blue and the entanglement wedge of the right party is shown in green. (Right) The Engelhardt-Wall proposal for the geometry dual to the canonical purification \(\Psi^{\star}\) with respect to the left party consists of the left entanglement wedge glued to its CPT image at the quantum extremal surface. In situations where the quantum extremal surface is not classically extremal, the geometry needs to be supported by a shock (red dashed lines) in the bulk matter stress tensor.

The rest of the paper is organized as follows: in section 2, we review the Engelhardt-Wall construction of the bulk dual to the canonical purification. In section 3, we study the reflection operator for a general one-parameter family of states. We then apply this to the special case of the TFD state deformed by a source in the Euclidean path integral, and derive an explicit formula for the reflection operator in this context to first order in perturbation theory. In section 4, we apply this formula to holographic quantum systems in order to study the one-point function of the bulk matter stress tensor and demonstrate the existence of the quantum extremal shock. We end in section 5 with some concluding remarks and open directions.

## 2 Review of Engelhardt-Wall construction

In this section, we briefly review the construction of Engelhardt and Wall (EW) for the holographic dual of the canonical purification of a bi-partite state. The EW geometry is a Lorentzian geometry constructed in the following way: let us begin with the original Lorentzian spacetime \(M\) dual to the original state \(\Psi\). Let \(\sigma\) be the quantum extremal surface (QES) corresponding to the left subregion, and let \(D_{\sigma}\) be the corresponding entanglement wedge. The EW proposal for the geometry dual to the canonical purification with respect to the left is to glue \(D_{\sigma}\) to its CPT image at the surface \(\sigma\), then evolve the resulting data on a Cauchy slice using Einstein's equations to obtain the full Lorentzian geometry. However, for this to work, we must impose a set of co-dimension two junction conditions on the geometric data at \(\sigma\) in \(M\). These junction conditions are analogous to, and in fact follow from, the standard, co-dimension one junction conditions which are imposed when gluing two solutions to Einstein's equations across a co-dimension one hypersurface [32; 33]. The basic idea is as follows: let us imagine, for the moment, that we have two different spacetimes \(M\) and \(M^{\prime}\) with some Cauchy slices \(\Sigma\) and \(\Sigma^{\prime}\) respectively. Now, consider co-dimension two surfaces \(\sigma\) and \(\sigma^{\prime}\) in \(M\) and \(M^{\prime}\) respectively, which divide \(\Sigma\) and \(\Sigma^{\prime}\) into two parts.
Let us call one part \(\mathrm{In}_{\Sigma}(\sigma)\) and the other \(\mathrm{Out}_{\Sigma}(\sigma)\) in \(M\), we can write similar divisions of the Cauchy slice in \(M^{\prime}\). This procedure naturally divides each spacetime into four parts, namely, \(I_{W}(\sigma)\equiv D[\mathrm{In}_{\Sigma}(\sigma)]\), \(O_{W}(\sigma)\equiv D[\mathrm{Out}_{\Sigma}(\sigma)]\), \(J^{+}[\sigma]\) and \(J^{-}[\sigma]\), where \(D\) denotes the domain of dependence and \(J^{+(-)}\) denotes the causal future (past). We have a similar division for \(M^{\prime}\) as well. We wish to glue \(I_{W}(\sigma)\) to \(O_{W}(\sigma^{\prime})\) by identifying the two surfaces \(\sigma\) and \(\sigma^{\prime}\). For this to work, the most basic thing we must demand is that the intrinsic geometry on \(\sigma\) and \(\sigma^{\prime}\) should be identical, or more precisely, the induced metrics \(h=h_{ij}dy^{i}dy^{j}\) on the two surfaces should be equivalent (up to a change of coordinates) - this is the first junction condition. Next, let us imagine that there exists a consistent solution to Einstein's equations which contains \(V_{\rm in}\equiv I_{W}(\sigma)\) and \(V_{\rm out}\equiv O_{W}(\sigma^{\prime})\) glued together at \(\sigma=\sigma^{\prime}\). Let us consider the null surface \(\mathcal{N}_{k}\) which separates \(V_{\rm out}\cup J^{-}(\sigma)\) from \(V_{\rm in}\cup J^{+}(\sigma)\). Let \(k\) be the generating vector field tangent to null geodesics (not necessarily affinely parametrized) along \(\mathcal{N}_{k}\); at \(\sigma\), we can take \(k\) to be orthogonal to \(\sigma\). Let \(\ell^{\mu}\) be a transverse null vector field satisfying \(\ell.k=-1\) everywhere on \(\mathcal{N}_{k}\). We can take \(\ell\) such that at \(\sigma\) it is orthogonal to \(\sigma\) and agrees with the generating vector field of the null surface \(\mathcal{N}_{\ell}\) separating \(V_{\rm out}\cup J^{+}(\sigma)\) and \(V_{\rm in}\cup J^{-}(\sigma)\). The idea is to now apply the co-dimension one Barrabes-Israel junction conditions [32; 33] individually to \(\mathcal{N}_{k}\) and \(\mathcal{N}_{\ell}\). For instance, the junction condition across \(\mathcal{N}_{k}\) gives the following expression for the matter stress tensor localized to this null sheet: \[8\pi G_{N}\;T^{(k)}_{\mu\nu}=-\left(\left[\theta_{(\ell)}\right]k_{\mu}k_{\nu }+\left[\chi_{(\ell)}{}_{(\mu}\right]k_{\nu)}+\left[\kappa_{(\ell)}\right]h_{ \mu\nu}\right)\delta(\mathcal{N}_{k}), \tag{1}\] where \(\theta_{(\ell)}\) is the expansion of the null-geodesic congruence generated by the vector field \(\ell\), \(\chi_{(\ell)}{}_{\mu}\) is called its twist, and \(\kappa_{(\ell)}\) measures the in-affinity of the geodesic congruence generated by \(k\): \[\theta_{(\ell)} = h^{ij}h^{\mu}_{i}h^{\nu}_{j}\nabla_{\mu}\ell_{\nu}, \tag{2}\] \[\chi_{(\ell)\,\mu} = \frac{1}{2}h^{\mu}_{i}k^{\nu}\nabla_{\mu}\ell_{\nu},\] (3) \[\kappa_{(\ell)} = -k^{\nu}k^{\mu}\nabla_{\mu}\ell_{\nu}=\ell_{\nu}k^{\mu}\nabla_{ \mu}k^{\nu}. \tag{4}\] Finally, the notation \([\cdot]\) stands for difference across \(\mathcal{N}_{k}\). We can write a similar equation for \(\mathcal{N}_{\ell}\) as well. We are interested in evaluating these constraints at \(\sigma\). Since the in-affinity at a point along a geodesic (in the present case, corresponding to where it intersects \(\sigma\)) can be adjusted by an arbitrary rescaling, we can set the discontinuity in the in-affinity to zero at \(\sigma\) by a suitable choice of parametrization. 
Furthermore, for our specific case where we wish to glue an entanglement wedge to its CPT image, the twist term also drops out, since the twist is even under CPT. On the other hand, the expansion is odd under CPT, and so we get \[8\pi G_{N}T^{(k)}_{\mu\nu}=-2\theta_{(\ell)}k_{\mu}k_{\nu}\,\delta(\mathcal{N} _{k})\qquad(\text{at }\sigma). \tag{5}\] For classically extremal surfaces, the expansion vanishes and the gluing does not require any singular matter stress tensor. However, for a quantum extremal surface, the expansion is not zero, but given by the quantum extremality formula [22]: \[\theta_{(\ell)}=-\frac{4G_{N}}{\sqrt{h}}\ell^{\mu}\frac{\delta S_{\rm bulk}}{ \delta x^{\mu}}. \tag{6}\] Thus, general relativity makes a prediction for the singular part of the matter stress tensor at the quantum extremal surface in the Lorentzian geometry dual to the canonical purification of a holographic state: \[2\pi T_{\mu\nu}^{(k)}=\frac{2}{\sqrt{h}}\ell^{\mu}\frac{\delta S_{\rm bulk}}{ \delta x^{\mu}}\,k_{\mu}k_{\nu}\delta({\cal N}_{k}), \tag{7}\] with a similar prediction for the stress tensor localized to \({\cal N}_{\ell}\). In principle, we need to compute the state of bulk matter fields in the bulk dual to the canonically purified state, evaluate the corresponding bulk stress tensor, and check whether it satisfies the above prediction. Our goal is to do this in the perturbative framework. ## 3 Perturbation theory for the reflection operator Consider a bi-partite Hilbert space \({\cal H}_{L}\otimes{\cal H}_{R}\), where \({\cal H}_{L}\) and \({\cal H}_{R}\) are both finite dimensional Hilbert spaces of the same dimension. Let us say that we have a general one-parameter family of states \(\Psi_{\lambda}\in{\cal H}_{L}\otimes{\cal H}_{R}\) which are all full rank. At any value of \(\lambda\), we can construct the reduced density matrices \(\rho_{L}(\lambda)\) and \(\rho_{R}(\lambda)\) corresponding to the left and right factors respectively. Accordingly, we have the one-parameter family of modular Hamiltonians \(K_{L}(\lambda)\) and \(K_{R}(\lambda)\), where the modular Hamiltonian for a density matrix \(\rho\) is defined as \(K=-\log\,\rho\). At any given value of \(\lambda\), we have a Schmidt decomposition for the state \(\Psi_{\lambda}\): \[|\Psi_{\lambda}\rangle=\sum_{n}e^{-\frac{1}{2}E_{n}(\lambda)}|\tilde{\chi}_{n }(\lambda)\rangle_{L}\otimes|\chi_{n}(\lambda)\rangle_{R}. \tag{8}\] In terms of the modular Hamiltonians, the \(\chi_{n}\) and \(\tilde{\chi}_{n}\) satisfy \[K_{R}(\lambda)|\chi_{n}(\lambda)\rangle_{R}=E_{n}(\lambda)|\chi_{n}(\lambda) \rangle_{R}, \tag{9}\] \[K_{L}(\lambda)|\tilde{\chi}_{n}(\lambda)\rangle_{L}=E_{n}(\lambda)|\tilde{ \chi}_{n}(\lambda)\rangle_{L}, \tag{10}\] where note that the eigenvalues are common to both sides. In terms of these quantities, recall that the reflection operator \({\cal R}_{\lambda}\) is defined as: \[{\cal R}_{\lambda}=\sum_{n}|\tilde{\chi}^{\star}\rangle_{L^{\star}}\langle \chi_{n}|_{R}, \tag{11}\] where \(|\tilde{\chi}^{\star}\rangle_{L^{\star}}=\Theta|\tilde{\chi}\rangle_{L^{ \star}}\), and \(\Theta\) is an anti-unitary operator which we will take to be CPT. Our first goal is to derive a differential equation for \({\cal R}_{\lambda}\) along the flow parametrized by \(\lambda\). 
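Before deriving the flow equation, it may help to see the objects above realized concretely. The following sketch (ours; it is not part of the paper, works in a small finite-dimensional Hilbert space, and takes the anti-unitary \(\Theta\) to be complex conjugation in the computational basis) builds a generic full-rank bi-partite state, constructs the canonical purification as the vectorized \(\sqrt{\rho_{L}}\) and the reflection operator from the Schmidt data, and checks that \(\mathcal{R}_{\Psi}|\Psi\rangle=|\Psi^{\star}\rangle\) while \(\rho_{L}\) is left unchanged.

```python
# Minimal finite-dimensional sketch (ours, not from the paper): canonical
# purification and the reflection operator, with Theta = complex conjugation.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
d = 4
# amplitudes M[i, j] of |Psi> = sum_{ij} M[i, j] |i>_L |j>_R, generic and full rank
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M /= np.linalg.norm(M)

rho_L = M @ M.conj().T                   # reduced density matrix on L
U, s, Vh = np.linalg.svd(M)              # Schmidt decomposition, s_n = sqrt(p_n)

# canonical purification: |Psi*> is sqrt(rho_L) viewed as a vector in H_L x H_L*
psi_star = sqrtm(rho_L)

# reflection operator R = sum_n |chi~*_n><chi_n|; in matrix form this is conj(U Vh)
R = (U @ Vh).conj()

# acting with (1 x R) on |Psi> turns the amplitude matrix M into M @ R^T
assert np.allclose(M @ R.T, psi_star)                     # R |Psi> = |Psi*>
assert np.allclose(psi_star @ psi_star.conj().T, rho_L)   # rho_L is unchanged
assert np.allclose(R.conj().T @ R, np.eye(d))             # R is unitary
```

In this finite-dimensional representation the reflection operator is simply the conjugated "unitary part" of the amplitude matrix, which makes it manifest that it carries information beyond the Schmidt coefficients themselves.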
### Flow equation

Upon an infinitesimal deformation of the parameter \(\lambda\), the change in the eigenstates of, say, \(K_{R}\), is given by \[\frac{d}{d\lambda}|\chi_{n}\rangle_{R}=\sum_{m\neq n}\frac{\langle\chi_{m}|\frac{d}{d\lambda}K_{R}|\chi_{n}\rangle_{R}}{(E_{n}(\lambda)-E_{m}(\lambda))}|\chi_{m}\rangle_{R}. \tag{10}\] Here we have assumed that the eigenvalues are non-degenerate. We can rewrite this in the following way: \[\frac{d}{d\lambda}|\chi_{n}\rangle_{R} = \sum_{m\neq n}\int_{0}^{\infty}idt\,e^{-\epsilon t}\left(\langle\chi_{m}|e^{itK_{R}(\lambda)}\frac{d}{d\lambda}K_{R}e^{-itK_{R}(\lambda)}|\chi_{n}\rangle_{R}\right)|\chi_{m}\rangle_{R} \tag{11}\] \[= \int_{0}^{\infty}idt\,e^{-\epsilon t}e^{itK_{R}(\lambda)}\frac{d}{d\lambda}K_{R}e^{-itK_{R}(\lambda)}|\chi_{n}\rangle_{R}-\frac{i}{\epsilon}\frac{d}{d\lambda}E_{n}(\lambda)\,|\chi_{n}\rangle_{R}.\] Here we have introduced a regulator \(\epsilon\to 0^{+}\), which plays two roles: firstly, it regulates the \(t\)-integral at large \(t\) in the first line. Secondly, it allows us to add and subtract the \(m=n\) term in the sum, which together with \[\sum_{m}|\chi_{m}\rangle\langle\chi_{m}|_{R}=\mathbb{1}_{R},\] allows us to rewrite the expression as in the second line. Note that the \(\frac{1}{\epsilon}\) divergence is not really present, since it cancels the corresponding divergence from the first term; we have merely chosen to write the expression in this way for convenience. A similar formula is also true for the modular eigenstates of the left party, and so we get the following flow equations for the eigenstates: \[\frac{d}{d\lambda}|\chi_{n}\rangle_{R}=i\mathcal{A}_{R}|\chi_{n}\rangle_{R},\quad\frac{d}{d\lambda}|\tilde{\chi}_{n}\rangle_{L}=i\mathcal{A}_{L}|\tilde{\chi}_{n}\rangle_{L}, \tag{12}\] where \[\mathcal{A}_{R}(\lambda)=a_{R}(\lambda)+\int_{0}^{\infty}dt\,e^{-\epsilon t}\,e^{itK_{R}^{(\lambda)}}\frac{d}{d\lambda}K_{R}^{(\lambda)}e^{-itK_{R}^{(\lambda)}}, \tag{13}\] \[\mathcal{A}_{L}(\lambda)=a_{L}(\lambda)+\int_{0}^{\infty}dt\,e^{-\epsilon t}\,e^{itK_{L}^{(\lambda)}}\frac{d}{d\lambda}K_{L}^{(\lambda)}e^{-itK_{L}^{(\lambda)}}. \tag{14}\] Here \(a_{L}\) and \(a_{R}\) are the diagonal terms proportional to \(\frac{1}{\epsilon}\). There is an important subtlety we need to address at this point: orthonormality does not fix the overall phase of an eigenstate of the modular Hamiltonian, i.e., we have the freedom \(\chi_{n}\to e^{i\phi_{n}}\chi_{n}\). So, as far as the eigenstates of the modular Hamiltonian are concerned, the diagonal terms in the above flow equation are ambiguous. Some of this ambiguity is fixed by the fact that we want \(\chi_{n}(\lambda)\) and \(\tilde{\chi}_{n}(\lambda)\) to be a Schmidt basis for the family of states \(\Psi(\lambda)\). In particular, the sum of the left and the right phases \((\phi_{n}+\tilde{\phi}_{n})\) is fixed, but the relative phase \((\phi_{n}-\tilde{\phi}_{n})\) is not; this is good enough for our purposes, because the reflection operator is unambiguous once the Schmidt condition is imposed. Crucially, these ambiguities all correspond to diagonal terms in the modular eigenstate basis, and for what we are interested in, we will not need to worry about fixing them. We will simply gather all these diagonal terms inside \(a_{L}\) and \(a_{R}\) henceforth.4 Footnote 4: More precisely, \((a_{L}+a_{R})\) can be fixed by the Schmidt condition. But as we will see later, \(a_{L}\) and \(a_{R}\) will drop out of the calculations we are interested in.
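The starting point above is just first-order perturbation theory for the eigenvectors of \(K_{R}(\lambda)\), and it is easy to verify numerically in finite dimensions. In the sketch below (ours; a generic Hermitian matrix and perturbation stand in for the modular data), the perturbative prediction is packaged into the change of the spectral projector \(|\chi_{n}\rangle\langle\chi_{n}|\), which sidesteps the phase ambiguity discussed above, and is compared against a finite difference.

```python
# Minimal sketch (ours, not from the paper): check the first-order formula for
# d|chi_n>/dlambda by comparing spectral projectors, which avoids the arbitrary
# eigenvector phases e^{i phi_n} mentioned in the text.
import numpy as np

rng = np.random.default_rng(2)
d, n, eps = 6, 2, 1e-6

def random_hermitian(dim):
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (A + A.conj().T) / 2

K0 = random_hermitian(d)          # stand-in for K_R(0)
dK = random_hermitian(d)          # stand-in for dK_R/dlambda
E, chi = np.linalg.eigh(K0)       # generically non-degenerate spectrum

# first-order change of |chi_n>: sum_{m != n} <chi_m|dK|chi_n>/(E_n - E_m) |chi_m>
dchi_n = sum(chi[:, m] * (chi[:, m].conj() @ dK @ chi[:, n]) / (E[n] - E[m])
             for m in range(d) if m != n)

# change of the projector |chi_n><chi_n| predicted at first order ...
P0 = np.outer(chi[:, n], chi[:, n].conj())
dP_pert = np.outer(dchi_n, chi[:, n].conj()) + np.outer(chi[:, n], dchi_n.conj())

# ... versus a finite difference of the exact projector at lambda = eps
_, chi_eps = np.linalg.eigh(K0 + eps * dK)
P_eps = np.outer(chi_eps[:, n], chi_eps[:, n].conj())

assert np.allclose((P_eps - P0) / eps, dP_pert, atol=1e-3)
```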
Coming back to the reflection operator, the change in \(\mathcal{R}_{\lambda}\) in these terms is given by \[i\frac{d}{d\lambda}\mathcal{R}_{\lambda}=i\sum_{n}\Big{(}\Theta\frac{d}{d \lambda}|\tilde{\chi}_{n}\rangle_{L^{*}}\langle\chi_{n}|_{R}+\Theta|\tilde{ \chi}_{n}\rangle_{L^{*}}\frac{d}{d\lambda}\langle\chi_{n}|_{R}\Big{)}= \mathcal{A}_{L}^{*}(\lambda)\,\mathcal{R}_{\lambda}+\mathcal{R}_{\lambda}\, \mathcal{A}_{R}(\lambda), \tag{3.10}\] where we have defined \[\mathcal{A}_{L}^{*}(\lambda)=\Theta\,\mathcal{A}_{L}(\lambda)\,\Theta^{-1}. \tag{3.11}\] While we have focused on the special case with only one parameter \(\lambda\), the formulas above apply naturally to the more general case where the parameter space is an \(n\)-dimensional manifold \(\mathcal{M}\) parametrized locally by coordinates \(\lambda^{i}\). In this case, \(\mathcal{A}_{R}\) and \(\mathcal{A}_{L^{*}}\) become one-forms on this parameter space. It is natural to interpret them as connection one-forms for a \(\boldsymbol{U}(\dim\mathcal{H}_{L})\times\boldsymbol{U}(\dim\mathcal{H}_{R})\) bundle over the base space \(\mathcal{M}\), where \(\boldsymbol{U}(D)\) is the unitary group. To see this more explicitly, imagine that we consider a modified state \(\Psi^{\prime}=U\Psi\), where \(U\) is a one-sided unitary transformation acting on \(R\), but we can let \(U\) depend on the parameters \(\lambda^{i}\). Then, it follows from a short calculation (using the defining equation (3.8)) that the connections transform as \[\mathcal{A}_{L}^{\prime}=\mathcal{A}_{L}, \tag{3.12}\] \[\mathcal{A}_{R}^{\prime}=U\,\mathcal{A}_{R}\,U^{-1}-idU\,U^{-1}, \tag{3.13}\] which is precisely the transformation property of a connection 1-form. The same formula is also true for the transformation of \(\mathcal{A}_{L}\) under a one-sided unitary acting on \(L\). Thus, \(\mathcal{A}_{R}\) and \(\mathcal{A}_{L}\) are connection 1-forms under the action of local, one-sided unitary transformations, and we can think of equation (3.7) as defining transport with respect to these connections. We will refer to these connections as _modular Berry connections_. The curvature for these connections must only lie along the diagonal \(U(1)^{\dim\mathcal{H}}\) subgroups in the non-degenerate case. However, the curvature is much more interesting to study in the degenerate case, where one encounters further ambiguities in how to transport eigenstates within degenerate subspaces; this is a non-Abelian generalization of the phase ambiguities we encountered previously (see [34; 35; 36] for some related work on modular Berry connections). Coming back to the case with one parameter \(\lambda\), the general solution to the differential equation (10) takes the form:5 Footnote 5: The flow equation satisfied – for instance, by \(U_{R}\) – is a regulated version of the flow equation satisfied by the Connes cocycle \(u_{s}=e^{isK_{R}^{(\lambda)}}e^{-isK_{R}^{(0)}}\), in the large \(s\) limit; see [37; 38; 39; 40] for some recent discussions of the Connes cocycle. 
\[\mathcal{R}_{\lambda}=U_{L}^{\star}(\lambda)\cdot\mathcal{R}_{0}\cdot U_{R}^{ \dagger}(\lambda), \tag{14}\] where \[i\frac{dU_{L}^{\star}}{d\lambda}=\mathcal{A}_{L}^{\star}U_{L}^{\star},\ \ \ -i\frac{dU_{R}}{d\lambda}=\mathcal{A}_{R}U_{R},\ \ \ U_{L}^{\star}(0)=\mathbb{1}_{L^{\star}},\ \ U_{R}(0)=\mathbb{1}_{R}, \tag{15}\] The formal solutions to these equations are given by \[U_{L}^{\star}=\mathcal{P}\exp\left\{-i\int_{0}^{\lambda}d\lambda^{\prime} \mathcal{A}_{L}^{\star}(\lambda^{\prime})\right\},\ \ \ U_{R}=\mathcal{P}\exp\left\{i\int_{0}^{\lambda}d\lambda^{\prime} \mathcal{A}_{R}(\lambda^{\prime})\right\}, \tag{16}\] where \(\mathcal{P}\) stands for path-ordering. The matrices \(U_{R}\) and \(U_{L}\) supply a notion of parallel transport. This, in principle, allows us to completely solve for the reflection operator \(\mathcal{R}_{\lambda}\) in terms of the modular Hamiltonians of the left and right subregions for the one-parameter family of states \(\Psi_{\lambda}\).6 Footnote 6: Note that the reflection operator only depends on \(a_{L}\) and \(a_{R}\) through the combination \((a_{L}+a_{R})\). We also need to impose the Schmidt condition to fix this phase ambiguity, as discussed previously. With this, the reflection operator is completely determined, but this phase ambiguity will not be important for us. ### Expanding around the TFD state So far, we have derived a general differential equation satisfied by the operator \(\mathcal{R}_{\lambda}\) for a one-parameter family of states \(\Psi_{\lambda}\). Now we wish to apply this to a more concrete setting. Let us consider the TFD state \[|\Psi_{0}\rangle=\frac{1}{\sqrt{Z}}\sum_{n}e^{-\frac{\beta}{2}E_{n}(0)}|\chi_ {n}(0)\rangle_{L}\otimes|\chi_{n}^{\star}(0)\rangle_{R}, \tag{17}\] where \(E_{n}(0)\) and \(\chi_{n}(0)\) are the eigenstates of some local Hamiltonian \(H\), and the right Hilbert space \(\mathcal{H}_{R}\) can be identified with \(\mathcal{H}_{L^{\star}}\). The TFD state can also be thought of as a Euclidean path integral over a Euclidean time segment of length \(\beta/2\). The TFD state is itself the canonical purification of the thermal ensemble, and so the reflection operator in the present case is essentially the identity operator. We wish to consider a one-parameter deformation of the TFD state. A natural such family of states can be constructed by turning on a source \(\tilde{J}(\tau)\) for some operator \(\mathcal{O}(\tau)\) in the Euclidean path integral [41]. Concretely, we change the action inside the Euclidean path integral in the following way:7 Footnote 7: We are only displaying the time coordinate here and in what follows, but in principle the sources can also depend on spatial directions. \[S_{\rm new}=S_{\rm old}+\lambda\int_{-\pi}^{0}d\tau\tilde{J}(\tau)\mathcal{O}( \tau), \tag{3.18}\] where we have defined \(\tau=\frac{2\pi}{\beta}\hat{\tau}\) and \(\hat{\tau}\) is the Euclidean time coordinate with period \(\beta\). This new path integral now constructs a new bi-partite state which we will call \(\Psi_{\lambda}\). We wish to construct the reflection operator \(\mathcal{R}_{\lambda}\) for this family of states to first order in \(\lambda\). In order to do this, we first need to compute the change in the modular Hamiltonians of the \(L\) and \(R\) subsystems to first order in perturbation theory. 
This has been computed previously in several works, see for instance [42; 43; 44; 45]: \[\frac{dK_{R}}{d\lambda}=\int_{0}^{2\pi}d\tau\,J_{R}(\tau)\int_{-\infty}^{ \infty}\frac{ds}{4\sinh^{2}(\frac{s+i\tau}{2})}e^{\frac{is}{2\pi}K_{R}(0)} \mathcal{O}(0)e^{-\frac{is}{2\pi}K_{R}(0)}. \tag{3.19}\] Here \(K_{R}(0)=\beta H\) is the original, undeformed modular Hamiltonian for \(\Psi_{0}\), which is simply \(\beta\) times the Hamiltonian \(H\) corresponding to the TFD state. The source \(J_{R}(\tau)\) is a time-reflection symmetric version of \(\tilde{J}(\tau)\): \[J_{R}(t)=\begin{cases}\tilde{J}(\tau)&-\pi<\tau<0\\ \tilde{J}^{*}(-\tau)&0<\tau<\pi.\end{cases} \tag{3.20}\] Note that the operator on the right hand side of equation (3.19) is a fully Lorentzian operator; all the Euclidean time dependence is now in the \(\sinh^{-2}(\frac{s+i\tau}{2})\) kernel. A similar formula can also be written for the left subsystem. The only difference is that the corresponding source \(J_{L}\) is related to \(J_{R}\) by a left-right reflection, i.e., \(J_{L}(\tau)=J_{R}(\pi-\tau)\). Let us briefly recap where equation (3.19) comes from. In the finite dimensional case, one proceeds as follows:8 Footnote 8: We will temporarily drop the subscripts \(L\) and \(R\), since this derivation applies to both and the subscript is not so relevant. \[\frac{dK}{d\lambda}\Big{|}_{\lambda=0} = -\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\log(\rho_{0}+ \epsilon\frac{d\rho}{d\lambda})-\log\rho_{0}\right) \tag{3.21}\] \[= -\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\log[\rho_{0}(1+ \epsilon\rho_{0}^{-1}\frac{d\rho}{d\lambda})]-\log\rho_{0}\right)\] \[= -\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\log[e^{-K(0)}e^{ \epsilon\rho_{0}^{-1}\frac{d\rho}{d\lambda}}]-\log\rho_{0}\right).\] Here, we have only assumed that \(\rho_{0}\) is invertible. Using the Baker-Campbell-Hausdorff formula in the first term, we get \[\frac{dK}{d\lambda}\Big{|}_{\lambda=0}=-\sum_{n=0}^{\infty}(-1)^{n}\frac{B_{n}}{n! }\left[K(0),\cdots\left[K(0),\rho_{0}^{-1}\frac{d\rho}{d\lambda}\right]\cdots \right]. \tag{3.22}\] Now, using the integral formula \[B_{n}=\int_{-\infty+i\epsilon}^{\infty+i\epsilon}ds\frac{\left(\frac{-is}{2 \pi}\right)^{n}}{4\sinh^{2}(s/2)}, \tag{3.23}\] we can re-sum the BCH expansion to obtain \[\frac{dK}{d\lambda}\Big{|}_{\lambda=0}=-\int_{-\infty+i\epsilon}^{\infty+i \epsilon}\frac{ds}{4\sinh^{2}(s/2)}e^{\frac{is}{2\pi}K(0)}\rho_{0}^{-1}\frac{ d\rho}{d\lambda}e^{-\frac{is}{2\pi}K(0)}. \tag{3.24}\] For Euclidean path-integral states, a path-integral argument [46] shows that9 Footnote 9: More precisely, the operator which appears in this equation is \(:\mathcal{O}:=\mathcal{O}-\langle\mathcal{O}\rangle_{0}\), but for simplicity, we can assume that the one point function of \(\mathcal{O}\) vanishes. \[\rho_{0}^{-1}\frac{d\rho}{d\lambda}=-\int_{0}^{2\pi}d\tau J(\tau)\mathcal{O}( \tau), \tag{3.25}\] so we obtain \[\frac{dK}{d\lambda}\Big{|}_{\lambda=0}=\int d\tau J(\tau)\int_{-\infty+i \epsilon}^{\infty+i\epsilon}\frac{ds}{4\sinh^{2}(s/2)}e^{\frac{is}{2\pi}K(0)} \mathcal{O}(\tau)e^{-\frac{is}{2\pi}K(0)}. \tag{3.26}\] In the finite-dimensional case, this expression is good enough, but we would like to obtain a formula which is well-defined in the infinite dimensional or continuum quantum field theory limit as well. In the latter case, the above expression becomes problematic, since the operator \(\mathcal{O}(\tau)\) is a Euclidean operator and does not admit a bounded continuum limit for all \(\tau\). 
In order to avoid this problem, we first deform the \(s\)-contour integral10 (before taking the continuum limit), to write the above expression as Footnote 10: The integrand is analytic in the \(0<\text{Im}(s)<2\pi\) strip of the complex \(s\)-plane. Furthermore, in the finite dimensional setting, the vertical contours at \(s=\pm\infty\) can be dropped because \(\sinh^{-2}(s/2)\) decays exponentially. \[\frac{dK}{d\lambda}\Big{|}_{\lambda=0}=\int d\tau J(\tau)\int_{-\infty}^{ \infty}\frac{ds}{4\sinh^{2}(\frac{s+i\tau}{2})}e^{\frac{is}{2\pi}K(0)} \mathcal{O}(0)e^{-\frac{is}{2\pi}K(0)}. \tag{3.27}\] Now we have a completely Lorentzian operator at hand, and at this stage we can take the continuum limit to obtain a well-defined continuum operator. With the first order change of the modular Hamiltonian in hand, we can now obtain the first order change in \(U_{R}\): \[-i\frac{dU_{R}}{d\lambda}(0)=\mathcal{A}_{R}(0), \tag{3.28}\] where \[\mathcal{A}_{R}(0)=a_{R}(0)+\int_{0}^{\infty}dt\,e^{-\epsilon t}\int d\tau J_{R}( \tau)\int_{-\infty}^{\infty}\frac{ds}{4\sinh^{2}(\frac{s+i\tau}{2})}e^{\frac{i(s +2\pi t)}{2\pi}K_{R}(0)}\mathcal{O}(0)e^{-\frac{i(s+2\pi t)}{2\pi}K_{R}(0)}. \tag{3.29}\] Shifting \(s\) by \(2\pi t\) allows us to perform the \(t\) integral: \[\int_{0}^{\infty}idt\,\frac{e^{-\epsilon t}}{4\sinh^{2}(\frac{s-2\pi t+i\tau} {2})}=\frac{1}{2\pi i}\frac{1}{\left(1-e^{-(s+i\tau)}\right)}+\frac{\epsilon} {\pi^{2}}e^{-\frac{\epsilon}{2\pi}(s+i\tau)}B_{e^{s+i\tau}}(1+\frac{\epsilon} {2\pi},0) \tag{3.30}\] where \[B_{z}(a,b)=\int_{0}^{z}dt\,t^{a-1}(1-t)^{b-1},\] is the incomplete Beta function. In the \(\epsilon\to 0\) limit, the second term drops out, as long as the source \(J_{R}(\tau)\) is supported away from \(\tau=0\). Thus, we get \[\mathcal{A}_{R}(0)=a_{R}(0)-\frac{1}{2\pi}\int d\tau J_{R}(\tau)\int_{-\infty} ^{\infty}ds\frac{1}{\left(1-e^{-(s+i\tau)}\right)}e^{\frac{is}{2\pi}K_{R}(0)} \mathcal{O}(0)e^{-\frac{is}{2\pi}K_{R}(0)}. \tag{3.31}\] Similarly, \[\mathcal{A}_{L}(0)=a_{L}(0)-\frac{1}{2\pi}\int d\tau J_{L}(\tau)\int_{-\infty} ^{\infty}ds\frac{1}{\left(1-e^{-(s+i\tau)}\right)}e^{\frac{is}{2\pi}K_{L}(0)} \mathcal{O}(0)e^{-\frac{is}{2\pi}K_{L}(0)}. \tag{3.32}\] Equations (3.31) and (3.32) are our main formulas for the modular Berry connections evaluated on the TFD state. In the next section, we will use these to derive the quantum extremal shock in the Engelhardt-Wall geometry. As another application of these formulas, it is not hard to show that in holographic conformal field theories, these expressions for the modular Berry connections can be put in a manifestly geometric form in the bulk. Indeed, when \(\mathcal{O}\) is taken to be a single-trace operator, we find that \[\Pi_{\text{code}}\mathcal{A}_{R}(0)\Pi_{\text{code}}=\Pi_{\text{code}}a_{R}(0) \Pi_{\text{code}}+\int_{\Sigma_{R}}\mathbf{\omega}(\delta_{\lambda}\phi,\mathbf{\phi}), \tag{3.33}\] \[\Pi_{\text{code}}\mathcal{A}_{L}(0)\Pi_{\text{code}}=\Pi_{\text{code}}a_{L}(0) \Pi_{\text{code}}+\int_{\Sigma_{L}}\mathbf{\omega}(\delta_{\lambda}\phi,\mathbf{\phi}). 
\tag{3.34}\] Here, \(\Pi_{\text{code}}\) is the projector onto states where we can think of the bulk in terms of quantum fields on a fixed background geometry, \(\mathbf{\phi}\) is the bulk operator valued field dual to \(\mathcal{O}\), \(\delta_{\lambda}\phi\) is the linearized change in the bulk field configuration under the boundary deformation \(J_{R}\), and \(\mathbf{\omega}\) is the symplectic current for the bulk fields:11 Footnote 11: In the case where \(\mathcal{O}\) is the stress tensor, one uses the gravitational symplectic form which appears naturally in the covariant phase space method [47]. The region of integration for the gravitational symplectic flux turns out to be the entanglement wedge of the boundary subregion in the deformed geometry. \[\mathbf{\omega}(\delta_{1}\phi,\delta_{2}\phi)=(\delta_{1}\phi\,n^{\mu}\partial_{ \mu}\delta_{2}\phi-\delta_{2}\phi\,n^{\mu}\partial_{\mu}\delta_{1}\phi). \tag{3.35}\] The derivation of equations (3.33) and (3.34) more or less follows the same logic as in [27], so we will not repeat it here. These equations give a natural generalization of [48] to the case of subregions (see also [49] for a different approach). It is intriguing that the above expressions can be written as a sum of two terms, where the first term comes from the "diagonal" part of the connection, while the second term is related to the symplectic flux of bulk quantum fields in the relevant entanglement wedge; it would be interesting to understand the first term better. One thing to note is that if the source \(J_{R}\) is tuned in order to create a localized excitation at some point in the bulk, then the geometric term in \(\mathcal{A}_{R}\) is also localized at that point. Thus, the deeper in the bulk the excitation created by the source, the more "complex" is the corresponding unitary \(U_{R}\). ## 4 Quantum extremal shock In this section, we wish to study the state of bulk matter in the holographic dual corresponding to the canonical purification \(\Psi^{\star}_{\lambda}\). To be concrete, we will work to first order in perturbation theory near the TFD state. ### Double-trace deformation We wish to turn on an operator \(\mathcal{O}\) in the Euclidean path-integral which sources the bulk stress tensor at \(O(\lambda)\). The reason is that in order to see the quantum extremal shock at \(O(\lambda)\) in the canonically purified state, we need to have a non-trivial shape derivative for the bulk entanglement entropy at \(O(\lambda)\) in the original state. But to linear order in \(\lambda\), we have \[\frac{1}{\sqrt{h(y^{i})}}\frac{d}{d\lambda}\frac{\delta S_{\rm bulk }}{\delta x^{+}}\Big{|}_{\lambda=0,y^{i}} = -2\pi\int_{0}^{\infty}dx^{+}\frac{d}{d\lambda}\langle T^{\rm bulk }_{++}(x^{+},x^{-}=0,y^{i})\rangle_{\Psi_{\lambda}}\Big{|}_{\lambda=0}, \tag{4.1}\] \[= 2\pi\int_{-\infty}^{0}dx^{+}\frac{d}{d\lambda}\langle T^{\rm bulk }_{++}(x^{+},x^{-}=0,y^{i})\rangle_{\Psi_{\lambda}}\Big{|}_{\lambda=0},\] with a similar equation for the shape derivative along \(x^{-}\). 
Here \((x^{+},x^{-})\) are light-cone coordinates on which Schwarzschild boosts act simply as \((x^{+},x^{-})\to(x^{+}e^{s},x^{-}e^{-s})\), \(y^{i}\) are transverse bulk coordinates which parametrize the original extremal surface (i.e., the bifurcation point), \(h\) is the determinant of the induced metric on the original extremal surface, and the shape derivative at the point \(y^{i}\) is defined as \[\frac{\delta S_{\rm bulk}}{\delta x^{+}}\Big{|}_{\lambda=0,y^{i}}=\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left[S_{\rm bulk}[x^{+}=\epsilon\delta(y^{i}),x^{-}=0]-S_{\rm bulk}[x^{+}=0,x^{-}=0]\right], \tag{4.2}\] where the arguments of the entropies on the right hand side above are the coordinate locations of the corresponding bulk entanglement cuts. We can derive equation (4.1) as follows: consider the bulk relative entropy for the region corresponding to the entanglement wedge \(r\) of the boundary subregion \(R\): \[S_{\rm bulk}(\rho_{r}(\lambda)||\rho_{r}(0))=\Delta\langle K_{\rm bulk,r}(0)\rangle-\Delta S_{\rm bulk}, \tag{4.3}\] where the \(\Delta\) symbol stands for subtraction with respect to the background TFD state: \[\Delta\langle K_{\rm bulk,r}(0)\rangle=\langle K_{\rm bulk,r}(0)\rangle_{\Psi_{\lambda}}-\langle K_{\rm bulk,r}(0)\rangle_{\Psi_{0}}, \tag{4.4}\] \[\Delta S_{\rm bulk}=S_{\rm bulk}(\Psi_{\lambda})-S_{\rm bulk}(\Psi_{0}). \tag{4.5}\] Since the first derivative of the relative entropy at \(\lambda=0\) vanishes, we conclude that \[\frac{d}{d\lambda}S_{\rm bulk}\Big{|}_{\lambda=0}=\frac{d}{d\lambda}\langle K_{\rm bulk,r}(0)\rangle_{\Psi_{\lambda}}\Big{|}_{\lambda=0}. \tag{4.6}\] Taking a derivative of this equation with respect to the shape of the bulk entanglement cut and using [42; 50] \[\frac{\delta K_{\rm bulk,r}(0)}{\delta x^{+}}\Big{|}_{y^{i}}=-2\pi\sqrt{h(y^{i})}\int_{0}^{\infty}dx^{+}T_{++}^{\rm bulk}(x^{+},x^{-}=0,y^{i}), \tag{4.7}\] we land on the first equality in equation (4.1), while applying the same arguments to the entanglement wedge \(\ell\) of \(L\) and using \[\frac{\delta K_{\rm bulk,\ell}(0)}{\delta x^{+}}\Big{|}_{y^{i}}=2\pi\sqrt{h(y^{i})}\int_{-\infty}^{0}dx^{+}T_{++}^{\rm bulk}(x^{+},x^{-}=0,y^{i}), \tag{4.8}\] gives the second equality. Importantly, equation (4.1) implies that for us to see the shock in the bulk dual to the canonical purification at \(O(\lambda)\), we need to turn on a deformation which will source the bulk stress tensor at \(O(\lambda)\) in the original state. For this reason, we cannot take \(\mathcal{O}\) to be a single-trace operator, as single-trace operators only source the bulk stress tensor at \(O(\lambda^{2})\). Instead, we can imagine turning on a double-trace operator \(\mathcal{O}=:\phi\phi:\), for some single-trace operator \(\phi\); although the details of what \(\mathcal{O}\) we choose will not be relevant in the discussion below. Now, the quantum extremal surface in the geometry dual to \(\Psi_{\lambda}\) will deviate from the classical extremal surface at \(O(\lambda G_{N})\). Following the Engelhardt-Wall construction reviewed in section 2, the bulk spacetime dual to the canonical purification \(\Psi_{\lambda}^{\star}\) consists of the entanglement wedge \(\text{EW}(L)\) (in the original geometry dual to \(\Psi_{\lambda}\)) glued to its CPT image at the QES. In order for the junction conditions to be satisfied, the bulk matter stress tensor must have a singular contribution at the location of the QES.
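As a side remark, the step from (4.3) to (4.6) is the usual first law of entanglement (the first derivative of relative entropy vanishes at \(\lambda=0\)), and it can be checked directly in finite dimensions. The sketch below (ours, with a generic full-rank density matrix standing in for the bulk state) compares a finite-difference derivative of the entanglement entropy with \(\mathrm{Tr}[\rho^{\prime}(0)K(0)]\), where \(K(0)=-\log\rho(0)\).

```python
# Minimal sketch (ours, not from the paper): the "first law" step dS/dlambda|_0 =
# d<K(0)>/dlambda|_0 for a family rho(lambda) = rho0 + lambda * drho.
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(3)
d, eps = 5, 1e-6

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho0 = A @ A.conj().T + np.eye(d)
rho0 /= np.trace(rho0).real                    # full-rank, well-conditioned state

B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
drho = (B + B.conj().T) / 2
drho -= (np.trace(drho).real / d) * np.eye(d)  # traceless Hermitian perturbation

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    return float(-(p * np.log(p)).sum())

K0 = -logm(rho0)                               # modular Hamiltonian K(0) = -log rho(0)

dS_fd = (entropy(rho0 + eps * drho) - entropy(rho0 - eps * drho)) / (2 * eps)
dK_fo = np.trace(drho @ K0).real               # first-order change of <K(0)>

assert np.isclose(dS_fd, dK_fo, atol=1e-6)
```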
Importantly, even though the QES deviates from the classical extremal surface at \(O(\lambda G_{N})\), the singular contribution in the bulk stress tensor in the bulk dual to the canonically purified state must appear at \(O(\lambda)\). It is this contribution that we are after. ### Bulk one point function In order to proceed, we wish to compute the bulk stress tensor in the canonically-purified state. We can be general, and compute the one-point function of a more general operator \(\Phi\) acting on the \(L^{\star}\) factor: \[\langle\Phi\rangle_{\Psi^{\star}_{\lambda}}=\langle\Psi_{\lambda}|{\cal R}^{ \dagger}_{\lambda}\,\Phi\,{\cal R}_{\lambda}|\Psi_{\lambda}\rangle, \tag{4.9}\] at first order in \(\lambda\). Later, we will take \(\Phi\) to be the bulk matter stress tensor \(T^{\rm bulk}_{\mu\nu}(x_{B})\), where we will take \(x_{B}\) to lie in the entanglement wedge of \(L^{\star}\) in the geometry dual to \(\Psi^{\star}_{\lambda}\). In particular, we are interested in the \(T^{\rm bulk}_{\pm\pm}\) components of the stress tensor, and we wish to take the limit where the bulk point approaches the quantum extremal surface. Let us take a moment to discuss what this means. The backreaction from turning on a double-trace operator is of \(O(\lambda G_{N})\). If we ignore this effect for now, the classical bulk spacetime dual to the canonically purified state is the undeformed, eternal black hole spacetime, where we simply re-label the right subsystem as \(L^{\star}\). However, the state of bulk matter fields receives corrections at \(O(\lambda)\), and this is what we wish to probe via the bulk operator \(\Phi\); in particular, we want to take \(\Phi=T^{\rm bulk}_{\pm\pm}\) and take the limit where this operator approaches the original extremal surface (i.e., the bifurcation point) in the eternal black hole. With this preamble, we now wish to compute the first \(\lambda\) derivative of the above one-point function. Taking a \(\lambda\)-derivative of equation (4.9), we get: \[\frac{d}{d\lambda}\langle\Phi\rangle_{\lambda,\star}=\langle\Psi_{\lambda}| \frac{d{\cal R}^{\dagger}_{\lambda}}{d\lambda}\,\Phi\,{\cal R}_{\lambda}|\Psi_ {\lambda}\rangle+\langle\Psi_{\lambda}|{\cal R}^{\dagger}_{\lambda}\Phi\, \frac{d{\cal R}_{\lambda}}{d\lambda}|\Psi_{\lambda}\rangle+\langle\frac{d\Psi _{\lambda}}{d\lambda}|\widehat{\Phi}|\Psi_{\lambda}\rangle+\langle\Psi_{ \lambda}|\widehat{\Phi}|\frac{d\Psi_{\lambda}}{d\lambda}\rangle, \tag{4.10}\] where in the last two terms we have defined the operator \(\widehat{\Phi}\equiv{\cal R}^{\dagger}_{\lambda}\,\Phi\,{\cal R}_{\lambda}\). Using the flow equation for \({\cal R}_{\lambda}\), we can rewrite this as \[\frac{d}{d\lambda}\langle\Phi\rangle_{\lambda,\star}=i\langle\Psi_{\lambda}| \left[{\cal A}_{R},\widehat{\Phi}\right]|\Psi_{\lambda}\rangle-i\langle\Psi^{ \star}_{\lambda}|\left[{\cal A}^{\star}_{L^{\star}},\Phi\right]|\Psi^{\star}_ {\lambda}\rangle+\langle\frac{d\Psi_{\lambda}}{d\lambda}|\widehat{\Phi}|\Psi_ {\lambda}\rangle+\langle\Psi_{\lambda}|\widehat{\Phi}|\frac{d\Psi_{\lambda}}{d \lambda}\rangle. \tag{4.11}\] Note that at \(\lambda=0\), \(\widehat{\Phi}=\Phi\), and so henceforth we will drop the hats. Further, the last two terms can simply be written as \[\left(\langle\frac{d\Psi_{\lambda}}{d\lambda}|\widehat{\Phi}|\Psi_{\lambda} \rangle+\langle\Psi_{\lambda}|\widehat{\Phi}|\frac{d\Psi_{\lambda}}{d\lambda} \rangle\right)_{\lambda=0}=\frac{d}{d\lambda}\langle\Phi\rangle_{\Psi_{\lambda }}\Big{|}_{\lambda=0}. 
\tag{4.12}\] Let us now focus on the first term involving the commutator with \({\cal A}_{R}\); the same logic will also apply to the second term. We proceed by assuming that \(\Phi(x_{B})\) is an operator acting strictly on the \({\cal H}_{R}\) factor (i.e., \(x_{B}\) is well within the entanglement wedge of \(R\)). As explained above, we will eventually take \(\Phi=T^{\rm bulk}_{\pm\pm}\) and take the limit where the bulk point approaches the bifurcation point. To be precise, when the operator acts "at the bifurcation point", we cannot take it to be supported in \({\cal H}_{R}\) alone. For instance, after a little bit of smearing to make this bulk operator well-defined, we will in general find that it acts on both sides of the bifurcation surface. A simple smearing is to instead consider the operators \[\Phi_{\rm smear}=\lim_{\delta\to 0}\int_{-\delta}^{\delta}dx^{\pm}\,T_{\pm\pm}^{ \rm bulk}. \tag{4.13}\] Indeed, later we will encounter the need for such a smearing, but for now we proceed with the above simplifying assumption. If we work at \(\lambda=0\), and use equation (3.31): \[{\cal A}_{R}(0)=a_{R}(0)-\frac{1}{2\pi}\int d\tau J_{R}(\tau)\int_{-\infty}^{ \infty}ds\frac{1}{\left(1-e^{-(s+i\tau)}\right)}e^{\frac{is}{2\pi}K_{R}(0)}{ \cal O}(0)e^{-\frac{is}{2\pi}K_{R}(0)}. \tag{4.14}\] then we get \[\left\langle\Psi_{0}\right|\left[{\cal A}_{R}(0),\Phi(x_{B}) \right]\left|\Psi_{0}\right\rangle = \mbox{Tr}_{R}\left(\rho_{R}^{(0)}\left[{\cal A}_{R}(0),\Phi(x_{B} )\right]\right)\] \[= \frac{1}{2\pi i}\int d\tau J_{R}(\tau)\int_{-\infty}^{\infty} \frac{ds}{\left(1-e^{-(s+i\tau)}\right)}\,\mbox{Tr}_{R}\left(\rho_{R}^{(0)} \left[{\cal O}(s),\Phi\right]\right)\] \[= \frac{1}{2\pi i}\int d\tau J_{R}(\tau)\int_{-\infty-i\epsilon}^{ \infty-i\epsilon}\frac{ds}{\left(1-e^{-(s+i\tau)}\right)}\,\mbox{Tr}_{R} \left(\rho_{R}^{(0)}{\cal O}(s)\Phi\right)\] \[- \frac{1}{2\pi i}\int d\tau J_{R}(\tau)\int_{-\infty-i(2\pi- \epsilon)}^{\infty-i(2\pi-\epsilon)}\frac{ds}{\left(1-e^{-(s+i\tau)}\right)} \,\mbox{Tr}_{R}\left(\rho_{R}^{(0)}{\cal O}(s)\Phi\right).\] In the second line, we have used the fact that \(a_{R}(0)\) commutes with \(\rho_{R}^{(0)}\) to drop that term. In the third line, we have introduced a new regulator \(\epsilon\to 0^{+}\) to separate the two operators infinitesimally in Euclidean time, and further used the KMS condition to bring the two operators in the same order. So, we conclude that \[\left\langle\Psi_{0}\right|\left[{\cal A}_{R},\Phi(x_{B})\right]\left|\Psi_{0 }\right\rangle=\frac{1}{2\pi i}\int d\tau\,J_{R}(\tau)\int_{\Gamma}\frac{ds}{ \left(1-e^{-(s+i\tau)}\right)}\,\mbox{Tr}_{R}\left(\rho_{R}^{(0)}{\cal O}(s) \Phi\right), \tag{4.16}\] Figure 2: The strip \(-2\pi\leq\mbox{Im}(s)\leq 0\) in the complex-\(s\) plane. The contour \(\Gamma\) is shown in the bold blue. The dashed blue lines indicate the vertical contours at infinity. The red lines indicate potential branch cuts which may develop in the correlation function in the infinite dimensional limit, while the black dot indicates the pole coming from \(\frac{1}{1-e^{-(s+i\tau)}}\). where the contour \(\Gamma=(\mathbb{R}-i\epsilon)\cup(\mathbb{R}-i(2\pi-\epsilon))\) is the union of the two horizontal contours at \(\text{Im}(s)=-\epsilon\) and \(\text{Im}(s)=-(2\pi-\epsilon)\). 
Using Cauchy's theorem, we can then rewrite this integral as the sum over three contributions: the pole at \(s=-i\tau\), and the two "vertical" contours at \(\mathbb{Re}(s)=\pm\Lambda\) (with \(\Lambda\to\infty\)): \[\langle\Psi_{0}|\left[\mathcal{A}_{R}(0),\Phi(x_{B})\right]|\Psi_{0}\rangle=- \int d\tau\,J_{R}(\tau)\text{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O}(\tau)\Phi \right)+\mathcal{I}_{+}^{R}+\mathcal{I}_{-}^{R}, \tag{4.17}\] where \[\mathcal{I}_{\pm}^{R}=\pm\frac{1}{2\pi}\int d\tau\,J_{R}(\tau)\int_{\epsilon}^ {2\pi-\epsilon}\frac{d\theta}{\left(1-e^{-(\pm\Lambda+i\tau)}e^{i\theta} \right)}\,\text{Tr}_{R}\left(\rho_{R}^{(0)}e^{i(\pm\Lambda-i\theta)K_{R}(0)} \mathcal{O}e^{-i(\pm\Lambda-i\theta)K_{R}(0)}\Phi\right). \tag{4.18}\] The vertical contour contributions are seemingly suppressed exponentially in \(\Lambda\) from the large relative boost between the two operators, and so it is tempting to discard them. This is correct in most cases, but not all; we will return to this point below, where we will find that the shocks we are looking for actually come from these terms. For now, let us focus on the contribution of the pole: \[\langle\Psi_{0}|\left[\mathcal{A}_{R}(0),\Phi(x_{B})\right]|\Psi_ {0}\rangle\Big{|}_{\text{pole}} = -\int d\tau\,J_{R}(\tau)\text{Tr}_{R}\left(\rho_{R}^{(0)} \mathcal{O}(\tau)\Phi\right) \tag{4.19}\] \[= -\int d\tau\,J_{R}(\tau)\left\langle\mathcal{O}(\tau)\Phi\right\rangle _{\Psi_{0}}\] \[= -\frac{d}{d\lambda}\langle\Phi\rangle_{\Psi_{\lambda}}\Big{|}_{ \lambda=0}.\] This term simply cancels the last term in equation (4.11). This is expected: this term measures how the entanglement wedge of \(R\) would change in the geometry dual to \(\Psi_{\lambda}\), but in the canonical purification, the entanglement wedge of \(R\) is replaced by a CPT reflected image of the entanglement wedge of \(L\). Thus, the above cancellation ensures that all information about the entanglement wedge of \(R\) is removed. We must now show that, in fact, the entanglement wedge of \(R\) is replaced with a CPT image of the entanglement wedge of \(L\). This comes from the pole contribution in the second term of (4.11): \[\langle\Psi_{0}^{\star}|\left[\mathcal{A}_{L^{\star}}^{\star}(0),\Phi(x_{B})\right]|\Psi_{0}^{\star}\rangle\Big{|}_{\text{pole}} = \int d\tau\,J_{L}(\tau)\text{Tr}_{L^{\star}}\left(\rho_{L^{\star} }^{(0)}\Theta\,\mathcal{O}(\tau)\,\Theta^{-1}\,\Phi\right) \tag{4.20}\] \[= \int d\tau\,J_{L}(\tau)\text{Tr}_{L}\left(\rho_{L}^{(0)}\mathcal{ O}(\tau)\,\Theta^{-1}\,\Phi\,\Theta\right)\] \[= \frac{d}{d\lambda}\langle\Phi^{\star}\rangle_{\lambda=0},\] where \(\Phi^{\star}=\Theta^{-1}\Phi\Theta\) is the CPT conjugate of the operator, but now inserted in the entanglement wedge of \(L\). This is in precise agreement with our expectation for what the canonical purification should do. ### Vertical contours at infinity So far, we have reproduced the standard, expected properties of the bulk dual to the canonical purification. Now we turn to the non-trivial part, which is to reproduce the quantum extremal shock in the bulk. For this, we need to choose a specific bulk operator, i.e., we need to take \(\Phi=T^{\rm bulk}_{\pm\pm}\). Moreover, we need to take the limit where this bulk operator approaches the extremal surface in the original background geometry. 
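As a sanity check on the contour bookkeeping, the decomposition in equation (4.17) can be verified numerically at fixed \(\tau\) on a toy integrand (a sketch of ours, with a rapidly decaying stand-in for the correlator; the interesting case treated below is precisely when this decay fails because the bulk operator is not boosted away):

```
import numpy as np

# Toy check of Eq. (4.17): for an integrand decaying as Re(s) -> +/- infinity, the two
# horizontal contours of Gamma reproduce (minus) the residue at s = -i*tau, while the
# vertical contours at Re(s) = +/- Lambda drop out.
tau = np.pi                        # pole of 1/(1 - e^{-(s + i tau)}) sits at s = -i*tau
eps = 0.3                          # offsets of the two horizontal contours
Lam, n = 30.0, 300_001             # truncation and resolution of the contours
x = np.linspace(-Lam, Lam, n)
dx = x[1] - x[0]

f = lambda s: np.exp(-s**2 / 50)   # decaying stand-in for the correlator (bounded in the strip)
h = lambda s: f(s) / (1.0 - np.exp(-(s + 1j * tau)))

top = np.sum(h(x - 1j * eps)) * dx                      # contour at Im(s) = -eps
bot = np.sum(h(x - 1j * (2 * np.pi - eps))) * dx        # contour at Im(s) = -(2*pi - eps)

print((top - bot) / (2j * np.pi))  # ~ -1.218
print(-f(-1j * tau))               # -exp(pi^2/50) ~ -1.218: the pole term, entering with a minus sign
```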
To be concrete, let us take \(\Phi=T^{\rm bulk}_{++}\) and consider the vertical contour integral \(\mathcal{I}^{R}_{\pm}\): \[\mathcal{I}^{R}_{\pm}=\pm\frac{1}{2\pi}\int d\tau\,J_{R}(\tau)\int_{\epsilon}^ {2\pi-\epsilon}\frac{d\theta}{\left(1-e^{-(\pm\Lambda+i\tau)}e^{i\theta} \right)}\operatorname{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O}e^{i(\mp\Lambda+i \theta)K_{R}(0)}T^{\rm bulk}_{++}e^{-i(\mp\Lambda+i\theta)K_{R}(0)}\right) \tag{4.21}\] This is the same as (4.18), but we have now put the boost on the bulk operator. As \(\Lambda\to\infty\), the relative boost between the two operators goes off to infinity, and so we expect the correlator to decay exponentially. Thus, in the \(\Lambda\to\infty\) limit, this contour integral vanishes. The exception to this occurs when the bulk operator approaches the extremal surface. To see this, let us use light-cone coordinates \((x^{+},x^{-})\) in the plane transverse to the black hole extremal surface. At \(\lambda=0\), boundary modular flow acts locally on bulk operators as a Schwarzschild boost: \[e^{isK_{R}(0)}T^{\rm bulk}_{\mu\nu}(x^{+},x^{-},y^{i})e^{-isK_{R}(0)}=J^{ \alpha}{}_{\mu}(s)J^{\beta}{}_{\nu}(s)T^{\rm bulk}_{\alpha\beta}(x^{+}e^{s},x^ {-}e^{-s},y^{i}), \tag{4.22}\] where the \(J^{\alpha}{}_{\beta}\) represents the action of the boost on the indices of the stress tensor. In more detail, \[e^{isK_{R}(0)}T^{\rm bulk}_{\pm\pm}(x^{+},x^{-},y^{i})e^{-isK_{R}(0)}=e^{\pm 2 s}T^{\rm bulk}_{\pm\pm}(x^{+}e^{s},x^{-}e^{-s},y^{i}), \tag{4.23}\] \[e^{isK_{R}(0)}T^{\rm bulk}_{\pm i}(x^{+},x^{-},y^{i})e^{-isK_{R}(0)}=e^{\pm s} T^{\rm bulk}_{\pm i}(x^{+}e^{s},x^{-}e^{-s},y^{i}), \tag{4.24}\] \[e^{isK_{R}(0)}T^{\rm bulk}_{ij}(x^{+},x^{-},y^{i})e^{-isK_{R}(0)}=T^{\rm bulk }_{ij}(x^{+}e^{s},x^{-}e^{-s},y^{i}). \tag{4.25}\] Consider first \(\mathcal{I}^{R}_{-}\): \[\mathcal{I}^{R}_{-} = \frac{1}{2\pi}\int d\tau\,J_{R}(\tau)\int_{\epsilon}^{2\pi- \epsilon}\frac{d\theta e^{2\Lambda}}{\left(1-e^{(\Lambda-i\tau)}e^{i\theta} \right)}\operatorname{Tr}_{R}\left(\rho_{R}^{(0)}\mathcal{O}e^{-\theta K_{R}( 0)}T^{\rm bulk}_{++}(x^{+}e^{\Lambda},x^{-}e^{-\Lambda})e^{\theta K_{R}(0)} \right), \tag{4.26}\] \[\simeq -\frac{1}{2\pi}\int d\tau\,J_{R}(\tau)\int_{\epsilon}^{2\pi- \epsilon}d\theta e^{\Lambda+i(\tau-\theta)}\operatorname{Tr}_{R}\left(\rho_{ R}^{(0)}\mathcal{O}e^{-\theta K_{R}(0)}T^{\rm bulk}_{++}(x^{+}e^{\Lambda},x^{-}e^{- \Lambda})e^{\theta K_{R}(0)}\right).\] If \(x^{+}\neq 0\), then the above correlation function will decay exponentially in \(\Lambda\) as previously mentioned, and is thus zero in the \(\Lambda\to\infty\) limit because the bulk operator is getting boosted off to infinity. However, when \(x^{+}=0\), the operator does not get boosted away, and we instead get a divergence from the \(e^{\Lambda}\) factor in equation (4.26).12 We can see this quite explicitly in the BTZ black hole, for instance. In Kruskal coordinates, the bulk metric is given by \[ds^{2}=-\frac{4dx^{+}dx^{-}}{(1+x^{+}x^{-})^{2}}+\frac{4\pi^{2}}{\beta^{2}}\frac{( 1-x^{+}x^{-})^{2}}{(1+x^{+}x^{-})^{2}}d\phi^{2}, \tag{4.27}\] where \(\phi\) is a periodic coordinate along the bifurcation surface. 
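The statement underlying the transformation rules above, namely that the Schwarzschild boost \((x^{+},x^{-})\to(x^{+}e^{s},x^{-}e^{-s})\) is an isometry of the metric (4.27), can also be verified symbolically (an illustrative check of ours):

```
import sympy as sp

# Check that the boost (x+, x-) -> (x+ e^s, x- e^{-s}) is an isometry of the BTZ Kruskal
# metric of Eq. (4.27): the pullback of g under the boost equals g.
xp, xm, s, beta = sp.symbols('x_p x_m s beta', real=True)

def metric(u, v):
    g_pm = -2 / (1 + u * v) ** 2                                   # off-diagonal piece of -4 dx+ dx-/(1+x+x-)^2
    g_ff = 4 * sp.pi**2 / beta**2 * (1 - u * v) ** 2 / (1 + u * v) ** 2
    return sp.Matrix([[0, g_pm, 0], [g_pm, 0, 0], [0, 0, g_ff]])

J = sp.diag(sp.exp(s), sp.exp(-s), 1)                              # Jacobian of the boost (phi untouched)
pullback = J.T * metric(xp * sp.exp(s), xm * sp.exp(-s)) * J
assert (pullback - metric(xp, xm)).applyfunc(sp.simplify) == sp.zeros(3, 3)
print("boost is an isometry of Eq. (4.27)")
```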
The bulk to boundary propagator is given by [52; 53] \[K(x^{+},x^{-},\phi)=\sum_{n}\frac{(1+x^{+}x^{-})^{2\Delta}}{\left\{(1-x^{+}x^ {-})[\cosh(\frac{2\pi}{\beta}(\phi-\phi_{0}+2\pi n))-1]+(x^{+}-e^{-i\theta_{0} })(x^{-}-e^{i\theta_{0}})\right\}^{2\Delta}}, \tag{4.28}\] where \((\phi_{0},\tau_{0})\) label the coordinates on the boundary torus with \(\tau_{0}\) being the Euclidean time direction and \(\phi_{0}\) being the spatial direction. The bulk stress tensor in the presence of the boundary double-trace operator is given by \[\langle T^{\rm bulk}_{++}{\cal O}\rangle\sim\sum_{n}\partial_{+}K_{n}\partial _{+}K_{n}, \tag{4.29}\] where \(K_{n}\) is the \(n\)th term in the summation in equation (4.28). For fixed \(n\) and \(x^{+}\neq 0\), the bulk stress tensor goes as \(e^{-(4\Delta+2)s}\) in the large \(s\) limit. However, when \(x^{+}=0\), there is no suppression as the operator does not get boosted away and \({\cal I}^{R}_{-}\) diverges, because of the factor of \(e^{\Lambda}\) out front in equation (4.26); this suggests a delta-function contribution at \(x^{+}=0\). To check this, we really need to smear the operator in the \(x^{+}\) direction in an infinitesimally small window of \(x^{+}\in[0,\delta]\) :13 Footnote 13: We can think of this as the part of \(\Phi_{\rm smear}\) (see equation (4.13)) which contributes to \([{\cal A}_{R},\Phi]\). \[\int_{0}^{\delta}dx^{+}\,{\cal I}^{R}_{-} = -\frac{1}{2\pi}\int d\tau\,J_{R}(\tau)\int_{\epsilon}^{2\pi- \epsilon}d\theta\int_{0}^{\delta e^{\Lambda}}d\tilde{x}^{+}e^{i(\tau-\theta)} \operatorname{Tr}_{R}\left(\rho_{R}^{(0)}{\cal O}e^{-\theta K_{R}(0)}T^{\rm bulk }_{++}(\tilde{x}^{+},x^{-}e^{-\Lambda})e^{\theta K_{R}(0)}\right) \tag{4.30}\] \[\simeq -\frac{1}{2\pi}\int d\tau\,J_{R}(\tau)\int_{\epsilon}^{2\pi- \epsilon}d\theta\int_{0}^{\infty}d\tilde{x}^{+}e^{i(\tau-\theta)} \operatorname{Tr}_{R}\left(\rho_{R}^{(0)}{\cal O}e^{-\theta K_{R}(0)}T^{\rm bulk }_{++}(\tilde{x}^{+},0)e^{\theta K_{R}(0)}\right),\] where in the first line, we have defined a new coordinate \(\tilde{x}^{+}=x^{+}e^{\Lambda}\), and in the second line we have sent \(\Lambda\to\infty\). By deforming the \(\tilde{x}^{+}\) contour in the complex plane, we can remove all the \(\theta\) dependence from the correlator, and replace it with \(\tau\). Performing the \(\theta\) integral then gives \[\int_{0}^{\delta}dx^{+}\,{\cal I}^{R}_{-}=-\int d\tau\,J_{R}(\tau)\int_{0}^{ \infty}dx^{+}\operatorname{Tr}_{R}\left(\rho_{R}^{(0)}{\cal O}(\tau)T^{\rm bulk }_{++}(x^{+},0,y^{i})\right)=\frac{1}{2\pi}\frac{d}{d\lambda}\frac{\delta S_{ \rm bulk}}{\delta x^{+}}\Big{|}_{\lambda=0,y^{i}}, \tag{4.31}\] where in the last equality we used equation (4.1). Thus, the vertical contour precisely gives us the delta function contribution we had expected. Note that \({\cal I}^{R}_{+}\) does not give a delta function contribution because the enhancement factor of \(e^{2\Lambda}\) is now replaced with a suppression factor of \(e^{-2\Lambda}\). Similarly, we can evaluate the vertical contour contributions coming from the term involving \({\cal A}^{\star}_{L^{\star}}\). 
In this case, the contour at \(s=+\Lambda\) contributes: \[{\cal I}^{L}_{+}=\frac{1}{2\pi}\int d\tau\,J_{L}(\tau)\int_{\epsilon}^{2\pi- \epsilon}\frac{d\theta}{\left(1-e^{-(\Lambda+i\tau)}e^{i\theta}\right)}\,{ \rm Tr}_{L}\left(\rho_{L}^{(0)}{\cal O}e^{i(-\Lambda+i\theta)K_{L}(0)}(\Theta^ {-1}T^{\rm bulk}_{++}\Theta)e^{-i(-\Lambda+i\theta)K_{L}(0)}\right), \tag{100}\] where \[\Theta^{-1}T^{\rm bulk}_{++}(x^{+},x^{-},y^{i})\Theta=T^{\rm bulk}_{++}(-x^{+},-x^{-},y^{i}). \tag{101}\] The left-sided boost acts on this operator as: \[e^{-i\Lambda K_{L}(0)}T^{\rm bulk}_{++}(-x^{+},-x^{-})e^{i\Lambda K_{L}(0)}=e ^{2\Lambda}T^{\rm bulk}_{++}(-x^{+}e^{\Lambda},-x^{-}e^{-\Lambda}). \tag{102}\] In the large \(\Lambda\) limit, we can expand: \[\frac{1}{\left(1-e^{-(\Lambda+i\tau)}e^{i\theta}\right)}=1+e^{-(\Lambda+i\tau )}e^{i\theta}+\cdots. \tag{103}\] The first term leads to a \(e^{2\Lambda}\) divergence, but the \(\theta\) integration kills this term, as can be seen by smearing in the infinitesimal interval \(x^{+}\in(-\delta,0)\). The first non-trivial contribution comes from the second term, which gives (following the same steps as before): \[\int_{-\delta}^{0}dx^{+}{\cal I}^{L}_{+}=-\int d\tau\int J_{L}(\tau)\int_{- \infty}^{0}dx^{+}{\rm Tr}_{L}\left(\rho_{L}^{(0)}O(\tau)T^{\rm bulk}_{++}(x^{+ },0,y^{i})\right)=-\frac{1}{2\pi}\frac{d}{d\lambda}\frac{\delta S_{\rm bulk}} {\delta x^{+}}\Big{|}_{\lambda=0,y^{i}}. \tag{104}\] The extra minus sign above cancels with the minus sign in front of the \({\cal A}_{L^{\star}}\) term, and thus we get the same vertical contribution from here as we had from the \({\cal A}_{R}\) term, resulting in an overall factor of 2. Thus, we learn that the bulk stress tensor has the following shock contribution in the canonically purified state: \[2\pi\frac{d}{d\lambda}\langle T^{\rm bulk}_{++}(x^{+},x^{-},y^{i})\rangle_{ \Psi^{\star}_{\lambda}}\Big{|}_{\lambda=0}=2\delta(x^{+})\frac{d}{d\lambda} \frac{\delta S_{\rm bulk}}{\delta x^{+}}\Big{|}_{\lambda=0,y^{i}}+\cdots, \tag{105}\] where the \(\cdots\) indicate the other non-singular parts. This is precisely the shock required to support the Engelhardt Wall geometry.14 Thus, the boundary entanglement structure in the canonically purified state gives rise to a state of the matter fields in the bulk which precisely supports the Engelhardt-Wall geometry, in a way consistent with the bulk Einstein's equations. Footnote 14: Our calculation is valid in the limit \(x^{+}\to 0\) with \(x^{-}\) fixed. However, we see that the dependence on \(x^{-}\) is trivial in the end. This is a simple consequence of the conservation of the shock stress tensor, \(\partial_{-}T^{\rm shock}_{++}=0\). Discussion To summarize, we have studied the canonical purification of Euclidean path integral states to first order in sources. In holographic conformal field theories, we have demonstrated that the state of the bulk matter in the bulk dual to the canonically purified state is precisely such that it gives rise to a shock in the bulk stress tensor which is required to support the Engelhardt-Wall geometry. We can view our result in two different ways. Firstly, let us assume that the bulk geometry dual to the canonically purified boundary CFT state must satisfy the semiclassical Einstein's equations, order by order in the state perturbation parameter \(\lambda\). In this case, the bulk must satisfy the junction conditions, equation (5). 
Together with our result for the bulk shock, we conclude that the co-dimension two surface across which the gluing happens must satisfy \[\frac{1}{4G_{N}}\theta_{\pm}+\frac{\delta S_{\rm bulk}}{\delta x^{\pm}}=0, \tag{109}\] at \(O(\lambda)\), i.e., at first order in the state deformation. This is indeed the quantum extremal surface formula. On the other hand, we could assume that the gluing surface in the bulk must satisfy the quantum extremal surface formula (109), without assuming that the bulk geometry satisfies the gravitational junction conditions. In this case, combining our result for the bulk stress tensor shock together with the QES formula, we would deduce the co-dimension-two junction conditions in general relativity, equation (5), at first order in perturbation theory. From this point of view, the bulk gravitational equations (in this case, the junction conditions) are a consequence of the boundary entanglement structure satisfying the quantum extremal surface formula. This is in the same spirit as the results in [9; 27; 29], but generalized now to a context where quantum corrections in the bulk are important. The quantum extremal surface formula is deeply tied-in with the structure of the bulk-to-boundary map being a quantum error correcting code, and so one might hope that this viewpoint sheds some light on the emergence of gravity from quantum error correction. It would be nice to generalize our results beyond first order in perturbation theory. One approach to do this could be to work to leading order in perturbation theory around a more general background state/geometry. We expect that with some mild assumptions on the nature of modular flow, such as approximate locality in a neighbourhood of the entanglement cut, we should be able to extend our result to this more general scenario. Secondly, the existence of the shock in the bulk stress tensor is deeply tied with the emergence of bulk spacetime and a correspondent quantum field theory subregion algebra in the bulk. Indeed, the calculation we presented is consistent with the expectation that the bulk state dual to the boundary canonical purification is the bulk canonical purification. From this point of view, the bulk canonical purification destroys the delicate entanglement structure at the bifurcation surface, resulting in a "firewall". This is in line with the results in [37], where it was shown that one-sided purifications in quantum field theory can result in such shocks. In more formal terms, this is associated with the emergence of an effective type III von Neumann algebra in the bulk from the type I algebra of the boundary CFT in the large N limit [54; 55; 56]. It has been recently argued in [55; 56] that including \(1/N\) corrections, and in particular, incorporating one quantum gravitational mode (corresponding to relative time fluctuations between the two boundaries, or equivalently, one-sided mass fluctuations) changes the nature of the bulk algebra from type III to type II\({}_{\infty}\), thus explaining the "renormalization" of the UV divergence in the generalized entropy in gravity. It would be nice to understand, in a similar vein, what effect these \(\frac{1}{N}\) corrections can have on the shock that we encountered, and what this means for the bulk spacetime. To this end, it would be satisfying to derive the shock from the more formal machinery of Tomita-Takesaki theory (see [1] for a review). The techniques in [37] may be of direct relevance. 
Finally, it would be nice to develop more tools to study the reflection operator introduced in this paper. This would have direct applications in several useful directions in AdS/CFT, such as bulk reconstruction and the complexity of the bulk-to-boundary map. ## Acknowledgements We thank Abhijit Gadde, Arjun Kar, Gautam Mandal, Shiraz Minwalla, Pratik Rath, Arvin Shahbazi-Moghaddam, Joan Simon, Jonathan Sorce, Sandip Trivedi and Mark Van Raamsdonk for helpful discussions and comments on the draft. We acknowledge support from the Department of Atomic Energy, Government of India, under project identification number RTI 4002.
2309.07770
Variational Quantum Linear Solver enhanced Quantum Support Vector Machine
Quantum Support Vector Machines (QSVM) play a vital role in using quantum resources for supervised machine learning tasks, such as classification. However, current methods are strongly limited in terms of scalability on Noisy Intermediate Scale Quantum (NISQ) devices. In this work, we propose a novel approach called the Variational Quantum Linear Solver (VQLS) enhanced QSVM. This is built upon our idea of utilizing the variational quantum linear solver to solve system of linear equations of a least squares-SVM on a NISQ device. The implementation of our approach is evaluated by an extensive series of numerical experiments with the Iris dataset, which consists of three distinct iris plant species. Based on this, we explore the practicality and effectiveness of our algorithm by constructing a classifier capable of classification in a feature space ranging from one to seven dimensions. Furthermore, by strategically exploiting both classical and quantum computing for various subroutines of our algorithm, we effectively mitigate practical challenges associated with the implementation. These include significant improvement in the trainability of the variational ansatz and notable reductions in run-time for cost calculations. Based on the numerical experiments, our approach exhibits the capability of identifying a separating hyperplane in an 8-dimensional feature space. Moreover, it consistently demonstrated strong performance across various instances with the same dataset.
Jianming Yi, Kalyani Suresh, Ali Moghiseh, Norbert Wehn
2023-09-14T14:59:58Z
http://arxiv.org/abs/2309.07770v1
# Variational Quantum Linear Solver enhanced Quantum Support Vector Machine ###### Abstract Quantum Support Vector Machines (QSVM) play a vital role in using quantum resources for supervised machine learning tasks, such as classification. However, current methods are strongly limited in terms of scalability on Noisy Intermediate Scale Quantum (NISQ) devices. In this work, we propose a novel approach called the Variational Quantum Linear Solver (VQLS) enhanced QSVM. This is built upon our idea of utilizing the variational quantum linear solver to solve system of linear equations of a least squares-SVM on a NISQ device. The implementation of our approach is evaluated by an extensive series of numerical experiments with the Iris dataset, which consists of three distinct iris plant species. Based on this, we explore the practicality and effectiveness of our algorithm by constructing a classifier capable of classification in a feature space ranging from one to seven dimensions. Furthermore, by strategically exploiting both classical and quantum computing for various subroutines of our algorithm, we effectively mitigate practical challenges associated with the implementation. These include significant improvement in the trainability of the variational ansatz and notable reductions in run-time for cost calculations. Based on the numerical experiments, our approach exhibits the capability of identifying a separating hyperplane in an 8-dimensional feature space. Moreover, it consistently demonstrated strong performance across various instances with the same dataset. ## I Introduction Support vector machines (SVMs) are one of the most renowned and widely used machine learning algorithms due to its ability to handle high dimensional data. It was initially formulated as a quadratic programming problem [1]. The primary task of an SVM is to construct a separating hyperplane that classifies data in the feature space. While SVMs are effective for many tasks, they might not be as scalable as some other methods, such as the least square formulation of SVM (LS-SVM), especially for large datasets [2]. The LS-SVM is a reformulation of SVM as a linear programming problem which is equivalent to solving a system of linear equations (SLEs), making it computationally easier [3]. Rebentrost et al. proposed a quantum version of LS-SVM, known as the QSVM [4]. This method successfully computes the inverse of the feature matrix by leveraging the principles of the HHL algorithm, coming from Harrow, Hassidim, and Lloyd (HHL) [5]. HHL is designed to efficiently solve SLEs and its computational complexity scales logarithmically with respect to the system size. However, the implementation of the HHL poses significant challenges when it comes to the efficient execution on the current Noisy Intermediate Scale Quantum (NISQ) devices. This is primarily due to the extensive demand of quantum resources such as circuit depth and large number of gates which are restricted due to quantum noise and decoherence. Additionally, QSVM [4] requires that the training data is prepared as a coherent superposition and provided as an input to the quantum hardware for computing the inverse of the kernel matrix, thus making it a plausible algorithm only when implemented on a fault tolerant, large scale quantum computer. Thus, quantum classical hybrid algorithms are being developed that are capable of efficiently solving a task partially on a quantum computer. 
Variational hybrid quantum-classical algorithms (VHQCAs) are a class of such hybrid algorithms, where a task is partially solved using a quantum subroutine and additionally involves classical pre- or post-processing methods. They have been used to solve a variety of physical problems ranging from quantum chemistry to quantum machine learning [6, 7]. The general idea of VHQCAs is to use shallow quantum circuits for quantum subroutines combined with classical post-processing or optimization techniques. In 2019, Havlicek et al. proposed a variational approach, where the authors estimated the kernel function on a quantum computer and subsequently optimized a classical SVM on the classical computer [8]. However, this approach was assessed using a small toy dataset with just two features. Similar ideas were explored applying different classical optimization procedures based on gradient descent [9] and a regularized Newton method [10]. QSVM has been realized experimentally on quantum hardware limited to two features [11]. Hence, this leaves an unexplored research area regarding the performance and practical scalability of QSVM when applied to larger-scale, real-world problems on NISQ hardware. This motivates our investigation presented henceforth. We propose a novel approach within the realm of QSVM, the Variational Quantum Linear Solver enhanced QSVM (VQLS-enhanced QSVM). A pictorial representation of our algorithm is presented in Fig. 1. The idea of VQLS was proposed by Bravo-Prieto et al. [12] as a hybrid quantum-classical algorithm, designed to solve SLEs with a polylogarithmic scaling in problem size. VQLS has proven to be effectively scalable on NISQ devices for large problem sizes given a well-conditioned, sparse matrix. However, to the best of our current knowledge, the practicality and effectiveness of VQLS for solving SLEs with dense matrices derived from real-world datasets has not yet been investigated. To this end, we develop a classifier from the VQLS-enhanced QSVM. We then evaluate its performance by conducting an extensive series of numerical experiments using the Iris dataset [13]. These experiments were executed on IBM-Q simulators [14] in a noise-free environment. In this paper, we analyze the numerical results of our experiments and present strategies to mitigate the hurdles of utilizing the VQLS-based QSVM for real-world applications. Based on the numerical analysis of our experiments, our VQLS-enhanced QSVM succeeded in identifying optimal hyperplane parameters within an 8-dimensional feature space. This is further supported by the construction of a support vector classifier (SVC) and the subsequent evaluation of its classification accuracy. The paper is structured as follows: In Sec. II, we briefly discuss the theory of SVMs and VQLS. In Sec. III, we present our approach of combining the two ideas. Sec. IV presents results and discussion, and finally conclusions are in Sec. V. ## II Theoretical Preliminaries ### _Support Vector Machines_ SVMs have long been a cornerstone of classical supervised machine learning, serving as a powerful tool for data classification in feature spaces [1]. An SVM constructs a separating hyperplane that classifies data, illustrated in Fig. 1. An SVM is a quadratic programming problem, and the least squares formulation in [3] proposes a method to obtain its parameters via solving an SLE. Fig. 1: Pictorial representation of VQLS enhanced QSVM. In this section, we discuss briefly the least squares formulation of SVMs (LS-SVM).
Given the tuple \(\{y_{k},\vec{x}_{k}\}_{k=1}^{N}\) as the training set of \(N\) data points, the weights are given by \(\vec{w}\) and the offset by \(d\). The function \(\varphi(\circ)\) is a map from the input vector space spanned by the training data to a higher dimensional space where classification is possible. Solving an SVM and finding the parameters for constructing the optimal hyperplane can be reformulated as an optimization problem with variables \(\eta_{k}\)[3] in the following way: \[\min_{\vec{w},\eta_{k}}\mathscr{J}(\vec{w},\eta_{k})=\frac{1}{2}\vec{w}^{T} \vec{w}+c\sum_{k=1}^{N}\eta_{k}. \tag{1}\] In which case, the separating hyperplane takes the form: \[\begin{split} y_{k}[\vec{w}^{T}\varphi(\vec{x}_{k})+d]\geq 1- \eta_{k},\\ \eta_{k}\geq 0,\qquad k=1,\ldots,N.\end{split} \tag{2}\] In [3], the least squares version is introduced as \[\min_{\vec{w},d,\vec{e}}\mathscr{I}(\vec{w},d,\vec{e})=\frac{1}{2}\vec{w}^{T} \vec{w}+\gamma\sum_{k=1}^{N}e_{k}^{2}, \tag{3}\] where \(e_{k}\) corresponds to a set of slack variables which are inserted to get an equality sign instead of inequality in Eq. (2). Here, the separating hyperplane takes the form: \[y_{k}[\vec{w}^{T}\varphi(\vec{x}_{k})+d]=1-e_{k},\qquad k=1,\ldots,N, \tag{4}\] where \(\gamma\) is a tunable hyperparameter. The optimization Lagrangian takes the form: \[\mathscr{L}(\vec{w},d,\vec{e};\vec{\theta})=\mathscr{I}(\vec{w},d,\vec{e})- \sum_{i=1}^{N}\theta_{i}(\vec{w}^{T}\varphi(\vec{x_{i}})+d+e_{i}-y_{i}), \tag{5}\] where \(\vec{\theta}\) are the Lagrange multipliers. Optimality conditions correspond to the linear system defined in [3]: \[\begin{pmatrix}0&\vec{\Gamma}^{T}\\ \vec{\Gamma}&X^{T}X+\gamma^{-1}\mathds{1}\end{pmatrix}\begin{pmatrix}d\\ \vec{\theta}\end{pmatrix}=\begin{pmatrix}0\\ \vec{y}\end{pmatrix}. \tag{6}\] Here \(\vec{\Gamma}=[1,\ldots,1]^{T}\) is a column vector of dimension \(N\) and \(\mathds{1}\) is the \(N\)-dimensional identity matrix in the canonical basis. Once the hyperparameters such as \(\gamma\) are fixed, the LS-SVM classifier is evaluated using the test data [3]: \[\hat{y}(\vec{x})=\vec{w}^{T}\varphi(\vec{x})+d=\sum_{i=1}^{N}\theta_{i}\varphi (\vec{x}_{i})^{T}\varphi(\vec{x})+d. \tag{7}\] ### _Variational Quantum Linear Solver_ In this section, we summarize the essentials of the algorithm from [12] solving SLEs by a variational approach. VQLS takes the following inputs: the state \(\ket{b}\), the matrix representation of \(A\) and the set of \(\{\alpha_{i}\}\) as the initial set of parameters. For state initialization, there is a unitary operator that is able to efficiently execute \(U\ket{0}=\ket{b}\) as a quantum circuit [15]. And the given matrix \(A\) is decomposed into a linear combination of unitary matrices, \[A=\sum_{l=0}^{N}c_{l}A_{l}. \tag{8}\] It is imperative that the condition number \(\kappa\) of \(A\) is finite, \(\|A\|\leq 1\), and the unitary \(A_{l}\) can be efficiently implemented by a quantum circuit. Generally, for qubit systems, \(A_{l}\) can be further decomposed as a combination of Pauli strings \(P_{l}\), where \(P_{l}\in\{\mathds{1},X,Y,Z\}^{\otimes N}\). #### Ii-B1 Variational Ansatz The solution state \(\ket{x}\) is prepared by a quantum circuit as \(\ket{x}=V(\alpha)\ket{0}\), where \(V(\alpha)\) is a sequence of parameterized quantum gates for the chosen ansatz. The cost function \(C(\alpha)\) is computed in the same circuit to estimate the overlap between \(A\ket{x}\) and \(\ket{b}\). 
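Before turning to specific ansatz choices, Eqs. (6)–(8) can be made concrete with a short classical sketch (ours; toy data and a linear kernel \(\varphi(x)=x\); the exact classical solve only serves as a reference point for the quantum pipeline):

```
import numpy as np

# Build the (N+1) x (N+1) kernel system of Eq. (6), solve it exactly, and note how it
# would be handed to VQLS via the decomposition of Eq. (8).
rng = np.random.default_rng(0)
N, gamma = 7, 10.0                                   # seven samples -> an 8 x 8 system (3 qubits)
X = rng.random((N, 4))                               # rows = samples, four Iris-like features
y = np.array([1, 1, 1, 1, -1, -1, -1], dtype=float)
X[:4] += 1.0                                         # shift one class so the toy problem is separable

K = np.zeros((N + 1, N + 1))
K[0, 1:] = 1.0
K[1:, 0] = 1.0
K[1:, 1:] = X @ X.T + np.eye(N) / gamma              # Gram matrix plus gamma^{-1} identity, Eq. (6)
rhs = np.concatenate(([0.0], y))

sol = np.linalg.solve(K, rhs)                        # [d, theta_1, ..., theta_N]
d, theta = sol[0], sol[1:]
y_hat = np.sign(X @ (X.T @ theta) + d)               # LS-SVM classifier of Eq. (7) on the training set
print("training accuracy:", np.mean(y_hat == y))

# For the quantum solver, K is rescaled so that ||A|| <= 1 and expanded as in Eq. (8),
# A = sum_l c_l A_l with c_P = Tr(P A) / 2^3 over the 3-qubit Pauli strings; for a real
# symmetric A only strings with an even number of Y factors survive (at most 36 of 64).
```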
A popular choice is the hardware efficient ansatz [6] from the family of fixed layer ansatz. However, it is known to be hard to train [16, 17]. An overview of different ansatze is presented in [18]. #### Ii-B2 Cost Functions The global cost function is defined in [12] as: \[\begin{split} C_{global}=\frac{1}{\bra{\psi}\ket{\psi}}\left[ \bra{x}A^{\dagger}(\mathds{1}-\ket{b}\bra{b})A\ket{x}\right]\\ =1-\frac{\abs{\bra{b}\ket{\psi}}^{2}}{\bra{\psi}\ket{\psi}}, \end{split} \tag{9}\] where \(\ket{\psi}=A\ket{x}\). Alternatively, a local cost function is proposed in [12] which is resilient to Barren plateaus for large system sizes [17], as \(n\) grows. The cost functions are computed in the variational circuit by using the Hadamard test or the Hadamard overlap test. In terms of minimizing the number of controlled operations, the Hadamard overlap test is preferred at the expense of increasing the number of qubits in the quantum circuit. In this work, the values of quantities \(\bra{\psi}\ket{\psi}\) and \(\abs{\bra{\psi}}^{2}\) are determined by using the Hadamard test. The first component is equivalent to computing [12]: \[\bra{\psi}\ket{\psi}=\sum_{m}\sum_{n}c_{m}^{\star}c_{n}\bra{0}V(\alpha)^{ \dagger}A_{m}^{\dagger}A_{n}V(\alpha)\ket{0} \tag{10}\] Each term of the form \(\bra{0}V(\alpha)^{\dagger}A_{m}^{\dagger}A_{n}V(\alpha)\ket{0}\) inside the sum of Eq. (10) is evaluated by controlled execution of \(A_{m}^{\dagger}\) and \(A_{n}\). The implementation of a quantum circuit for this term is presented in Fig. 2. Similarly, the computation of the second component is given by [12], \[\abs{\bra{b}\ket{\psi}}^{2}=\sum_{m}\sum_{n}c_{m}^{\star}c_{n}\bra{0}U^{ \dagger}A_{n}V(\alpha)\ket{0}\bra{0}V(\alpha)^{\dagger}A_{m}^{\dagger}U\ket{0} \tag{11}\] Here, the implementation of two inner products \(\bra{0}U^{\dagger}A_{n}V(\alpha)\ket{0}\) and \(\bra{0}V(\alpha)^{\dagger}A_{m}^{\dagger}U\ket{0}\) inside the sum requires two more controlled operations of \(U\), \(V(\alpha)\) with \(A_{n}\) and \(A_{m}^{\dagger}\). Fig. 3 illustrates the implementation of the term \(\bra{0}V(\alpha)^{\dagger}A_{m}^{\dagger}U\ket{0}\). #### Ii-A3 Classical Optimization To obtain an optimal set of parameters \(\{\alpha_{i}^{opt}\}\), a classical optimizer is necessary. In [12], gradient based optimization is used. In this work, we use gradient free optimizer, specifically, cobyla[19]. A comparison between different optimization methods for hybrid quantum classical variational algorithms is presented in [20, 21]. ## III Algorithm We take advantage of VQLS to solve Eq. (6), extract parameters \(\{\alpha_{i}^{opt}\}\) to estimate the solution state \(\ket{x}\), and construct a separating hyperplane. This hyperplane is further used for the classification of the samples in test dataset. A pictorial representation of our algorithm is presented in Fig. 1. Further specifications about the execution are discussed in this section. Additionally, the pseudo-code for our novel VQLS-enhanced QSVM algorithm is presented in Algorithm 1 and 2. ### _Dataset_ In this work, we use the Iris dataset [13] to evaluate the effectiveness and feasibility of our algorithm. It contains 50 examples for each of the three distinct iris plant species, _Setosa_, _Virginica_, and _Versicolor_. Each sample is composed of four distinct attributes: sepal length, sepal width, petal length, and petal width, all quantified in centimeters. For our numerical experiments, two species, _Setosa_ and _Virginica_ have been selected. 
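For concreteness, the binary subset and a seven-sample training split of the kind described next can be extracted along these lines (a small sketch; the concrete samples used in the paper are listed in its Appendix A and are not reproduced here):

```
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
mask = np.isin(iris.target, [0, 2])              # 0 = Setosa, 2 = Virginica (Versicolor is dropped)
X = iris.data[mask]                              # sepal/petal length and width, in centimetres
y = np.where(iris.target[mask] == 0, 1, -1)      # map the two species to labels +1 / -1 (our convention)

rng = np.random.default_rng(0)
pos = rng.choice(np.where(y == 1)[0], size=4, replace=False)
neg = rng.choice(np.where(y == -1)[0], size=3, replace=False)
train_idx = np.concatenate([pos, neg])           # 4 + 3 = 7 training samples
X_train, y_train = X[train_idx], y[train_idx]
test_idx = np.setdiff1d(np.arange(len(X)), train_idx)
X_test, y_test = X[test_idx], y[test_idx]        # remaining samples for validation
```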
From these two species, a total of seven samples have been chosen randomly for the training dataset. Table. II in Appendix A presents a concise overview of a single instance of the utilized training dataset. ### _Data preprocessing and construction of kernel model_ In order to prevent a particular feature from dominating the others due to its large magnitude, a data normalization technique known as linear scaling has been applied in our work, so that they all fall within the range of \([0,1]\). It is worth highlighting that normalization significantly influences the trainability of variational ansatz, as detailed in Appendix B. The normalization for a feature \(x^{j}\) is given by: \[x^{j}_{norm}=\frac{x^{j}(i)-x^{j}_{min}}{x^{j}_{max}-x^{j}_{min}}, \tag{12}\] where \(i\) is the index of training samples. The representation of the kernel matrix is formulated in Eq. (6). The dimension of the kernel matrix \(K\) is \((N+1)\times(N+1)\), where \(N\) is the number of samples in the training dataset. The presence of an additional row and column is a consequence of the non-zero offset \(d\). In the context of the linear equation \(A\vec{x}=\vec{b}\), the kernel matrix \(K\) corresponds to the matrix \(A\). In designing hybrid quantum classical algorithms executed on current quantum hardware effectively, it is important to strategically distribute different parts of our algorithm on different computing platforms. For this reason, we use SVD prior to Pauli decomposition to reduce the number of controlled components of the kernel matrix, subsequently reducing the hard part of the calculation of the cost function. It is worthwhile to note that the Pauli decomposition in [12] is executed on a classical computer as a one-time preprocessing step. Although there exists efficient methods to simulate such decomposition on a quantum computer [22, 23], the comparison of resource Fig. 3: Quantum circuit for the computation of \(\bra{0}V(\alpha)^{\dagger}A_{m}^{\dagger}U\ket{0}\). The additional auxiliary qubit is present to facilitate the execution of the CCZ gate. Fig. 2: Quantum circuit for the computation of \(\bra{\psi}\psi\). The circuit consists of the variational block and is followed by the controlled components. ``` 0: Feature samples \(X_{train}=\{\vec{x}_{1},\cdots,\vec{x}_{N}\}\) and feature labels \(\vec{y}_{train}=\{y_{1},\cdots,y_{N}\}\) Output: A set of optimal parameters \(\alpha^{opt}\) Normalize \(X_{train}\) to \(\hat{X}_{train}\) (Eq. (12)) Construct the kernel matrix \(K\) (Eq. (6)) Decompose the kernel matrix \(K\) into \(\sum_{l=0}^{N}c_{l}A_{l}\) (Eq. (8)) Initialize iteration \(i=0\), the stop criterion \(\epsilon=0.01\), cost value \(C=1\), the maximum number of iterations \(maxIteration=300\) and initial parameters of parameterized quantum gates \(\alpha^{i}\) \(numIteration=0\) while\(C>\epsilon\) or \(numIteration<maxIteration\)do \(sum1=0\) for\(A_{m}\) in \(\{A_{1},A_{2},\cdots,A_{N}\}\)do for\(A_{n}\) in \(\{A_{1},A_{2},\cdots,A_{N}\}\)do Construct the first quantum circuit (Fig. 
2) Execute the circuit with \(shots=10000\) Measure the ancillary qubit \(q_{a}\) Compute \(p_{q_{a}}(0))-p_{q_{a}}(\ket{1})\) to obtain \(\bra{0}V(\alpha)^{\dagger}A_{m}^{\dagger}A_{n}V(\alpha)\ket{0}\) \(sum1+=c_{m}^{\dagger}c_{n}\bra{0}V(\alpha)^{\dagger}A_{m}^{\dagger}A_{n}V( \alpha)\ket{0}\) endfor endfor \(sum2=0\) for\(A_{m}\) in \(\{A_{1},A_{2},\cdots,A_{N}\}\)do for\(A_{n}\) in \(\{A_{1},A_{2},\cdots,A_{N}\}\)do Construct the second quantum circuit for computing the inner product of \(\bra{0}V^{\dagger}A_{n}V(\alpha)\ket{0}\) (Fig. 3) Execute the circuit with \(shots=10000\) Measure the ancillary qubit \(q_{a}\) Compute \(p_{q_{a}}(\ket{0})-p_{q_{a}}(\ket{1})\) to obtain the value of \(\bra{0}U^{\dagger}A_{n}V(\alpha)\ket{0}\) Again construct the second quantum circuit for computing the inner product of \(\bra{0}V(\alpha)^{\dagger}A_{m}^{\dagger}U\ket{0}\) Execute the circuit with \(shots=10000\) Measure the ancillary qubit \(q_{a}\) Compute \(p_{q_{a}}(\ket{0})-p_{q_{a}}(\ket{1})\) to obtain \(\bra{0}V(\alpha)^{\dagger}A_{m}^{\dagger}U\ket{0}\) \(sum2+=c_{m}^{\dagger}c_{n}\bra{0}U^{\dagger}A_{n}V(\alpha)\ket{0}\bra{0}V( \alpha)^{\dagger}A_{m}^{\dagger}U\ket{0}\) endfor endfor \(\frac{\ket{\bra{\bra{\bra{\bra{\bra{\bra{\bra{\bra{\bra{\bra{\brabra{\ #### Iii-B3 Construction and validation of SVC The set of optimal parameters \(\{\alpha^{opt}\}\) obtained through Algorithm 1 is delivered to initialize the hardware-efficient ansatz, allowing us to estimate the vector \(\vec{\theta}\) after measurement. The measured probabilities of each basis state in the statevector indicate the weights of \(\vec{x}_{k}\), where \(k=1,\cdots,N\), used in constructing the SVC. Since we obtained only the normalized statevector from the quantum subroutine, an additional machinery is required to estimate its actual magnitude. Therefore, we employ linear regression to estimate both \(d\) and \(\|\vec{\theta}\|\). Algorithm 2 shows the pseudocode, which was used for construction and validation of the SVC. ``` 0: A set of optimal parameters \(\{\alpha^{opt}_{i}\}\) 0: The accuracy of SVC in the validation dataset Construct the Hardware-efficient ansatz \(V(\alpha)\) (Fig. 4) \(V(\alpha^{opt})\leftarrow\) Initialize the Ansatz with the optimal parameters \(\{\alpha^{opt}_{i}\}\) \(|x_{out}\rangle\leftarrow\) Measure all qubits \(\vec{\theta}^{\prime}=\frac{\vec{\theta}}{\|\vec{\theta}\|}\leftarrow|x_{out}\rangle\), where \(\|\vec{\theta}\|\) is unknown \(\vec{w}^{\prime}\leftarrow\sum\limits_{i=1}^{N}\theta^{\prime}_{i}\cdot\vec{x} _{i}\) \(e^{\prime}_{i}\leftarrow\frac{\vec{\theta}^{\prime}_{i}\cdot\vec{w}}{\gamma}\) \(\bar{d},\|\vec{\theta}\|\leftarrow LinearRegression(\forall i:y_{i}-y_{i}e^{ \prime}_{i}-\vec{w}^{\prime T}\vec{x}_{i}-d=0)\) \(\bar{\theta}\leftarrow\|\vec{\theta}\|\cdot\vec{\theta}^{\prime}\) \(\vec{w}\leftarrow\|\vec{\theta}\|\cdot\vec{w}^{\prime}\) SVC: \(\hat{y}=\begin{cases}1&\text{if }\vec{w}^{T}\vec{x}+\tilde{d}\geq 0\\ -1&\text{if }\vec{w}^{T}\vec{x}+\tilde{d}<0\end{cases}\) return\(\hat{y}\) ``` **Algorithm 2** SVC ## IV Results In this section, we discuss the results of our numerical experiments, aiming to evaluate the performance of our VQLS-enhanced QSVM algorithm. In our work, we use the three qubit VQLS model and the size of the kernel matrix is \(8\times 8\). 
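Before discussing the numbers, a compact classical sketch may help fix what Algorithms 1 and 2 compute in this three-qubit, \(8\times 8\) setting. It uses exact statevector arithmetic in place of the Hadamard-test circuits of Figs. 2–3, a deliberately shallow Ry/CNOT ansatz, our own toy data, and our reading of the regression step of Algorithm 2; it is an illustration, not the implementation used in the paper:

```
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, gamma = 7, 10.0
X = rng.random((N, 4)); X[:4] += 1.0                 # toy, separable data
y = np.array([1, 1, 1, 1, -1, -1, -1], dtype=float)

K = np.zeros((N + 1, N + 1))
K[0, 1:] = 1.0
K[1:, 0] = 1.0
K[1:, 1:] = X @ X.T + np.eye(N) / gamma              # kernel system of Eq. (6)
A = K / np.linalg.norm(K, 2)                         # rescale so that ||A|| <= 1
b = np.concatenate(([0.0], y)); b = b / np.linalg.norm(b)

I2 = np.eye(2)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)], [np.sin(t / 2), np.cos(t / 2)]])

def V(alpha):                                        # shallow hardware-efficient-style ansatz, 3 qubits
    rot = lambda a: np.kron(np.kron(ry(a[0]), ry(a[1])), ry(a[2]))
    ent = np.kron(I2, CX) @ np.kron(CX, I2)          # CNOT ladder
    return rot(alpha[6:9]) @ ent @ rot(alpha[3:6]) @ ent @ rot(alpha[0:3])

def cost(alpha):                                     # global cost of Eq. (9), exact statevector
    psi = A @ V(alpha)[:, 0]
    return 1.0 - abs(np.vdot(b, psi)) ** 2 / np.vdot(psi, psi).real

res = minimize(cost, rng.uniform(0, 2 * np.pi, 9), method="COBYLA", options={"maxiter": 300})
x_out = V(res.x)[:, 0]                               # normalized candidate solution ~ [d, theta] / norm
print("final cost:", res.fun)

# Post-processing in the spirit of Algorithm 2 (our reading): recover the overall scale and
# the offset d from the one-feature regression  y_i ~ scale * (w'.x_i + theta'_i / gamma) + d.
theta_p = x_out[1:]
w_p = X.T @ theta_p
z = X @ w_p + theta_p / gamma
scale, d = np.polyfit(z, y, 1)
y_hat = np.sign(scale * (X @ w_p) + d)
print("training accuracy:", np.mean(y_hat == y))
# With this shallow illustrative ansatz the optimisation is only approximate; the paper's
# hardware-efficient ansatz (Fig. 4) and repeated runs are used for the reported results.
```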
For the VQLS subroutine, we set the termination condition for the optimization routine as follows: either the program terminates at maximum iterations (\(=300\)) or if the cost value is the same for the last certain number of iterations. For this work,we use the IBM-Q _aer simulator_ and the optimizer cobyla for the classical optimization routine on our local computing resource [26]. In Sec. IV-A, we show how employing SVD prior to Pauli decomposition and solving an equivalent problem gives us an edge over merely using Pauli decomposition [12], in terms of convergence to a minimum and run-time. In the rest of our analysis, we include SVD as an element in the construction of the classifiers. In Sec. IV-B and Sec. IV-C, we use different datasets and different instances within a given dataset to derive SLEs and explore the consequent impact on the convergence of the cost function. This variation leads to SLEs with varying condition numbers, yielding insight into the behavior of VQLS in these cases. In Sec. IV-C, we also analyze the accuracy of classifiers constructed using the VQLS-enhanced QSVM, in comparison to the LS-SVM. ### _Impact of SVD on run-time and convergence_ Since VQLS shows promise in terms of scalability to larger systems in [12], it is crucial to reduce the total number of Pauli strings in Eq. (8) for the computation of the cost function in Eq. (9) and improve its trainability. As proposed in Sec. III-B, we replace the kernel matrix with its SVD component \(\Sigma\). By solving the new system of equations given by Eq. (14), we accelerate the convergence and enhance the trainability compared to the traditional method of using Pauli decomposition for the matrix \(A\) in the original problem in Eq. (13). In our experiment, the number of Pauli strings after decomposition for \(A\) and \(A_{new}\) are \(36\) and \(8\), respectively. Consequently, the total number of expectation values to be computed within the sum in Eqs. (10) and (11) is reduced. For example, when the number of terms in the decomposition is given by \(l\), we need \(l^{2}\) loops at most to compute the inner product in Eq. (11). In our case, this translates to \(1296\) (\(36^{2}\)) and \(64\) (\(8^{2}\)) loops for \(A\) and \(A_{new}\) respectively. This reduction significantly decreases the number of terms required to compute expectation values within Eqs. (10) and (11) and the run-time. The combination of SVD and Pauli decomposition reduces the system run-time to approximately one-sixteenth of what it would be used using the Pauli decomposition alone, when \(\left\langle\psi|\psi\right\rangle\) and \(\left|\left\langle b|\psi\right\rangle\right|^{2}\) in Eq. (9) are computed for our specific example. Fig. 5 illustrates the run-time for identifying an optimal set of parameters for the construction of the separating hyperplane when executed using only Pauli decomposition versus the combination of SVD and Pauli decomposition. We also note that recasting the problem into Eq. (14) yields a lower minimum of the cost function, indicating a possibly more accurate solution. Fig. 5 also compares the final cost minima. Notably, Bravo-Prieto et al. [12, Appendix A] discuss precision of the cost function computation and its dependence on sparsity. Specifically, for a \(d\)-sparse matrix, the discussion presented in [12] implies that the precision of the cost function computation is inversely proportional to \(d\). 
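The counting behind this reduction is easy to reproduce in a few lines. Since Eqs. (13)–(14) are not reproduced in this excerpt, the reading of the SVD step as solving the diagonal system \(\Sigma y=U^{T}b\) obtained from \(K=U\Sigma V^{T}\) is our own, chosen to be consistent with the 36-versus-8 string counts quoted above:

```
import numpy as np
from itertools import product

I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
paulis = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_coeffs(M):
    """Coefficients c_P = Tr(P M) / 8 of an 8x8 matrix in the 3-qubit Pauli basis, Eq. (8)."""
    out = {}
    for labels in product("IXYZ", repeat=3):
        P = np.kron(np.kron(paulis[labels[0]], paulis[labels[1]]), paulis[labels[2]])
        c = np.trace(P @ M) / 8
        if abs(c) > 1e-12:
            out["".join(labels)] = c
    return out

rng = np.random.default_rng(0)
K = rng.random((8, 8)); K = K + K.T                  # dense real symmetric kernel-like matrix
U, S, Vt = np.linalg.svd(K)                          # K = U diag(S) V^T

print(len(pauli_coeffs(K)))                          # 36 strings for a generic real symmetric K
print(len(pauli_coeffs(np.diag(S))))                 # 8 strings ({I,Z} on each qubit) for the diagonal factor
# Our reading of the SVD trick: instead of K [d, theta]^T = [0, y]^T one solves the 1-sparse
# system diag(S) z = U^T [0, y]^T and recovers [d, theta]^T = Vt.T @ z classically.
```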
Consequently, improving sparsity by solving for \(A_{new}\) instead of \(A\) improves the precision of the cost function calculation. For more details on the role of sparsity in solving SLEs with quantum algorithms, we refer to [5, 25]. ### _Influence of the condition number \(\kappa\) on the convergence of the cost function in VQLS_ We study the influence of parameter \(\kappa\) on the convergence of cost function numerically, varying the values of \(\kappa\). Numerical experiments are categorized into two parts based on the chosen dataset: toy dataset and the Iris dataset. #### Iv-B1 Results with toy dataset In this analysis, we randomly choose three different instances of data. The Pauli decomposition of the matrix contains two Pauli strings, III (\(\mathds{1}\otimes\mathds{1}\otimes\mathds{1}\)) and YYZ (\(Y\otimes Y\otimes Z\)). Each instance has two different sets of coefficients. Solving each of these SLEs demonstrates a clearer understanding of the impact of \(\kappa\) on the convergence of the cost function. Fig. 6 illustrates \(\kappa\)'s influence on convergence in three instances. Given the substantial impact of \(\kappa\) on the convergence of the cost function shown in Fig. 6, we further investigate the relationship between the number of Pauli strings in the decomposition of several matrices with similar condition number and the convergence of the cost function. This analysis involves four instances with the kernel matrix having \(10\), \(15\), \(20\), and \(36\) Pauli strings. The condition number of all these matrices is \(\kappa\approx 3\). The convergence of the cost function is illustrated in Fig. 7. For SLEs constructed with the toy dataset, VQLS is accurate when the kernel matrix is well conditioned. In such a situation, the number of Pauli strings in its decomposition does not play a major role. #### Iv-B2 Results with the Iris dataset In this section, we present numerical results that highlight the impact of the condition number \(\kappa\) on the convergence of the cost function, when utilizing the Iris dataset to evaluate our approach without the use of SVD. We extracted one instance of training dataset, including seven samples from _Setosa_ and _Virginica_, and generated five different kernel matrices using Eq. (6). The condition numbers of these kernel matrices are \(\kappa=5,10,19,144\) and \(721\), which is realized by adjusting the hyperparameter \(\gamma\) from Eq. (6). The results shown in Fig. 8 align nicely with those for the toy dataset (Fig. 6 in Sec. IV-B1). We observe that the use of SVD in preprocessing weakens the existing correlation between the condition number \(\kappa\) of the kernel matrix and the convergence of cost function in VQLS. To evaluate our approach's performance when utilizing the SVD, we conducted subsequent experiments Fig. 5: Run-time analysis for the convergence of the cost function for the matrices \(A\) and \(A_{new}\). The cost values start to converge after around \(30\) min for \(A_{new}\) compared to \(450\) min for \(A\) according to the system time. Additionally, the final cost value for \(A_{new}\) converged to a notably lower value of \(6\%\), in comparison to the \(24\%\) for \(A\). Fig. 6: Condition number \(\kappa\)’s influence on the convergence of the cost function in VQLS. It is noteworthy that the results obtained from instances associated with low condition numbers exhibit a better convergence in VQLS. Fig. 
7: Influence of the number of Pauli strings for a given condition number on the convergence of the cost function in VQLS. using the same five kernel matrices used previously in the analysis presented in Fig. 8. The numerical results demonstrate a lower cost minimum even under a high condition number \(\kappa\). This can be observed in Fig. 9. The weakening of correlation between the condition number and convergence of cost function due to inclusion of SVD is advantageous. This results in a better convergence at higher condition numbers and a significant enhancement in the trainability of variational ansatz. ### _Performance evaluation of SVC built with VQLS-enhanced QSVM_ In this analysis, we consider ten random instances of training sets from the Iris dataset. Four of them have \(\kappa\leq 10\), three fall within \(10<\kappa<100\) and three have \(\kappa\geq 100\). The classification hyperplanes for these ten instances are constructed using the VQLS-enhanced QSVM detailed in Sec. III. For accuracy validation, we compare the performances of QSVM-based and LS-SVM-based classifiers. The influence of different condition numbers of the kernel matrix, which are manipulated through \(\gamma\), is evident on the classifier accuracy as seen in Fig. 10. The final cost values are also plotted for these matrices alongside the accuracy. Furthermore, Table. IV in Appendix D shows the evaluation of classification performance employing a range of metrics. Furthermore, we repeated each of our numerical experiments five times to examine the stability. Table. I displays the experimental results for one instance. Based on the table, it is evident that the majority of the outcomes yields similar classification accuracy. Results of three more experiments for additional instances are reported in Appendix C. It is important to note that having a lower cost value does not inherently guarantee higher classification accuracy. This is due to the fact that a lower cost value does not guarantee an accurate solution in the case of VQLS [12]. Hence, it is important to include a verification step to validate the solution. ## V Conclusion and Outlook This work aims to identify an optimal set of parameters for constructing a classifier on a quantum computer. We then use this classifier to complete the classification tasks in supervised learning. This objective is realized by Fig. 8: Impact of \(\kappa\) on the convergence of the cost function without the SVD for the Iris dataset.The accuracy of the solution is attributed to lower cost minimum and is better for systems with a lower condition number in VQLS. Fig. 9: Impact of the condition number \(\kappa\) on the convergence with SVD for the Iris dataset. It is worth noting that VQLS demonstrates a notable convergence of cost function in the same instance with the in Fig. 8, even when dealing with matrices featuring a high \(\kappa\). utilizing our proposed hybrid quantum-classical algorithm on NISQ devices, named as the VQLS-enhanced QSVM. Additionally, we benchmarked this approach by examining the SVC with real-world data, the Iris dataset. The VQLS-enhanced QSVM is capable of robustly identifying a separating hyperplane that highly accurately classify samples in the test data. We note that SVD is crucial for minimizing the number of controlled unitaries applied during a Hadamard test. Hence, we applied SVD on the kernel matrix \(A\) in our numerical experiments. 
It significantly reduces the number of expectation values computed in one iteration, yielding faster and more accurate results. Furthermore, appropriately selecting the hyperparameter \(\gamma\) in Eq. (6), which enters the design of the kernel matrix, crucially influences both the trainability of the variational ansatze and the classification accuracy. The classifiers constructed using our approach exhibit strong performance for problems in which the kernel matrix \(A\) has a small condition number. This work can be further explored by employing noise models and executing numerical experiments on real quantum hardware. It is also worthwhile to investigate the scalability of the VQLS-based QSVM with increasing problem size. ## Acknowledgment We thank Katja Schladitz and Alexander Geng for helpful conversations regarding the manuscript. This work was funded by the Federal Ministry for Economic Affairs and Climate Action (German: Bundesministerium für Wirtschaft und Klimaschutz) under the project EniQmA with funding number 01MQ22007A.
2309.09985
Molecular Conformation Generation via Shifting Scores
Molecular conformation generation, a critical aspect of computational chemistry, involves producing the three-dimensional conformer geometry for a given molecule. Generating molecular conformation via diffusion requires learning to reverse a noising process. Diffusion on inter-atomic distances instead of conformation preserves SE(3)-equivalence and shows superior performance compared to alternative techniques, whereas related generative modelings are predominantly based upon heuristical assumptions. In response to this, we propose a novel molecular conformation generation approach driven by the observation that the disintegration of a molecule can be viewed as casting increasing force fields to its composing atoms, such that the distribution of the change of inter-atomic distance shifts from Gaussian to Maxwell-Boltzmann distribution. The corresponding generative modeling ensures a feasible inter-atomic distance geometry and exhibits time reversibility. Experimental results on molecular datasets demonstrate the advantages of the proposed shifting distribution compared to the state-of-the-art.
Zihan Zhou, Ruiying Liu, Chaolong Ying, Ruimao Zhang, Tianshu Yu
2023-09-12T07:39:43Z
http://arxiv.org/abs/2309.09985v2
# Molecular Conformation Generation ###### Abstract Molecular conformation generation, a critical aspect of computational chemistry, involves producing the three-dimensional conformer geometry for a given molecule. Generating molecular conformation via diffusion requires learning to reverse a noising process. Diffusion on inter-atomic distances instead of conformation preserves SE(3)-equivalence and shows superior performance compared to alternative techniques, whereas related generative modelings are predominantly based upon heuristical assumptions. In response to this, we propose a novel molecular conformation generation approach driven by the observation that the disintegration of a molecule can be viewed as casting increasing force fields to its composing atoms, such that the distribution of the change of inter-atomic distance shifts from Gaussian to Maxwell-Boltzmann distribution. The corresponding generative modeling ensures a feasible inter-atomic distance geometry and exhibits time reversibility. Experimental results on molecular datasets demonstrate the advantages of the proposed shifting distribution compared to the state-of-the-art. ## 1 Introduction The molecular conformation generation task constitutes a crucial and enabling aspect of numerous research pursuits, particularly in the study of molecular structures and their potential energy landscapes (Strodel, 2021). Traditional computational methods for this task rely on optimizing the free energy grounded on Schrodinger equation or density functional theory or its approximations (Griffiths & Schroeter, 2018; Tsujishita & Hirono, 1997; Labute, 2010), failing to find a good balance between complexity and quality. Recently, machine learning has emerged as a powerful and efficient tool to identify more stable and diverse conformations across an expanded chemical space (Xu et al., 2021; Ganea et al., 2021; Xu et al.; Jing et al.). However, such novel approaches give rise to some new challenges. One of the most significant challenges is incorporating the roto-translational equivariance (SE(3)-equivariance) intrinsic to the generation process. Recent works employ \(\mathrm{SE}(3)\)-equivariant molecular properties as proxies to render the model invariance. For instance, some studies focus on predicting torsional angles (Jing et al., 2021) or inter-atomic distances (Simm & Hernandez-Lobato, 2020; Xu et al.; Ganea et al., 2021), with the final conformation assembled through post-processing. Besides, Uni-Mol (Zhou et al., 2023a) predicts the delta coordinate positions based on atom pair representation to update coordinates. Other works leverage inter-atomic distances to directly predict coordinates using generative models (Xu et al., 2021; Xu et al., 2021; Zhu et al.). In parallel with these efforts, researchers have developed \(\mathrm{SE}(3)\)-equivariant graph neural networks (GNNs) to better characterize the geometry and topology of geometric graphs (Schutt et al., 2017; Satorras et al., 2021; Han et al., 2022). These GNNs serve as effective tools or backbones for molecular conformation generation (Jing et al., 2021; Ganea et al., 2021; Xu et al., 2021; Xu et al., 2021; Moogeboom et al., 2022). Following the previous works (Xu et al., 2021; Shi et al., 2021; Xu et al., 2021), our approach also seeks to encode SE(3)-equivariance from an inter-atomic distance perspective. 
To the best of our knowledge, existing works do not yet provide a systematic analysis of distance, often relying on a common or heuristic Gaussian assumption on distance changes (Xu et al., 2021). In this study, we conduct a thorough analysis of inter-atomic distances, drawing inspiration from physical atom motion phenomena. Specifically, we investigate the disintegration process of molecular structures and aim to learn how to reverse these processes for generating conformations. To this end, the disintegration of molecules can be viewed as being caused by the introduction of gradually increasing levels of perturbing force fields. We postulate that atoms within a molecule exhibit Brownian motion (Gaussian) under relatively small perturbing forces. When the forces are considerably large, chemical structures are disrupted, and the atoms are able to move without restrictions. In this stage, the atom speeds follow a Maxwell-Boltzmann distribution. Naturally, this can be connected to the distance distribution, in accordance with the escalation of perturbation intensity. See Fig. 1 for an overview. Figure 1: Demonstration of the diffusion process of SDDiff. As the Gaussian perturbation level on _atom coordinates_ increases, the distribution of _inter-atomic distances_ shifts from Gaussian to Maxwell-Boltzmann, which SDDiff learns to reverse. We thus put forth a precise estimation of the perturbed distance distribution through a closed-form shifting score function. Further, we propose a novel diffusion-based model named **SDDiff** (shifting distance diffusion) to reverse the force field to recover molecule conformations, leading to superior performance. Our main contributions are: * Inspired by molecule thermodynamics, we show that under the Gaussian perturbation kernel on molecular conformation, the distribution of relative speeds and the change of inter-atomic distances shift from a Gaussian to a Maxwell-Boltzmann distribution. * We propose a diffusion-based generative model, SDDiff, with a novel and closed-form shifting score kernel, with the mathematical support and empirical verification of its correctness. * Our method achieves state-of-the-art performance on two molecular conformation generation benchmarks, GEOM-Drugs (Axelrod and Gomez-Bombarelli, 2022) and GEOM-QM9 (Ramakrishnan et al., 2014). ## 2 Related work **Molecular conformation generation**. Learning-based techniques are increasingly employed for molecular conformation generation. An early attempt is GeoMol (Ganea et al., 2021), which predicts local 3D configurations and assembles them with heuristic rules. Instead, conformations can be holistically sampled by modeling either inter-atomic distances (Shi et al., 2021; Simm and Hernandez-Lobato, 2020) or atom coordinates (Xu et al., Zhu et al.). Recently, a rising interest has been observed in diffusion-based approaches (Shi et al., 2021; Xu et al., 2021; Jing et al., 2021), where the most related works to ours are ConfGF (Shi et al., 2021) and GeoDiff (Xu et al., 2021). ConfGF perturbs the distance and estimates the corresponding score, which is subsequently converted to the coordinate score via the chain rule. However, such a process may result in infeasible 3D geometry. GeoDiff perturbs coordinates instead and introduces an SE(3)-equivariant Markov kernel transiting the coordinate diffusion process to the distance process. However, this model's design is based on the assumption that the perturbed distance follows a Gaussian distribution.
This heuristic assumption can lead to mismatches and inaccuracy. **Diffusion-based generative models**. Denoising diffusion probabilistic models (DDPMs) (Ho et al., 2020) delineate a Markov chain of diffusion steps to add random noise to data and subsequently learn to invert the diffusion process for generating desired data samples. Analogous to DDPM, the score matching with Langevin dynamics (SMLD) models (Song and Ermon, 2019, 2020) train noise conditional score networks (NCSN) that approximate the score function of the dataset and apply stochastic gradient Langevin dynamics to approximate the data distribution. The above two models can be unified under the context of stochastic differential equations (SDEs) (Song et al., 2020). The denoising diffusion implicit model (DDIM) (Song et al., 2020) has a controllable sampling stochasticity, allowing the generation of higher-quality samples with fewer steps. The latent diffusion model (LDM) (Rombach et al., 2022) is another accelerated sampler, obtained by running the diffusion process in a latent space. \(\mathrm{SE}(3)\) **Neural Networks**. The Euclidean group, denoted as \(\mathrm{SE}(3)\) or \(\mathrm{E}(3)\) when including reflections, represents a group of symmetries in 3D translation and rotation. Due to the geometric symmetry of molecules, incorporating this property in feature backbones is essential. One typical line of research is related to GNNs. SchNet (Schutt et al., 2017) is an \(\mathrm{E}(n)\)-invariant network for modeling quantum interactions in molecules. \(\mathrm{E}(n)\)-Equivariant Graph Neural Networks (EGNNs) (Satorras et al., 2021) are \(\mathrm{E}(n)\)-equivariant GNNs that do not rely on computationally expensive higher-order representations in intermediate layers. A hierarchy-based GNN named Equivariant Hierarchy-based Graph Networks (EGHNs) (Han et al., 2022) can increase the expressivity of message passing, which is also guaranteed to be \(\mathrm{E}(3)\)-equivariant to meet the physical symmetry. Another related line of research is not restricted to the message-passing paradigm (Gilmer et al., 2017). Some existing works (Thomas et al., 2018; Fuchs et al., 2020) utilize spherical harmonics to compute a basis for the transformations, which preserves \(\mathrm{SE}(3)\)-equivariance. ## 3 Background ### Molecular conformation generation The generation of molecular conformation can be regarded as a generative problem conditioned on a molecular graph. For a given molecular graph, it is required to draw independent and identically distributed (i.i.d.) samples from the conditional probability distribution \(p(\mathcal{C}|G)\), in which \(p\) adheres to the underlying Boltzmann distribution (Noe et al., 2019), while \(\mathcal{C}\) and \(G\) signify the conformation and formula of the molecule, respectively. Formally, each molecule is depicted as an undirected graph \(G=(V,E)\), with \(V\) representing the set of atoms within the molecule and \(E\) denoting the set of inter-atomic chemical bonds, as well as the corresponding node features \(\mathbf{h}_{v}\in\mathbb{R}^{f},\forall v\in V\) and edge features \(\mathbf{e}_{uv}\in\mathbb{R}^{f^{\prime}},\forall(u,v)\in E\) representing atom types, formal charges, bond types, etc.
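For concreteness, the objects just introduced can be held in a few small containers; the sketch below is only illustrative (the names are ours, not from any released implementation), but it makes explicit what a sample from \(p(\mathcal{C}|G)\) carries and how per-edge distances are read off a conformation.

```python
# Minimal, illustrative containers for the setup above (field names are ours):
# a molecular graph G = (V, E) with node/edge features, and a conformation
# C in R^{n x 3} from which per-edge inter-atomic distances are computed.
from dataclasses import dataclass
import numpy as np

@dataclass
class MolecularGraph:
    node_features: np.ndarray   # shape (n, f): atom types, formal charges, ...
    edge_index: np.ndarray      # shape (m, 2): bonded pairs (u, v)
    edge_features: np.ndarray   # shape (m, f'): bond types, ...

@dataclass
class Conformation:
    coords: np.ndarray          # shape (n, 3): one sample from p(C | G)

    def edge_distances(self, edge_index):
        """d_uv = ||c_u - c_v|| for every edge (u, v)."""
        u, v = edge_index[:, 0], edge_index[:, 1]
        return np.linalg.norm(self.coords[u] - self.coords[v], axis=-1)

# Toy usage: a three-atom molecule with two bonds.
G = MolecularGraph(node_features=np.eye(3),
                   edge_index=np.array([[0, 1], [1, 2]]),
                   edge_features=np.ones((2, 1)))
conf = Conformation(coords=np.array([[0.0, 0.0, 0.0],
                                     [1.0, 0.0, 0.0],
                                     [1.0, 1.0, 0.0]]))
print(conf.edge_distances(G.edge_index))   # [1. 1.]
```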
To simplify the notation, the set of atoms \(V\) in 3D Euclidean space is expressed as \(\mathcal{C}=[\mathbf{c}_{1},\mathbf{c}_{2},\cdots,\mathbf{c}_{\mathbf{n}}]\in\mathbb{R}^{n\times 3}\), and the 3D distance between nodes \(u\) and \(v\) is denoted as \(d_{uv}=||\mathbf{c}_{\mathbf{u}}-\mathbf{c}_{\mathbf{v}}||\). A generative model \(p_{\theta}(\mathcal{C}|G)\) is developed to approximate the Boltzmann distribution. ### Equivariance in molecular conformation Equivariance under translation and rotation (\(\mathrm{SE}(3)\) groups) exhibits multidisciplinary relevance in a variety of physical systems, hence plays a central role when modeling and analyzing 3D geometry (Thomas et al., 2018; Weiler et al., 2018; Chmiela et al., 2019; Fuchs et al., 2020; Miller et al., 2020; Simm et al., 2020; Batzner et al., 2022). Mathematically, a model \(\mathbf{s}_{\theta}\) is said to be equivariant with respect to the \(\mathrm{SE}(3)\) group if \(\mathbf{s}_{\theta}(T_{f}(\mathbf{x}))=T_{g}(\mathbf{s}_{\theta}(\mathbf{x}))\) for any transformation \(f\in\mathrm{SE}(3)\) and its corresponding \(g\in\mathrm{SE}(3)\). Utilizing conformational representations directly to achieve equivariance presents challenges in accurately capturing the chemical interactions between atoms. Consequently, this approach may result in the generation of molecular structures with inaccuracies and poor configurations. An alternative approach is to use the inter-atomic distance that is naturally equivariant to \(\mathrm{SE}(3)\) groups (Shi et al., 2021; Xu et al., 2021; Gasteiger et al., 2020), which will be further introduced in Sec. 4.2. ### Learning via score matching **Langevin dynamics**. Given a fixed step size \(0<\epsilon\ll 1\), take \(\mathbf{x}_{0}\sim\pi(\mathbf{x})\) for some prior distribution and use the Euler-Maruyama method for simulating the Langevin dynamics \[\mathbf{x}_{t}=\mathbf{x}_{t-1}+\frac{\epsilon}{2}\nabla_{\mathbf{x}}\log p\left(\mathbf{x}_{t-1}\right)+\sqrt{\epsilon}\mathbf{z}_{t}, \tag{1}\] where \(\mathbf{z}_{t}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). As \(t\rightarrow\infty\), \(\mathbf{x}_{t}\) can be considered as a sample drawn from \(p(\mathbf{x})\) under some regularity conditions (Welling and Teh, 2011). This implies that if we know the score function \(\nabla_{\mathbf{x}}\log p\left(\mathbf{x}\right)\), we can use Langevin dynamics to sample from \(p(\mathbf{x})\). **Denoising score matching**. The process of denoising score matching (Vincent, 2011) involves the perturbation of data \(\mathbf{x}\) in accordance with a predetermined perturbing kernel, denoted by \(q_{\sigma}(\tilde{\mathbf{x}}|\mathbf{x})\). The \(\mathbf{s}_{\boldsymbol{\theta}}\) that minimizes the following objective: \[\frac{1}{2}\mathbb{E}_{q_{\sigma}(\tilde{\mathbf{x}}|\mathbf{x})p_{\text{data}}(\mathbf{x})}\left[\left\|\mathbf{s}_{\boldsymbol{\theta}}(\tilde{\mathbf{x}})-\nabla_{\tilde{\mathbf{x}}}\log q_{\sigma}(\tilde{\mathbf{x}}\mid\mathbf{x})\right\|_{2}^{2}\right] \tag{2}\] satisfies \(\mathbf{s}_{\boldsymbol{\theta}^{*}}(\mathbf{x})=\nabla_{\mathbf{x}}\log q_{\sigma}(\mathbf{x})\) almost surely (Vincent, 2011).
This implies that to train a denoising model \(\mathbf{s}_{\boldsymbol{\theta}}\), we can set the loss function to be \[\mathcal{L}\left(\mathbf{s}_{\boldsymbol{\theta}};\left\{\sigma_{i}\right\}_{i=1}^{L}\right) \triangleq\frac{1}{L}\sum_{i=1}^{L}\lambda\left(\sigma_{i}\right)\ell\left(\mathbf{s}_{\boldsymbol{\theta}};\sigma_{i}\right) \tag{3}\] \[\ell(\mathbf{s}_{\boldsymbol{\theta}};\sigma) \triangleq\frac{1}{2}\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\mathbb{E}_{\tilde{\mathbf{x}}\sim q_{\sigma}(\tilde{\mathbf{x}}|\mathbf{x})}\left\|\mathbf{s}_{\boldsymbol{\theta}}(\tilde{\mathbf{x}},\sigma)-\nabla_{\tilde{\mathbf{x}}}\log q_{\sigma}\left(\tilde{\mathbf{x}}|\mathbf{x}\right)\right\|_{2}^{2}, \tag{4}\] where \(\lambda(\sigma)\propto 1/\mathbb{E}\left[\left\|\nabla_{\tilde{\mathbf{x}}}\log p_{\sigma}(\tilde{\mathbf{x}}\mid\mathbf{x})\right\|_{2}^{2}\right]\) is a reweighting coefficient so that the order of magnitude of the loss does not depend on \(\sigma\) (Song et al., 2020b). After obtaining a model \(\mathbf{s}_{\boldsymbol{\theta}^{*}}(\mathbf{x})\approx\nabla_{\mathbf{x}}\log q_{\sigma}(\mathbf{x})\), following the (annealed) Langevin dynamics (Song and Ermon, 2019), one can draw samples from \(p_{\text{data}}(\mathbf{x})\) by recursively computing \(\tilde{\mathbf{x}}_{t}=\tilde{\mathbf{x}}_{t-1}+\frac{\alpha_{i}}{2}\mathbf{s}_{\boldsymbol{\theta}}\left(\tilde{\mathbf{x}}_{t-1},\sigma_{i}\right)+\sqrt{\alpha_{i}}\mathbf{z}_{t}\), where \(\alpha_{i}=\epsilon\cdot\sigma_{i}^{2}/\sigma_{L}^{2}\). **Maxwell-Boltzmann distribution**. In the domain of statistical mechanics, the Maxwell-Boltzmann (MB) distribution serves as a model for delineating the velocities of particles within idealized gaseous systems. These systems are characterized by freely moving particles within a stationary enclosure, where interactions among the entities are negligible apart from momentary collisions. From a mathematical perspective, the MB distribution is the \(\chi\)-distribution with three degrees of freedom (Young et al., 2008). The probability density function of \(\mathrm{MB}(\sigma)\) is given by \(f_{\sigma}(x)=\sqrt{\frac{2}{\pi}}\frac{x^{2}e^{-x^{2}/\left(2\sigma^{2}\right)}}{\sigma^{3}}\) with support \(\mathbb{R}_{++}\). ## 4 Methodology ### Modeling the distribution of inter-atomic distances In the present investigation, molecular disintegration is facilitated by the application of progressively intensified perturbation force fields. Upon perturbing a single atom, adjacent atoms experience a consequent force, arising from the chemical bonds interconnecting them with the perturbed atom. When a relatively minor perturbative force field is employed, chemical bonds remain unbroken, thereby restricting atomic motions. This observation leads us to hypothesize that individual atoms exhibit Brownian motions under such conditions. Contrarily, when a sufficiently potent force field is imposed, chemical bonds are destroyed, permitting atoms to undergo virtually uninhibited motion with only rare collisions. We further hypothesize that the relative speed between any two atoms adheres to the Maxwell-Boltzmann (MB) distribution. Focusing on the inter-atomic distances \(d\) within a molecule, we establish that the marginal distribution of perturbed inter-atomic distances \(\tilde{d}\), given \(d\), is equivalent to the distribution of relative speeds among the atoms.
Specifically, let \(\sigma_{t}\) measure the perturbing force fields at time \(t\) and \(\{\sigma_{t}\}_{t=0}^{T}\) be an increasing non-negative sequence. Then, \[p_{\sigma_{0}}(\tilde{d}|d)=p_{\sigma_{0}}(v)=\mathcal{N}(\tilde{d}|d,2\sigma_{0}^{2}\mathbf{I}),\qquad p_{\sigma_{T}}(\tilde{d}|d)=p_{\sigma_{T}}(v)=\mathrm{MB}(\sqrt{2}\sigma_{T}). \tag{5}\] For intermediate perturbing forces, we set \(p_{\sigma_{t}}(\tilde{d}|d)\propto\tilde{d}^{2f_{\sigma}(\tilde{d},d)}e^{-\frac{(\tilde{d}-d)^{2}}{4\sigma_{t}^{2}}}\), where several constraints are imposed on \(f_{\sigma}\). For a smoothly shifting perturbing force field, we require \(f_{\sigma}(\tilde{d},d)\) to be smooth with respect to \(\sigma,\tilde{d}\) and \(d\). To make the limiting distributions Gaussian and MB, we require \(\lim_{\sigma\to 0}f_{\sigma}=0\) and \(\lim_{\sigma\to\infty}f_{\sigma}=1\). Thus, we have (note that when \(\sigma_{T}\) is sufficiently large, \(\tilde{d}-d\approx\tilde{d}\)) \[p_{\sigma_{0}}(\tilde{d}|d) \propto e^{-\frac{(\tilde{d}-d)^{2}}{4\sigma_{0}^{2}}}\propto\mathcal{N}(\tilde{d}|d,2\sigma_{0}^{2}\mathbf{I}) \tag{6a}\] \[p_{\sigma_{T}}(\tilde{d}|d) \propto\tilde{d}^{2}e^{-\frac{(\tilde{d}-d)^{2}}{4\sigma_{T}^{2}}}\propto\mathrm{MB}(\sqrt{2}\sigma_{T}) \tag{6b}\] If we take \(f_{\sigma}(\tilde{d},d)=1-e^{-\sigma/d}\), then \[\nabla_{\tilde{d}}\log q_{\sigma}(\tilde{d}\mid d)=\left(1-e^{-\sigma/d}\right)\frac{2}{\tilde{d}}-\frac{\tilde{d}-d}{2\sigma^{2}}. \tag{7}\] We can simply use a Gaussian kernel as an approximation of perturbing force fields acting on the molecule conformation, i.e., \(p_{\sigma}(\tilde{\mathcal{C}}|\mathcal{C})=\mathcal{N}(\tilde{\mathcal{C}}|\mathcal{C},\sigma^{2}\mathbf{I})\), for \(\mathcal{C}\in\mathbb{R}^{n\times 3}\), so that the limiting distributions of atoms' speed and conditional perturbed inter-atomic distance are Gaussian and MB distributions. This is because \[\tilde{\mathcal{C}}_{u} =\mathcal{C}_{u}+\mathbf{z}_{u}\qquad\tilde{\mathcal{C}}_{v}=\mathcal{C}_{v}+\mathbf{z}_{v}\qquad\text{where }\mathbf{z}_{u},\mathbf{z}_{v}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\] \[\tilde{d}_{uv} =\|\mathbf{z}+\mathcal{C}_{u}-\mathcal{C}_{v}\|\qquad(\mathbf{z}=\mathbf{z}_{u}-\mathbf{z}_{v}\sim\mathcal{N}(\mathbf{0},2\sigma^{2}\mathbf{I}))\] \[=\|\mathcal{C}_{u}-\mathcal{C}_{v}\|+\|\mathbf{z}+\mathcal{C}_{u}-\mathcal{C}_{v}\|-\|\mathcal{C}_{u}-\mathcal{C}_{v}\|\] \[=d_{uv}+\frac{2\mathbf{z}^{\top}(\mathcal{C}_{u}-\mathcal{C}_{v})+\|\mathbf{z}\|^{2}}{\|\mathbf{z}+\mathcal{C}_{u}-\mathcal{C}_{v}\|+\|\mathcal{C}_{u}-\mathcal{C}_{v}\|}\] When \(\sigma\) is sufficiently small, \(\tilde{d}_{uv}\approx d_{uv}+\frac{2\mathbf{z}^{\top}(\mathcal{C}_{u}-\mathcal{C}_{v})}{2\|\mathcal{C}_{u}-\mathcal{C}_{v}\|}=d_{uv}+\hat{z}\), where \(\hat{z}\sim\mathcal{N}(0,2\sigma^{2})\). When \(\sigma\) is sufficiently large, \(\tilde{d}_{uv}\approx d_{uv}+\frac{\|\mathbf{z}\|^{2}}{\|\mathbf{z}+\mathcal{C}_{u}-\mathcal{C}_{v}\|}\approx\|\mathbf{z}\|,\quad\text{where }\|\mathbf{z}\|\sim\mathrm{MB}(\sqrt{2}\sigma)\). For a comprehensive elucidation of intermediary mathematical procedures, we direct the readers to Appendix A. We conduct experiments to verify the above mathematical derivation.
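The limiting behaviour claimed above is easy to reproduce numerically; the following toy check is our own illustration (not the experiment code of the paper) and uses a single atom pair with a fixed reference distance.

```python
# Toy check of the limiting distributions derived above (our sketch, not the
# paper's code): perturb two atoms with N(0, sigma^2 I) and inspect the
# perturbed distance d_tilde for a fixed reference distance d.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
d = 1.5
c_u, c_v = np.zeros(3), np.array([d, 0.0, 0.0])

def perturbed_distances(sigma):
    z_u = rng.normal(scale=sigma, size=(n_samples, 3))
    z_v = rng.normal(scale=sigma, size=(n_samples, 3))
    return np.linalg.norm((c_u + z_u) - (c_v + z_v), axis=1)

# Small sigma: d_tilde - d is approximately N(0, 2 sigma^2).
sigma = 0.01
dt = perturbed_distances(sigma)
print(np.std(dt - d), "vs sqrt(2)*sigma =", np.sqrt(2) * sigma)

# Large sigma: d_tilde is approximately MB(sqrt(2) sigma), whose mean
# is 2 * (sqrt(2) * sigma) * sqrt(2 / pi).
sigma = 50.0
dt = perturbed_distances(sigma)
print(np.mean(dt), "vs MB mean =", 2 * np.sqrt(2) * sigma * np.sqrt(2 / np.pi))
```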
In the conducted experiments, Gaussian perturbations with varying noise levels are introduced to molecular conformations, i.e., \(p_{\sigma}(\tilde{\mathcal{C}}|\mathcal{C})=\mathcal{N}(\tilde{\mathcal{C}}|\mathcal{C},\sigma^{2}\mathbf{I})\), for \(\mathcal{C}\in\mathbb{R}^{n\times 3}\), and the marginal distributions of the difference in inter-atomic distances before and after perturbation are examined. The resultant observations can be seen in Fig. 2 and 3. Figure 2: In the investigation of perturbed distance distributions resulting from the introduction of Gaussian noise to molecular conformation, a transition from Gaussian to MB is observed as the noise level escalates. The perturbation's intensity is denoted by \(\sigma\). Within the graphical representation, the orange curve delineates the pdf of \(\mathcal{N}(0,2\sigma^{2})\), the green curve corresponds to the pdf of \(\mathrm{MB}(\sqrt{2}\sigma)\), and the blue dotted curve represents the pdf of \(p(\tilde{d}|d)\). ### Modeling conformations We model the inter-atom distances instead of the conformation for equivariance, as discussed in Sec. 3.2. Consider molecules formed by \(n\) atoms, where \(n\geq 5\). Given any \(C\in\mathbb{R}^{n\times 3}/\mathrm{SE}(3)\), let \(d(\cdot):\mathbb{R}^{n\times 3}/\mathrm{SE}(3)\to\mathbb{D}\) be the mapping from conformations to all inter-atomic distances, where \(\mathbb{D}:=\mathrm{image}(d)\). Hence, \(\mathbb{R}^{n\times 3}/\mathrm{SE}(3)\) and \(\mathbb{D}\) are isomorphic, since to ascertain the relative position of a particular point, it is merely necessary to determine its distances from 4 other non-coplanar distinct points. We use \(d_{ij}\) to denote the entry \((i,j)\) of the adjacency matrix and we have, by slight abuse of notation, \[\nabla_{\tilde{\mathcal{C}}}\log q_{\sigma}(\tilde{\mathcal{C}}|\mathcal{C}) =\frac{\partial}{\partial\tilde{\mathcal{C}}}\log q_{\sigma}(\tilde{\mathcal{C}},d(\tilde{\mathcal{C}})|\mathcal{C},d(\mathcal{C})) \tag{8a}\] \[=\sum_{i,j}\frac{\partial d_{ij}(\tilde{\mathcal{C}})}{\partial\tilde{\mathcal{C}}}\frac{\partial}{\partial d_{ij}(\tilde{\mathcal{C}})}\log q_{\sigma}(d(\tilde{\mathcal{C}})|d(\mathcal{C}))\quad\text{(almost surely)}\] (8b) \[=\sum_{i,j}\frac{\partial\tilde{d}_{ij}}{\partial\tilde{\mathcal{C}}}\nabla_{\tilde{d}_{ij}}\log q_{\sigma}(\tilde{d}|d) \tag{8c}\] The above property also holds for \(\tilde{d}(\cdot)\) that maps the conformation to a partial distance vector where each atom is associated with at least 4 distances. A previous work (Shi et al., 2021) showed that for any \(\mathbf{s}_{\mathbf{\theta}}(\tilde{d})\approx\nabla_{\tilde{d}}\log q_{\sigma}(\tilde{d}|d)\) as a function of the perturbed inter-atomic distance \(\tilde{d}\), the scoring network \(\mathbf{s}_{\mathbf{\theta}}\) is equivariant w.r.t. \(\mathrm{SE}(3)\). By Eq.
3, 4, 8c and 7, the denoising score matching objective for conformations is \[\mathcal{L}\left(\mathbf{\theta};\left\{\sigma_{i}\right\}_{i=1}^{L}\right) \triangleq\frac{1}{L}\sum_{i=1}^{L}\lambda\left(\sigma_{i}\right)\ell\left(\mathbf{\theta};\sigma_{i}\right) \tag{9a}\] \[\ell(\mathbf{\theta};\sigma) =\frac{1}{2}\mathbb{E}_{p_{\mathbf{\theta}}\left(d\right)}\left\|\mathbf{s}_{\mathbf{\theta}}(\tilde{d},\sigma)-\frac{\partial\tilde{d}}{\partial\tilde{\mathcal{C}}}\left[\left(1-e^{-\sigma/d}\right)\frac{2}{\tilde{d}}-\frac{\tilde{d}-d}{2\sigma^{2}}\right]\right\|_{2}^{2} \tag{9b}\] Note that \(\nabla_{\tilde{\mathcal{C}}}\log q_{\sigma}(\tilde{\mathcal{C}}\mid\mathcal{C})\neq-\frac{\tilde{\mathcal{C}}-\mathcal{C}}{\sigma^{2}}\) since \(\tilde{\mathcal{C}},\mathcal{C}\in\mathbb{R}^{n\times 3}/\mathrm{SE}(3)\) and the probability density function is different from that in \(\mathbb{R}^{n\times 3}\). Taking \(\lambda\left(\sigma_{i}\right)=\sigma_{i}^{2}\), we have \(\lambda\left(\sigma_{i}\right)\ell\left(\mathbf{\theta};\sigma_{i}\right)\propto 1\) for any \(\sigma_{i}\). Thus, the order of magnitude of the loss does not depend on the specific selection of \(\sigma_{i}\). ### Network for modeling conformation score The network employed for the purpose of modeling \(\mathbf{s}_{\theta}\) must adhere to two specific criteria which are delineated in Sec. 4.2. For simplicity, we omit the molecular graph \(\mathcal{G}\) from the model's arguments. \(\mathrm{SE}(3)\) **equivariance**. It is imperative that the network abstains from utilizing molecular conformation directly as input; rather, it should incorporate inter-atomic distance to achieve \(\mathrm{SE}(3)\) equivariance. The employment of perturbed distance as a means to directly forecast the conformation score necessitates a domain transition, thereby augmenting the complexity of the learning process. Thus, following the parametrization of the conformation score as discussed in Sec. 4.2, a generative model for estimating the score of distances is formulated, followed by the application of the chain rule to facilitate the conversion of distance scores into their corresponding values for conformation scores. **Isomorphisms.** Each individual atom must be associated with a minimum of four distances, in order to establish isomorphisms between \(C\in\mathbb{R}^{n\times 3}/\mathrm{SE}(3)\) (representing conformation space) and \(\mathbb{D}\) (signifying feasible inter-atomic distance space). On the other hand, correlating an atom with an excessive number of distances exacerbates the challenge for the model to generate a feasible \(d\). The underlying reason for this complication is the disparity in cardinal numbers of \(\mathbb{R}^{n\times 3}/\mathrm{SE}(3)\) and \(\mathbb{D}\). \(\mathbb{D}\) is a subset of \(\mathbb{R}^{m}_{++}\), where \(m=\binom{n}{2}\) is the number of edges in the complete graph induced by the molecule. For a more detailed illustration, we refer readers to Appendix B. As a result, we connect the three-hop neighborhood in each chemical molecule so that almost every atom in a molecule is connected with at least four other atoms. Figure 3: Distribution approximation. The actual pdf \(p_{\sigma}(\tilde{d}-d|d=\text{const})\) is illustrated by the orange curve, whereas the blue dotted curve signifies the proposed approximated pdf. Following GeoDiff (Xu et al., 2021), we adapt a similar network for modeling \(\mathbf{s_{\theta}}\).
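Whatever backbone is chosen, the per-edge regression target that the distance-score network has to fit follows in closed form from Eq. 7, and the \(\sigma^{2}\) weighting above keeps its magnitude roughly constant across noise levels. The snippet below is our own sketch of that target (it works on distances only and omits the \(\partial\tilde{d}/\partial\tilde{\mathcal{C}}\) chain-rule factor of Eq. 9b), together with a quick check of its two limits.

```python
# Sketch (ours, not the paper's code) of the per-edge target implied by Eq. 7,
# i.e. the score of the shifting kernel with f_sigma = 1 - exp(-sigma/d).
import numpy as np

def shifting_score(d_tilde, d, sigma):
    """grad_{d_tilde} log q_sigma(d_tilde | d)."""
    return (1.0 - np.exp(-sigma / d)) * 2.0 / d_tilde - (d_tilde - d) / (2.0 * sigma**2)

d = 1.5

# Small sigma: the target reduces to the Gaussian score -(d_tilde - d) / (2 sigma^2).
sigma, d_tilde = 1e-3, 1.7
print(np.isclose(shifting_score(d_tilde, d, sigma),
                 -(d_tilde - d) / (2 * sigma**2), rtol=1e-3))           # True

# Large sigma: the target approaches the MB(sqrt(2) sigma) score
# 2/d_tilde - d_tilde/(2 sigma^2), since d_tilde - d ~ d_tilde there.
sigma, d_tilde = 1e3, 300.0
print(np.isclose(shifting_score(d_tilde, d, sigma),
                 2.0 / d_tilde - d_tilde / (2 * sigma**2), rtol=1e-2))  # True

# Per-edge contribution to the reweighted loss of Eq. 3/9a with lambda(sigma) = sigma^2.
def weighted_edge_loss(s_pred, d_tilde, d, sigma):
    return sigma**2 * (s_pred - shifting_score(d_tilde, d, sigma)) ** 2
```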
Given an input graph \(G\), a Message Passing Neural Network (MPNN) (Gilmer et al., 2017) is adopted as \(\mathbf{s_{\theta}}\), which computes node embeddings \(\mathbf{h}_{v}^{(t)}\in\mathbb{R}^{f},\forall v\in V\) with \(T\) layers of iterative message passing: \[\mathbf{h}_{u}^{(t+1)}=\psi\left(\mathbf{h}_{u}^{(t)},\sum_{v\in\mathcal{N}_{u}}\mathbf{h}_{v}^{(t)}\cdot\phi(\mathbf{e}_{uv},d_{uv})\right) \tag{10}\] for each \(t\in[0,T-1]\), where \(\mathcal{N}_{u}=\{v\in V|(u,v)\in E\}\), while \(\psi\) and \(\phi\) are neural networks, e.g. implemented using multilayer perceptrons (MLPs). Note that the node features, distances and edge features are input into \(\mathbf{s_{\theta}}\) as initial embeddings when \(t=0\), but we only keep the distance \(d\) in the above sections as the input of \(\mathbf{s_{\theta}}\) for notational simplicity. Besides, as no coordinate information is explicitly engaged in this network, this kind of modeling can preserve the above two properties. For more details about this part, refer to Appendix B. ### Sampling by Langevin dynamics ``` Input: molecular graph \(G\), network \(\mathbf{s_{\theta}}\), scheduler \(\{\sigma_{i}\}_{i=1}^{T}\). Output: conformation \(\mathcal{C}\). 1: Sample \(\mathcal{C}_{T}\sim\mathcal{N}(\mathbf{0},\sigma_{T}^{2}\mathbf{I})\). 2: for \(i=T,T-1,\cdots,1\) do 3: \(\alpha_{i}\leftarrow\epsilon\cdot\sigma_{i}^{2}/\sigma_{1}^{2}\) {\(\alpha_{i}\) is the step size.} 4: Sample \(\mathbf{z_{i}}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) 5: \(\mathcal{C}_{i-1}\leftarrow\mathcal{C}_{i}+\alpha_{i}\mathbf{s_{\theta}}(d(\mathcal{C}_{i}),\sigma_{i})+\sqrt{2\alpha_{i}}\mathbf{z_{i}}\) {Langevin dynamics.} 6: end for 7: return \(\mathcal{C}_{0}\) ``` **Algorithm 1** Sampling via annealed Langevin dynamics The learned score matching network \(\mathbf{s_{\theta}}\) that minimizes Eq. 9a can approximate the score of molecular conformation and, following the annealed Langevin dynamics, we provide the pseudo-code of the sampling process in Alg. 1, from which we can draw conformations for a given molecule. ### Analysis **Marginal vs. joint distributions**. In the existing literature, diffusion models are built on adding isotropic Gaussian noise \(\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\) to the modeled objects, such as pixel values in image generation. In SDDiff, we add isotropic Gaussian noise to the molecule conformation (coordinates), and the noise is mapped to inter-atomic distances. Thus, the entries of the induced noise on distances are not independent; nevertheless, the marginal distribution of each distance can be applied for score matching, because \[\nabla_{\tilde{d}_{i}}\log p_{\sigma}(\tilde{d}\mid d)=\nabla_{\tilde{d}_{i}}\log\left[p_{\sigma}(\tilde{d}_{i}|d_{1,2,\cdots,m})\cdot p_{\sigma}(\tilde{d}_{1,2,\cdots,i-1,i+1,\cdots,m}\mid d_{1,2,\cdots,m},\tilde{d}_{i},d_{i})\right]\] \[=\nabla_{\tilde{d}_{i}}\log p_{\sigma}(\tilde{d}_{i}|d_{i})+\nabla_{\tilde{d}_{i}}\log p_{\sigma}(\tilde{d}_{N(i)}\mid d_{N(i)},\tilde{d}_{i},d_{i})\approx\nabla_{\tilde{d}_{i}}\log p_{\sigma}(\tilde{d}_{i}|d_{i})\] where \(N(i)\) is the set of edge indices whose edges are incident with edge \(i\). The second equality holds because \(\tilde{d}_{i}\) gives no information on the distribution of other perturbed edges that are not incident with edge \(i\). Also, \(d_{j}\) gives no information on the distribution of \(\tilde{d}_{i}\) where \(i\neq j\). We hypothesize that disregarding the term \(\nabla_{\tilde{d}_{i}}\log p_{\sigma}(\tilde{d}_{N(i)}\mid d_{N(i)},\tilde{d}_{i},d_{i})\) introduces no bias.
This supposition stems from the observation that, even possessing knowledge of both \(\tilde{d}_{i}\) and \(d_{i}\), we remain uninformed about the increase or decrease in the value of \(\tilde{d}_{N(i)}-d_{N(i)}\). **Approximation by optimal transportation (OT).** Given the knowledge of the distributions at the end time points \(p_{t=0}(x)\) and \(p_{t=T}(x)\), the problem of obtaining the distributions in between can be formulated as a Schrödinger Bridge problem, whose solution is also the solution of entropic OT. We compute the regularized Wasserstein Barycenter of \(p_{t=0}(\tilde{d}|d)\) and \(p_{t=T}(\tilde{d}|d)\) by employing the approach presented in a previous work (Benamou et al., 2015). However, the regularization term impacts the limiting weighted Barycenter, leading to divergences from \(p_{t=0}(\tilde{d}|d)\) to \(p_{t=T}(\tilde{d}|d)\). As a result, the regularized Wasserstein Barycenter approach is unsuitable for intermediate distribution approximation. See Appendix C for a more detailed analysis. ## 5 Experiment ### Experiment settings **Datasets**. We use two widely used datasets, GEOM-QM9 (Ramakrishnan et al., 2014) and GEOM-Drugs (Axelrod and Gomez-Bombarelli, 2022), for evaluating molecular conformation generation. The GEOM-QM9 dataset comprises molecules with an average of 11 atoms, while the GEOM-Drugs dataset consists of larger molecules with an average of 44 atoms. For a fair comparison, we adopted the same dataset split as GeoDiff (Xu et al., 2021). For both datasets, the training set contains 40k molecules, the validation set contains 5k molecules and the test set contains 200 molecules. Please refer to GeoDiff (Xu et al., 2021) for more details regarding the dataset. **Evaluation metrics.** We use the metrics of COV (coverage) and MAT (matching) (Xu et al.) to measure both diversity and accuracy. Specifically, we align ground truth and generated molecules by the Kabsch algorithm (Kabsch, 1976), and then calculate their difference with the root-mean-square deviation (RMSD). Then the COV and the MAT are defined as follows: \[\mathrm{COV}=\frac{1}{|S_{r}|}\left|\{\mathcal{C}\in S_{r}\,|\,\mathrm{RMSD}(\mathcal{C},\mathcal{C}^{\prime})<\delta,\ \exists\,\mathcal{C}^{\prime}\in S_{g}\}\right|,\quad\mathrm{MAT}=\frac{1}{|S_{r}|}\sum_{\mathcal{C}\in S_{r}}\min_{\mathcal{C}^{\prime}\in S_{g}}\mathrm{RMSD}(\mathcal{C},\mathcal{C}^{\prime})\] where \(S_{g}\) and \(S_{r}\) denote generated and ground truth conformations, respectively. Following some baselines (Xu et al., 2021; Ganea et al., 2021), we set the COV threshold \(\delta=0.5\) Å for GEOM-QM9 and \(\delta=1.25\) Å for GEOM-Drugs, and generate twice the number of ground truth conformations for evaluation. **Baselines**. We choose 5 state-of-the-art models for comparison: GeoMol (Ganea et al., 2021) is not a generative model; it assembles conformations by hand from predicted molecular information. CGCF (Shi et al., 2021) is a two-step method, and ConfVAE (Xu et al., 2021) is a VAE-based model. ConfGF (Shi et al., 2021) and GeoDiff (Xu et al., 2021) are two similar works that are also diffusion-based. Other implementation details are provided in Appendix D. ### Results and analysis The results of molecular conformation generation are shown in Table 1. The baseline results are obtained from GeoDiff (Xu et al., 2021). In order to mitigate the impact of the model's backbone and primarily evaluate the efficacy of distance distribution modeling, we have opted to utilize a backbone that closely resembles that of GeoDiff.
This will enable us to more accurately assess the performance of the distance distribution modeling technique while minimizing the potential confounding effects of the model's underlying architecture. Visualizations of selected generated conformations can be found in Appendix G.

Table 1: Results of molecular conformation generation (COV in %, higher is better; MAT in Å, lower is better).

| Methods | GEOM-QM9 COV Mean | GEOM-QM9 COV Median | GEOM-QM9 MAT Mean | GEOM-QM9 MAT Median | GEOM-Drugs COV Mean | GEOM-Drugs COV Median | GEOM-Drugs MAT Mean | GEOM-Drugs MAT Median |
|---|---|---|---|---|---|---|---|---|
| CGCF | 78.05 | 82.48 | 0.4219 | 0.3900 | 53.96 | 57.06 | 1.2487 | 1.2247 |
| ConfVAE | 77.84 | 88.20 | 0.4154 | 0.3739 | 55.20 | 59.43 | 1.2380 | 1.1417 |
| GeoMol | 71.26 | 72.00 | 0.3731 | 0.3731 | 67.16 | 71.71 | 1.0875 | 1.0586 |
| ConfGF | 88.49 | 94.31 | 0.2673 | 0.2685 | 62.15 | 70.93 | 1.1629 | 1.1596 |
| GeoDiff | 90.54 | 94.61 | 0.2090 | 0.1988 | 89.13 | 97.88 | 0.8629 | 0.8529 |
| **SDDiff (ours)** | **91.07** | **94.69** | **0.2048** | **0.1941** | **90.68** | **98.48** | **0.8564** | **0.8503** |

**Score distribution.** In the existing literature, the ground truth score function follows a normal distribution. Specifically, the ground truth of the score matching objective is set to \(\sigma\nabla_{\tilde{\mathbf{x}}}\log p(\tilde{\mathbf{x}}|\mathbf{x})\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The proposed distance distribution diverges from the Gaussian distribution when the perturbation level is significantly large and requires the model to parametrize a non-Gaussian distribution. In order to investigate the efficacy of existing backbones in approximating such a distribution, we visually depict the _distribution of score functions_ (not inter-atomic distance), along with our backbone's output under varying levels of perturbation. The ensuing results are shown in Fig. 4. It is evident that our proposed distribution closely resembles the Gaussian distribution when \(\sigma\) is reasonably small. Conversely, when \(\sigma\) is substantially large, the proposed score function transforms into a long-tailed Gaussian distribution. Despite this alteration, the model's output distribution still approximates the proposed score function effectively. This substantiates that the proposed distribution can be effortlessly approximated, and thus can be incorporated into a wide array of models. **Planar structure generation.** As mentioned in Eq. 8b, the score function of distance can be transformed into the score function of conformation _almost surely_, provided that the conformation is non-planar. Nonetheless, certain molecular structures, like benzene rings, exhibit a planar conformation within local regions, which may render this transformation inapplicable (see Fig. 5). A viable solution to optimize these local planar structures further involves utilizing post-processing with variants of rule-based methods (e.g., force field) which encode the unvarying property of certain local structures like benzene rings being planar. ## 6 Conclusion In this study, we present a novel molecular conformation generation approach - SDDiff - by incorporating the shifting score function inspired by molecule thermodynamics.
Our main findings include that the distribution of the change of inter-atomic distances shifts from a Gaussian to a Maxwell-Boltzmann distribution under the Gaussian perturbation kernel on molecular conformation, which can be accurately approximated by our approach. By proposing a diffusion-based generative model with a shifting score kernel, we have provided both the mathematical derivation and experimental validation of its correctness. The effectiveness of our approach has been demonstrated through achieving new state-of-the-art results on two widely used molecular conformation generation benchmarks, namely GEOM-Drugs and GEOM-QM9. Our method effectively captures the essential aspects of molecular dynamics and inter-atomic interactions, leading to improved performance in generating accurate and feasible molecular conformations.
2303.00044
The Initial Mass Function and Other Stellar Properties Across the Core of the Hydra I Cluster
The Hydra I cluster offers an excellent opportunity to study and compare the relic old stellar populations in the core of its two brightest galaxies. In addition, the differing kinematics of the two galaxies allows a test of the local validity of general scaling relations. In this work we present a direct comparison employing full spectral fitting of new high-quality long-slit optical and NIR spectroscopic data. We retrieve age, metallicity and 19 elemental abundances out to about 12 kpc within each galaxy, as well as the IMF in their central regions. Our results suggest that the inner 5 kpc region of both galaxies, despite their different masses, formed at the same time and evolved with a similar star formation time-scale and chemical enrichment, confirming their early formation in the cluster build up. Only the overall metallicity and IMF radial profiles show differences connected with their different velocity dispersion profiles. The radial trend of the IMF positively correlates with both [Z/H] and velocity dispersion. While the trends of the IMF with metallicity agree with a global trend for both galaxies, the trends with the velocity dispersion exhibit differences. The outer regions show signs of mixed stellar populations with large differences in chemical content compared to the centers, but with similar old ages.
Ilaria Lonoce, Wendy Freedman, Anja Feldmeier-Krause
2023-02-28T19:32:33Z
http://arxiv.org/abs/2303.00044v1
# The initial mass function and other stellar properties across the core of the Hydra I cluster ###### Abstract The Hydra I cluster offers an excellent opportunity to study and compare the relic old stellar populations in the core of its two brightest galaxies. In addition, the differing kinematics of the two galaxies allows a test of the local validity of general scaling relations. In this work we present a direct comparison employing full spectral fitting of new high-quality long-slit optical and NIR spectroscopic data. We retrieve age, metallicity and 19 elemental abundances out to \(\sim 12\) kpc within each galaxy, as well as the IMF in their central regions. Our results suggest that the inner \(\sim 5\) kpc region of both galaxies, despite their different masses, formed at the same time and evolved with a similar star formation time-scale and chemical enrichment, confirming their early formation in the cluster build up. Only the overall metallicity and IMF radial profiles show differences connected with their different velocity dispersion profiles. The radial trend of the IMF positively correlates with both [Z/H] and \(\sigma\). While the trends of the IMF with metallicity agree with a global trend for both galaxies, the trends with the velocity dispersion exhibit differences. The outer regions show signs of mixed stellar populations with large differences in chemical content compared to the centers, but with similar old ages. Unified Astronomy Thesaurus concepts: Early-type galaxies (429), Initial mass function (796) ## 1 Introduction Despite its apparent simplicity, reconstructing the formation and evolution of massive elliptical galaxies is still a great challenge, and both theoretical and observational efforts are still ongoing with the aim of creating a complete assembly picture for these stellar systems. Large local galaxy surveys have allowed the characterization of the stellar population properties of the overall population of ellipticals, and the construction of scaling relations to derive information on their past histories, with the robustness of statistical samples (e.g.: Thomas et al., 2010; Sanchez et al., 2012; Ma et al., 2014; McDermid et al., 2015). More recently, it has also been possible to derive the global trend of stellar properties, including elemental abundances, as a function of the galaxy radius (e.g.: Parikh et al., 2019; Zibetti et al., 2020). These studies confirmed the presence of radial gradients for a large population of local galaxies. A radial variation has been also widely observed for the stellar Initial Mass Function (IMF), leading to the conclusion that the IMF is non-universal, among and within galaxies (e.g.: Treu et al., 2010; Cappellari et al., 2012; Conroy and van Dokkum, 2012; Martin-Navarro et al., 2015). Observations of radial gradients generally support the scenario of a two-phase process for the build-up of massive ellipticals (Naab et al., 2009; Oser et al., 2010), with the _in situ_ stars formed at high-redshift (\(z>3\)) as a consequence of a rapid cold accretion of gas (Dekel et al., 2009), and the _ex situ_ ones accreted in the outskirts during a prolonged following phase. Complementary to these global studies, generally obtained from stacked spectra, studies of a single or a few peculiar objects can offer the advantage of having higher signal to noise data, which can be studied in greater detail.
This is the case for the Hydra I cluster, the object of this work, whose brightest cluster galaxy (BCG) NGC3311 has been intensively investigated together with its surrounding stellar halo and cluster companion stellar systems. From previous studies we have learnt that NGC3311, a cD galaxy with a low central surface brightness and extended radial profile, has a \(\sim 3\) kpc inner core characterized by an old age, super solar metallicity and Mg and Na enhanced abundances with respect to solar values. The core is likely a relic of the _in situ_ stars that formed early in the first phase of galaxy formation (e.g.: Barbosa et al., 2016, 2021), according to the framework of the two-phase scenario. In its very center, an irregular dust disk embeds new star formation, confirmed by the presence of bright blue spots and strong emission lines (Richtler et al., 2020). At larger radii, kinematic signatures and gradual variation and scatter in its stellar properties indicate the presence of a more complex stellar content, added in subsequent phases of accretion of material from other surrounding stellar systems (e.g.: Coccato et al., 2011; Ventimiglia et al., 2011; Arnaboldi et al., 2012; Barbosa et al., 2016). Beyond \(\sim 6-7\) kpc, where the contribution of dark matter increases and dominates (Richtler et al., 2011), a dynamically hot stellar halo extends out to \(\gtrsim 40\) kpc from the central BCG (Barbosa et al., 2018). First estimates point to a still old stellar population, similarly \(\alpha\)-enhanced, but much more metal poor (Coccato et al., 2011), although a large scatter dominates the measurements (Barbosa et al., 2016). Indeed, tidal streams, dwarf galaxies (e.g. HCC 026, HCC 007) and a large number of globular clusters populate and are currently falling into the cluster core (Arnaboldi et al., 2012), enriching it with stars having possibly different origins. The brightest companion of NGC3311 is NGC3309, a massive elliptical galaxy with a line-of-sight velocity offset of \(\sim 250\) km/s from the BCG, and with a separation on the sky of only \(100^{\prime\prime}\), but with no signs of interaction with the BCG (Arnaboldi et al., 2012). Although close to each other in the cluster core, the two giant ellipticals have different surface brightness profiles, with NGC3309 characterized by a typical R\({}^{1/4}\) profile, and NGC3311 by multiple components (Arnaboldi et al., 2012). Also their velocity dispersion profiles are remarkable different: while the central 5 kpc region of NGC3309 has a symmetric negative gradient, typical of ellipticals, NGC3311 has a peculiar inverse positive gradient. This is an important sign that suggests that the two objects have had a different formation and evolutionary history, opening the question of what has been the driver of such differences. Moreover, this peculiarity in the velocity dispersion profile offers the rare opportunity to test the local validity of the widely observed scaling relations (e.g.: Thomas et al., 2010; Conroy et al., 2014; Parikh et al., 2019). The IMF is an essential ingredient for understanding the mechanisms of formation and evolution of stellar systems (e.g.: Conroy and van Dokkum, 2012; Martin-Navarro et al., 2015; La Barbera et al., 2017; Vaughan et al., 2018; Sarzi et al., 2018). Correlations of the IMF with other stellar properties are a first tool to investigate the drivers of the IMF shape during the galaxy formation process (e.g.: Martin-Navarro et al., 2015; van Dokkum et al., 2017; Barbosa et al., 2021). 
As a consequence, directly comparing the IMF radial profile of the two galaxies in this study, as well as in relation to other stellar properties, can provide insight into their formation and evolution. However, as discussed in our previous papers (Feldmeier-Krause et al., 2020; Lonoce et al., 2021), the measurement of the IMF is technically very challenging, and different assumptions, as well as the choice of method or models used, can lead to significantly different results. Very high S/N ratio data are needed (\(>100\)A\({}^{-1}\)), and a full and solid characterization of the chemical content is crucial to avoid biased results (Lonoce et al., 2021). In this work we further investigate the stellar population radial profile of the two main elliptical galaxies of the Hydra I cluster, NGC3311 and NGC3309, adding for the first time details on the chemical content with the retrieval of many elemental abundances necessary to retrieve their IMF radial profile with good precision. We derive relations among stellar properties, retrieved with the same data set, analysis and models, in order to isolate possible driver(s) of the first phase of formation of NGC3311 and NGC3309. We also characterize part of the surrounding stellar halo, giving for the first time estimates of many halo elemental abundances. Since different chemical elements are enriched on different time-scales, we are also able to compare the star formation time-scale of the halo with respect to the central regions and find clues of their past origin. The paper is organized as follows: in Section 2 we present the observations, details on the data reduction process and a focus on the observed emission lines; in Section 3 we provide details on the analysis setup, including a description on how we deal with the outer regions and the determination of systematic errors; we comment on our results in Section 4, and we discuss them more broadly in Section 5; finally we summarize the findings of this work in Section 6. ## 2 Spectroscopic data ### Observations The two targets, NGC3309 and NGC3311, were observed simultaneously during the nights of April 28-29, 2019 with the Inamori-Magellan Areal Camera & Spectrograph (IMACS, Dressler et al., 2006) on the Baade Magellan Telescope at the Las Campanas Observatory, Chile. Indeed, their apparent proximity in the sky, i.e. only \(100^{\prime\prime}\), and the length of the IMACS longslit, \(15^{\prime}\), allowed us to obtain spectroscopic data out to two effective radii for both galaxies, as well as part of the surrounding stellar halo in both directions as shown in Figure 4. The position angle was \(\sim 110^{\circ}\) East of North. Due to the decreasing S/N of the stellar halo light at larger distances from the cluster center, as shown in Fig. 1, we focused the analysis only on the region between \(\sim 100^{\prime\prime}\) west of NGC3309 and \(\sim 100^{\prime\prime}\) east of NGC3311, and used the remaining outermost regions to evaluate the background and foreground, as discussed below. We observed the sources with two grating configurations as listed in Table 1: with the \(600-8.6\) grating and grating angle (GA) of \(9.71^{\circ}\) to cover the optical region (i.e. \(3500-6700\)A), and with the \(600-13.0\) grating and GA of \(17.11^{\circ}\) to cover the near-IR (\(7500-10500\)A).
Since the CCD consists of \(4+4\) separated chips, the wavelength region of each GA configuration is divided in four subregions and, as a consequence, it has three chip gaps of \(\sim 50\)A, at around 4300, 5100 and 5900A in the optical. We made the choice of GA values in order to make sure that all the relevant spectral features did not fall on the chip gaps. From the red GA, we used only the chip around the Calcium Triplet (CaT) feature, i.e. from 8100A to 8900A. We observed both GA configurations for 2.33 hours divided in seven frames of 1200s each, and with an average seeing of \(0.6^{\prime\prime}\). The choice of the \(2.5^{\prime\prime}\) slit width provided us with a \(\lambda\)-constant spectral resolution of about 5.5A. We observed a small variation of the spectral resolution in the vertical direction along the slit, with values differing by \(\sim 10\%\) from top to bottom. ### Data reduction The data reduction was carried out with the standard tools of IRAF (Tody, 1993) and custom IDL scripts. For both GA configurations we removed cosmic rays and bad pixels, performed bias subtraction, flat-fielding and applied the wavelength calibration in air wavelength. The two targets were located in the lower row of four chips and their light, plus the light of the outer halo, plus the presence of an unforeseen foreground emission (see below and Appendix A), entirely covered the spatial vertical direction of the frames, preventing us from estimating the background. Similarly, the upper row of chips could not be used to extract the background, again due to the presence of stellar halo light and foreground emission. Therefore, we made use of the publicly available ESO tool SkyCal Sky Model Calculator (Cerro Paranal Sky Model, Noll et al., 2012; Jones et al., 2013). Given the details of the observations, i.e. telescope coordinates and altitude, time of observations and target coordinates, this tool retrieves the predicted radiance sky spectrum for the desired wavelength range, wavelength grid and spectral resolution. In particular, the radiance spectrum includes the following components: scattered moonlight and starlight, zodiacal light, molecular emission of lower and upper atmosphere and airglow continuum. For each reduced scientific frame, we calculated three modelled sky spectra with three different spectral resolutions, i.e. those measured at the top - middle - bottom rows of the frame, to take care of the varying spectral resolution in the slit direction. Interpolating the three spectra along the vertical direction, we computed the 2D sky model spectrum with an optimized spectral resolution. We then calculated, for each frame, the multiplicative factor that minimizes the difference between the tabulated and observed sky counts. The sky-subtracted frame was obtained by simply subtracting the modelled 2D sky from the original frame. In the optical region sky residuals were minimal. Only about one to two pixels per emission line still have residuals due to the unavoidable difference in the shape of the tabulated sky (boxcar) emission lines with respect to the real ones. All residuals have been flagged and not included in the analysis fit. However, in the red region, the subtraction of the SkyCal sky spectrum left larger residuals. This was due to the presence of numerous sky emission lines that scale with other sky features in non-linear ways, and the optimized solution did not always remove the presence of residuals like in the NaI and CaT spectral features. 
This was more frequent in the low S/N spectra of the outer halo. As before, we masked the sky residual regions in the analysis. Red GA data, obtained since 2018, have a fringing pattern all over the 2D frames. As described in Lonoce et al. (2021), this effect can be efficaciously removed with the help of spectroscopic flat field frames taken just before and after the scientific exposures. Each scientific frame is then corrected for fringing using the flat field frame interpolated to its proper time. This minimizes the effect of time variability of the fringing pattern. The extraction of the 1D spectra was performed by means of a custom IDL code on the sum of all of the seven 1200s reduced frames. After retrieving the curvature of each CCD chip in the spatial direction by fitting the position of the peak of NGC3309 along the dispersion direction, the code sums the flux in all of the chosen physical regions from the west stellar halo, NGC3309, the halo between the two galaxies, NGC3311 and the east stellar halo. These regions have been defined to ensure that the S/N in the region around 5000A was \(\gtrsim 100\) A\({}^{-1}\), as shown in Fig. 1. However, as discussed in Section 3, due to the increasing broadening of features that prevents a reliable retrieval of stellar population properties, we only used 40 of them, from \(\sim 50^{\prime\prime}\) west of NGC3309 to \(\sim 70^{\prime\prime}\) east of NGC3311. The extracted 1D spectra have been flux calibrated by means of standard stars observed at the same airmass soon after each scientific exposure. Finally, they have been further corrected for tellurics \(\gtrsim 5000\)A, with the software MOLECFIT (Smette et al., 2015; Kausch et al., 2015). The data reduction process was performed on each of the four chips separately. The step of the wavelength calibration relies on arc frames whose number of strong emission lines differs from chip to chip. This means that the quality of the \(\lambda\) calibration is different depending on which chip is considered. As a consequence, possible \(\lambda\) shifts can occur between adjacent chips. In order to attach together the \(4+1\) (optical+NIR) spectral regions, we thus first retrieved their kinematic properties, in particular their radial velocities, which gave us an estimate of the relative wavelength shifts among chips. We derived the kinematics of all the extracted spectra using PPXF (Cappellari, 2017) with the MILES models (Vazdekis et al., 2010) and a Chabrier IMF. We found differences among radial velocities measured from different chips of the order of 100 km/s. Spectra from GA9.71-chip6 and GA17.11-chip8, which have arcs with the highest number of lines (\(\sim 20\)), are close to the expected values observed in the literature (see Figure 9). To homogenize the wavelengths of different chips to a common grid, we shifted all wavelengths such that the center of NGC3309 has the literature value of 4089 km/s (Smith et al., 2004). In this way, while homogenizing within the same spectrum along the whole wavelength range, we kept the relative velocity shifts among spectra from different physical regions. For a final check, we ran PPXF again on the final spectra with all of the chips combined. We found consistent results with literature values not only for NGC3309, as expected, but also for NGC3311 (Richtler et al., 2011; Barbosa et al., 2018). Four examples of the final reduced spectra are shown in Fig.
2: the center of NGC3309 (pink), the center of NGC3311 (green), one of the regions between the two galaxies where the stellar halo is dominant (light blue), and one from the external halo to the east of NGC3311 (blue). We intentionally omit a description of the absolute flux matching between adjacent spectral chunks since the extraction of information on stellar parameters has been carried out on each normalized chip spectrum in a parallel way, and by comparing _relative_ flux features with those of models, as detailed below in Section 3.1. ### Star formation and foreground emission Some visible emission lines can be observed in the central regions of NGC3311 (e.g.: H\(\beta\), H\(\alpha\) and [NII]) where some star formation is ongoing (Arnaboldi et al., 2012; Richtler et al., 2020). Fig. 4 shows the image of the dusty center of NGC3311 as observed by IMACS (left) in the I band, and by HST-WFPC2 (right) in F555W 1. In the left panel we highlight in yellow the orientation of the \(2.5^{\prime\prime}\) longslit across NGC3311. As observed by Richtler et al. (2020), an excess of blue light is present, corresponding to the bright spots embedded in the dust. In our IMACS data, and as better delineated by the higher resolution HST image, we can clearly identify these regions, as shown in Fig. 4. We then extracted the 1D spectra by keeping these regions separated, and we observed the most intense emission lines corresponding to the bright spot in the north-east corner of the dust structure. The spectrum extracted from the central dusty region shows moderate emission lines due to the presence of the smaller central bright spot, confirming that some star formation is ongoing in the disk. As described in Section 3, in our analysis we fit the spectra with the inclusion of emission lines, and in the particular cases of the spectra around the NGC3311 center, we allowed for the presence of two stellar components, to take into account the small contribution of young stars. Footnote 1: Richtler et al. (2020), based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA) Looking closely at all extracted spectra, both around the two galaxies and along the halo, we noticed the presence of a set of foreground emission lines, including: [OII3727A], H\(\beta\), [OIII5007A], [NI5200A], H\(\alpha\) and [NII6585A] (see Figure 15). As fully detailed in Appendix A, this local emission, constant along the entire observed field of view, is typical of the Warm Ionized Medium (WIM), as confirmed also by the Wisconsin H-Alpha Mapper (WHAM, Haffner et al., 2003) Sky Survey, that observed diffuse H\(\alpha\) emission in the foreground of the Hydra I cluster.

Table 1: Main features of the Hydra I spectroscopic data: grating angle (GA), date of observations, grating, slit width, position angle, obtained spectral range and total exposure time.

| Grating Angle (°) | Period | Grating | Slit Width (′′) | Position Angle (°) | Spectral Range (Å) | Exposure Time (min) |
|---|---|---|---|---|---|---|
| 9.711 | April 28-29, 2019 | 600-8.6 | 2.5 | 110 | 3500-6700 | 140 |
| 17.11 | April 28-29, 2019 | 600-13.0 | 2.5 | 110 | 7500-10500 | 140 |
Since this emission is local and does not affect the Hydra I cluster region located at \(z\sim 0.013\), we masked all of the local emission lines in all spectra so as to not include them in the analysis.

## 3 Analysis

For the analysis of the 40 spectra extracted across the Hydra I cluster center, we adopted the publicly available full spectral-fitting code ALF (Conroy et al., 2018). The full spectral-fitting technique is ideal when dealing with large wavelength ranges and when the goal is the retrieval of many stellar population parameters, as demonstrated in our previous works (Feldmeier-Krause et al., 2020; Lonoce et al., 2021), and also in the literature (e.g.: Conroy and van Dokkum, 2012; Vaughan et al., 2018; Barbosa et al., 2021). Indeed, the fit is performed not only over the main spectral features, but over the entire wavelength range with all of the good pixels of the spectrum contributing to the fit. This allows us to exploit every part of the spectrum that can have a dependence on one or more parameters, thus giving more accurate results. ALF is, in particular, optimized to fit the absorption lines of optical and NIR spectra of stellar systems older than 1 Gyr. The fitting is performed with the Monte-Carlo Markov-Chain (MCMC) sampler EMCEE (Foreman-Mackey et al., 2013). To run ALF, we adopted the Conroy et al. (2018) stellar population models, with ages ranging from 1 to 13.5 Gyr, metallicity from \(-1.5\) to \(+0.2\) dex and with a large range of IMF slope values\({}^{2}\), from 0.5 to 3.5, and with the possibility of using different parametrizations, as for example with one or two slopes. The spectral resolution is 100 km/s over the whole spectral range of \(3500-9000\)A, which perfectly covers our data. These models adopt the MIST isochrones (Choi et al., 2016) and are based on the optical and NIR empirical stellar spectra presented in Sanchez-Blazquez et al. (2006) and Villaume et al. (2017).

Figure 1: _Upper panel:_ S/N per Å trends along the slit direction as measured for each CCD chip on the final reduced spectra. The pixel scale is \(0.111^{\prime\prime}\) per pixel. In detail: chip7-GA\(=9.71^{\circ}\) covers \(\sim 3500-4300\)Å (orange), chip8-GA\(=9.71^{\circ}\sim 4300-5100\)Å (green), chip5-GA\(=9.71^{\circ}\ 5100-5900\)Å (red), chip6-GA\(=9.71^{\circ}\ 5900-6700\)Å (blue) and chip8-GA\(=17.11^{\circ}\sim 8000-8800\)Å (brown). _Lower panel:_ light profile along the slit that connects the two target galaxies NGC3309 (green) and NGC3311 (pink). Shades highlight the region within \(1\)R\({}_{e}\) for each galaxy, i.e. R\({}_{e}^{3309}=21.9^{\prime\prime}\) and R\({}_{e}^{3311}=36.2^{\prime\prime}\) (Arnaboldi et al., 2012). Crosses indicate the central position of each region where a spectrum has been extracted.

In order to retrieve non-solar values of many elemental abundances, we made use of the theoretical response functions of Conroy et al. (2018), provided for a wide range of age and metallicity and for a fixed Kroupa IMF (Kroupa, 2001), at the same spectral resolution of the models. With the help of these response functions, we were able to retrieve the following 19 elemental abundances: Fe, O+Ne+S (called "a"), C, N, Na, Mg, Si, K, Ca, Ti, V, Cr, Mn, Co, Ni, Cu, Sr, Ba and Eu. In Lonoce et al. (2021) we demonstrated the importance of retrieving the elemental abundances to obtain unbiased values of the IMF and other parameters. Indeed, every spectral feature that changes as a function of the IMF shape, also changes as a function of many elemental abundances. This is especially important for a full spectral-fitting analysis, since each pixel, with its own stellar parameter dependencies, contributes to the fit. To minimize the possible biases affecting the IMF, we thus chose to fit all of the available elemental abundances as free parameters.

Figure 4: _Upper panel:_ B band image of the Hydra I cluster IMACS field. The vertical yellow line traces the position of the 2.5′′-wide slit used for the acquisition of our spectroscopic data. _Lower panel:_ NGC3311 center imaging. _Left:_ IMACS I band imaging; yellow lines trace the position and orientation of the 2.5′′ longslit. _Right:_ HST-WFPC2 F555W imaging of the same region. Both images show the dusty central region of NGC3311, where bright spots indicate the presence of internal ongoing star formation (Richtler et al., 2020). As a result of the orientation of the slit across this region, we could extract 1D spectra isolating the region with the brighter spot and no dust, and the dusty region with the smaller bright spot, confirming the presence of a very young stellar population and detailing its chemical characteristics (see Section 4).

### ALF settings

We prepared our spectra by transforming wavelengths to vacuum, masking gaps and bad pixels, and set up ALF with the following characteristics:

* MCMC parameters: we generally fit with a number of walkers, nwalker\(=1024\), a number of steps during the burn-in phase, nburn\(=10^{4}\), and a number of steps after the burn-in phase, nmcmc\(=100\). In cases of insufficient convergence of any parameter (typically in the outer regions), we repeated the fit increasing nwalker as needed.
* Fit type: to include all possible elemental abundances as well as the IMF as free parameters, we adopted the full mode fitting, which allows the retrieval of up to 46 parameters including: all stellar population properties (21), kinematics (up to 4 components), emission lines (8), two-component star formation history (2) and non-constant IMF (up to 4 components). Additional "nuisance" parameters are also included to correct for stellar evolution and data uncertainties (7).
* IMF parametrization: we based our main analysis adopting a single power-law IMF slope of the form \(dN/dm\propto m^{-x}\), with a fixed lower cutoff of 0.08 M\({}_{\odot}\). Above 1M\({}_{\odot}\) the slope is fixed to 2.3, i.e. to the Salpeter value (Salpeter, 1955). We have also repeated the whole analysis with a double power-law IMF, retrieving X1 (from 0.08 to 0.5 M\({}_{\odot}\)) and X2 (from 0.5 to 1M\({}_{\odot}\)). However, as discussed in Section 5, the degeneracy between X1 and X2 is high, as already noted in Lonoce et al. (2021) and Feldmeier-Krause et al. (2021), preventing the retrieval of solid results. IMF slope values span from 0.5 to 3.5.
* Stellar components: ALF allows a simultaneous fit to two stellar population components with different ages. The fit retrieves the age and the mass fractions of the two components. All other parameters are the same as the main component. In our analysis, we allowed the presence of a secondary stellar population only in the four spectra extracted from the center of NGC3311, where there are signs of ongoing star formation (see Section 2.3), and in the very center of NGC3309. We highlight, however, that the minimum age allowed in ALF is 0.5 Gyr, and thus we can only give an upper limit of the age of the younger component.
* Parameter ranges: for each stellar property, a uniform prior range is set in a customized way. Ages run from 0.5 to 14 Gyr, metallicity from \(-1.9\) to 0.3 dex and the IMF slope from 0.5 to 3.9. For elemental abundances we started with fixing the interval from \(-0.3\) to \(+0.5\) dex, with the exception of Na which was allowed up to \(+1.0\) dex. Since our spectra span from halo regions to the centers of the two ellipticals where stellar population properties can be largely different, these ranges have been adapted accordingly for each spectrum. As a consequence, in the outer regions we allowed the parameters to reach higher values, e.g. \(>\pm 1\) dex, if needed. We caution that in these cases the results could suffer from systematic uncertainties due to model extrapolation (considering that response functions are provided for values \(\pm 0.3\) dex, with some exceptions). A special case is potassium, where its only strong feature in our wavelength range at around 4100A could not be well fit by models even with [K/H] \(>3.0\) dex. We fixed the maximum limit at 3.0 dex and tested that this assumption does not impact the determination of all of the other stellar population parameters. * Wavelength ranges: as discussed in Section 2, each final spectrum has three wavelength gaps as a result of the subdivision of the separate CCD chips. To avoid mismatched flux alignment between adjacent chips, we imposed the fit to be performed in the following five separated wavelength regions: \(3650.3-4207.6\)A, \(4253.3-5001.8\)A, \(5048.9-5777.8\)A, \(5857.4-6606.7\)A and \(7966.3-8727.7\)A. Within each wavelength range the spectrum and the model are continuum matched by means of a polynomial function with one order per 100A. In the outer halo spectra we masked many pixels at long wavelengths due to strong sky residuals, as described in Section 2 and shown in Figure 2. * Emission lines: having masked all of the foreground emission lines (as described in Section 2 and in Appendix A), we fit the local emission lines of the Hydra I galaxies, as allowed by ALF. The lines are: Balmer (H\(\delta\), H\(\gamma\), H\(\beta\) and H\(\alpha\), with line ratios assumed from Case B recombination Osterbrock, 1989), [OII]\(3726-3729\), [OIII]\(4959-5007\), [NI]5200 and [NII]\(6548-6583\), where all doublets have relative strengths adopted from Cloudy models (Ferland et al., 2017)). Their retrieved intensity and kinematics are discussed in Appendix B. After each fit, we processed the results as suggested in the ALF documentation. This includes that the total metallicity [Z/H] and the [Fe/H] abundance have been combined by adding the two quantities together. All elemental abundances, provided by the models in relation to the total H, have been properly transformed in relation to Fe (as we show our results in Section 4). In particular, O, Ca, Mg, Ti and Si have been corrected with the library correction factors from Schiavon (2007) and Bensby et al. (2014), as suggested by the ALF documentation. This is to compensate for the fact that models with non-solar values of elemental abundances are built with stars from the solar neighborhood. We note that these corrections are more important for lower metallicity values (e.g.: \(\sim 0.4\) dex at around [Z/H]\(\sim-1.5\) dex). As a consequence, the outer regions and halos are mostly affected by this approximation. 
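For reference, this post-fit bookkeeping can be sketched as follows; the function and variable names are illustrative and do not reproduce the actual ALF output format, and the metallicity-dependent library corrections are left as an input to be interpolated from the published tables.

```python
# A minimal sketch of the post-fit processing described above (illustrative only).

def total_metallicity(zh_fit, feh_fit):
    # Total [Z/H]: the fitted metallicity parameter and the fitted [Fe/H] are added together.
    return zh_fit + feh_fit

def to_iron_ratio(xh, feh_fit, library_correction=0.0):
    # Model abundances are provided as [X/H]; convert to [X/Fe].
    # For O, Ca, Mg, Ti and Si, library_correction holds the metallicity-dependent
    # term interpolated from Schiavon (2007) and Bensby et al. (2014) -- not reproduced here.
    return xh - feh_fit + library_correction
```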
Moreover, excluding C, N, Cr, Ni and Na that so far have not shown the need for these corrections at low Z, all other elements have not yet been tested and corrected. We have carefully checked the full convergence of each parameter by directly looking at its MCMC chain. In particular, we considered the end of the chains generally including their final 1% steps (i.e. \(\sim 1000\) steps). At the same time we verified that the values spanned by each parameter did not hit a prior limit. In cases where these conditions were not satisfied, the fits have been repeated with wider setup constraints. Figure 5 shows four examples of the fit obtained by ALF for the same four spectra from Figure 2, i.e. the center of NGC3309, the center of NGC3311, one from the halo between the two galaxies and one for the external halo. ### Outer Regions The stellar halo regions, located at \(>1\)R\({}_{e}\) from each galaxy center (see Figure 1) and not dominated by the two galaxies' light, are characterized by having lower S/N spectra, higher velocity dispersion (\(>250\) km/s) and likely a more complex stellar population composition (Barbosa et al., 2016). Similar characteristics also hold, in a gradual way, in the annular regions included between \(\sim 10\arcsec\) from the center of each galaxy (i.e. at 0.45R\({}_{e}\) for NGC3309 and at 0.28R\({}_{e}\) for NGC3311) and their effective radius. The determination of the stellar parameters in these regions is more difficult, not only due to the noise and faintness of the features, but also because it is affected by possible biases due to the fact that we fit a simple stellar population model where a mix of multiple stellar populations may be the case. We will refer to the regions from \(\sim 10\arcsec\) to 1R\({}_{e}\) as _outer regions_, to distinguish them from those of the inner galaxies, which are, in contrast, well described by a single stellar population and with homogeneous stellar properties. Since in the outer regions we faced these above-mentioned difficulties in the retrieval of their stellar properties, we decided to fix the kinematic values in the ALF fit (described in Section 3.1), as obtained by fitting only the spectrum from chip 5 (around 5500A) with the ALF super-simple mode. We further fix age and metallicity in the halo regions with values again obtained from the super-simple mode fit. The results obtained are consistent both with previous values and with the expectations of spectral indices. For more details on these choices, see Appendix C. ### Systematic errors We estimated the systematic errors by considering that different regions of the spectra hold different kinds of information on the stellar population properties, and thus that fitting only a specific wavelength range can bring biased results in one or more parameters. Repeating the same fit on different wavelength ranges, therefore can provide an estimate of such uncertainties. We analysed systematics in a slightly different way for the centers of the two galaxies and for the outer regions and halos. For the central regions, since their stellar populations exhibit similar values, we created stacked spectra to increase the S/N and highlight possible biases. In particular the stacked spectra of NGC3309 and NGC3311 are the sum of their innermost \(<3\arcsec\) spectra (i.e. \(\sim 6\) spectra each). We then tested the differences when fitting the whole spectral range, without the bluest region (\(<4200\)A), without the red region (\(>8000\)A) and excluding the region \(>6400\)A. 
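A schematic of this wavelength-exclusion test is given below, assuming a generic wrapper that launches an ALF fit over a restricted range and returns a dictionary of best-fit parameters; the wrapper name and the exact range bounds are illustrative.

```python
import numpy as np

# The same spectrum is refit over restricted wavelength ranges (Angstrom) and the
# scatter of each parameter across the refits is used as its systematic uncertainty.
RANGES = {
    "full":     (3650.0, 8730.0),
    "no_blue":  (4200.0, 8730.0),
    "no_red":   (3650.0, 8000.0),
    "cut_6400": (3650.0, 6400.0),
}

def systematic_errors(spectrum, run_alf_fit):
    # run_alf_fit(spectrum, wmin, wmax) -> {parameter name: best-fit value}
    results = {name: run_alf_fit(spectrum, wmin, wmax)
               for name, (wmin, wmax) in RANGES.items()}
    params = results["full"].keys()
    return {p: np.std([results[name][p] for name in RANGES]) for p in params}
```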
We then added in quadrature the standard deviation obtained from these fits for each stellar parameter to their statistical errors. For the outer regions, due to the increasing velocity dispersion and complexity of their stellar content, we preferred not to create stacked spectra but instead to consider three single spectra chosen as representative for three regions, i.e. one for the outer halos, one for the halo between the two galaxies and one for the outer region in the middle between the halo and galaxy cen ters. The tests performed are the same as for the stacked spectra, and their standard deviations have been added to the statistical errors in the same way. Results in Figures 6 and 7 show the final values with both statistical and systematic errors. Briefly, in the galaxy centers we found systematic error values of \(\sim\pm 1\) Gyr for age, \(\pm 0.04\) dex for metallicity, \(\pm 0.25\) for the IMF slope and \(\pm 0.08\) dex on average for the elemental abundances. For the outer regions and halos we obtained systematic errors on metallicity from \(\pm 0.3\) to \(\pm 0.5\) dex, and on elemental abundances on average from \(\pm 0.2\) to \(\pm 0.4\) dex. The IMF slope in the halos could not be constrained with our data and models as explained in Section 4.1. The systematic uncertainties we found in these regions on the IMF are indeed high with values around \(\pm 1\). ## 4 Results In this Section we present the results from the analysis described in the previous Section 3, obtained when fitting only one IMF slope. A comparison with the results with two IMF slopes is detailed in Section 4.1.1. All our results, as a function of the distance from the center of the two galaxies, are shown in Figures 6, 7, and 9. In the following Subsections we focus on the stellar population properties (Section 4.1) and on the kinematic results (Section 4.2). Kinematics of the gas emission component can be found in Appendix B. ### Stellar population properties Our stellar population results, obtained by fitting with ALF the 40 spectra with the setup described in Section 3, are shown in Figures 6 and Figure 7. The first set of plots shows the retrieved age, metallicity ([Z/H]+[Fe/H]), IMF slope and the derived mismatch parameter \(\alpha_{r}\). \(\alpha_{r}\) is defined as the ratio between the M/L in the \(r\) band obtained from the best fit model, and the M/L of the same model but with a Milky-Way IMF (Kroupa): \((M/L)/(M/L)_{MW}\). While a value of \(\alpha_{r}=1\) corresponds to a Kroupa IMF by definition, a value of \(\alpha_{r}=1.55\) corresponds to a Salpeter IMF, as indicated by the horizontal lines in the two bottom panels of Figure 6. Figure 7 shows the results of the elemental abundances with respect to Fe. K is not shown since models could not converge even with values \(>3.0\) dex (see Section 3.1). In all plots, error bars include systematic errors as discussed in Section 3.3. The results show rather constant old ages, with no visible trend from the center of the two galaxies to the external regions, with values around 13 Gyr. This behavior is consistent with previous literature results, e.g. Coccato et al. (2011), Loubser and Sanchez-Blazquez (2012) and Barbosa et al. (2016), but also in contrast with the latest findings of Barbosa et al. (2021) who found a negative gradient. However, Barbosa et al. 
(2021) show the radial results of all the Voronoi bins around NGC3311, and by inspecting their figure 5 the sharp age gradient is mostly caused by ages as young as 5 Gyr measured along the major axis, at a position angle PA\(\sim 32^{\circ}\) that is nearly orthogonal to the one we analyzed in this work (PA\(\sim 108^{\circ}\)); whereas the ages near PA\(\sim 108^{\circ}\) are higher, around \(8-9\) Gyr.

Figure 5: Same four example spectra of Figure 2 (center of NGC3309, center of NGC3311, halo between the two galaxies and external halo) in the optical region, and the best-fit spectra (green). Fitted emission lines are indicated by grey vertical dashed lines. Fluxes are normalized around \(4500-5500\)Å and shifted for clarity.

NGC3311 hosts a dust disk, as discussed in Section 2.3, where some level of star formation is still ongoing. As mentioned in Section 3.1, in those spectra corresponding to the regions with dust, we fit two stellar populations to take into account the presence of the younger component. The main components have similar old ages (i.e. \(\sim 13\) Gyr, as shown in Figure 6) as do the other surrounding central regions of NGC3311. The younger components have ages \(\sim 1\) Gyr with mass fractions below 1%. The stellar metallicity in the center of the two galaxies reaches solar values, while in the outer regions we obtain a negative gradient toward sub-solar values down to \(\sim-1.5\) dex. Close to the center of NGC3309 the metallicity shows a clear negative gradient starting from super-solar values around \(\sim 0.2\) dex, while in the center of NGC3311 the metallicity trend is flat around solar values. This particular behavior is similar to the velocity dispersion trend shown in Figure 9, and it will be discussed in Section 5.4 where correlations among parameters are analyzed. The solar metallicity values that we found in the center of NGC3311 are in slight tension with those obtained by Barbosa et al. (2021) who report a higher [Z/H]\(\sim 0.2\) dex. This gap could be attributed to the different adopted models. Indeed, Barbosa et al. (2021) used the EMILES models (Vazdekis et al., 2016), which are known to have a difference of the order of \(\sim 0.1\) dex with the Conroy et al. (2018) models for old and solar/supersolar metallicities (see e.g. Feldmeier-Krause et al., 2020, Lonoce et al., 2021). The total metallicity shown in Figure 6 does not show significant differences among the western, eastern and inner halos, however the [Fe/H] alone (see Figure 7) presents slightly higher values in the western region (on the left of NGC3309 in the plot). This is again in agreement with previous findings of Barbosa et al. (2016). Interestingly, also the IMF slope trend shows some similarity with the metallicity, in particular in the regions belonging to the two galaxies, where it is better constrained. NGC3309 shows a clear negative IMF gradient, from super-Salpeter (i.e. bottom-heavy) values in its very center to a top-heavier IMF at around \(10^{\prime\prime}\), confirming the typical trend found for local ellipticals (e.g.: Martin-Navarro et al., 2015; La Barbera et al., 2017; Sarzi et al., 2018). On the contrary, the IMF profile of NGC3311 is flat in its center at sub-Salpeter values, with mild signs of a positive gradient from \(\sim 5^{\prime\prime}\).
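To give a concrete sense of what a change of the single slope \(x\) implies, the toy calculation below integrates the IMF parametrization adopted in Section 3.1 and returns the fraction of the initially formed stellar mass locked in stars below 0.5 M\({}_{\odot}\). It ignores remnants and isochrones and is not the M/L computation performed by ALF; it only illustrates why bottom-heavier slopes drive \(\alpha_{r}\) upward.

```python
from scipy.integrate import quad

def stellar_mass(x, m_lo, m_hi):
    """Integral of m * dN/dm for the single-slope parametrization:
    dN/dm ~ m**-x below 1 Msun and m**-2.3 (Salpeter) above,
    matched continuously at 1 Msun."""
    lo = quad(lambda m: m * m**(-x), m_lo, min(m_hi, 1.0))[0] if m_lo < 1.0 else 0.0
    hi = quad(lambda m: m * m**(-2.3), max(m_lo, 1.0), m_hi)[0] if m_hi > 1.0 else 0.0
    return lo + hi

def low_mass_fraction(x, m_split=0.5, m_low=0.08, m_up=100.0):
    # fraction of the initially formed stellar mass locked below m_split
    return stellar_mass(x, m_low, m_split) / stellar_mass(x, m_low, m_up)

for x in (1.3, 2.35, 3.0):   # Kroupa-like, Salpeter, strongly bottom-heavy
    print(f"x = {x}: {low_mass_fraction(x):.2f}")
```

With this parametrization the low-mass fraction rises from roughly 15-20% for a Kroupa-like slope to about 50% for Salpeter and \(\sim\)75% for \(x=3\), which is the qualitative behavior behind the \(\alpha_{r}\) trends discussed here.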
We stress that the IMF slope values beyond \(10^{\prime\prime}\) for both galaxies are not robust as the analysis suffers from lack of IMF sensitive features, low S/N and larger velocity dispersion broadening, as reflected in their large error bars. The mismatch parameter \(\alpha_{r}\) obviously has a similar trend as the IMF slope, holding the same information. We decided to also show it in this form since, being unbounded to a particular parametrization of the IMF, it is more useful for a comparison of our results with other analyses obtained with different stellar population models. Elemental abundances have been precisely retrieved in the center of the two galaxies where we found solar or super-solar values with typical errors of 0.06 dex. [Cu/Fe], [Sr/Fe] and [Eu/Fe] have larger uncertainties also in the galaxy centers (i.e. 0.2 dex) since the fitted wavelength ranges do not include strong features sensitive to these elements. Some elements have clear negative gradients around the centers, like [Na/Fe], [Ti/Fe], [C/Fe], [O/Fe], [V/Fe] and [Co/Fe], others have flat trends, and only [Cu/Fe] has a positive gradient. We note a very close similarity of the chemical content between the two galaxy centers, which is valid for all elements. This important result will be discussed in Section 5. In the halos, elemental abundance values are typically different from the inner galaxy regions, reaching in several cases extreme values beyond those provided by models and thus subject to further uncertainty due to extrapolation. For example, in the case of copper, the extrapolation occurred up to 3.0 dex. Generally, we do not find evident differences from values retrieved from the western, eastern or inner halos, as also confirmed by the total metallicity trend. We also calculated the \(\alpha\)-elements enhancement trend by averaging together C, O, Mg, Ca, Si and Ti. The derived [\(\alpha\)/Fe] has a value of \(\sim 0.2\) dex in the center of NGC3309 and NGC3311, and a mild negative gradient toward their outskirts to around the solar value. However, the large scatter (i.e. \(\sim 0.25\) dex) prevents a robust confirmation of an actual gradient. Results from Coccato et al. (2011), Loubser & Sanchez-Blazquez (2012) and Barbosa et al. (2016) show slightly higher values (\(0.3-0.4\) dex), but due to our large scatter, it is still consistent with our findings. More discussion on the [\(\alpha\)/Fe] trend is presented in Section 5.4.3. #### 4.1.1 One versus two IMF slopes As detailed in Section 3, we have assumed a single slope IMF as a baseline. However, we have also performed the same fits assuming a double-slope IMF, with the first slope X1 describing the IMF in the mass range \(0.08-0.5\) M\({}_{\odot}\), and the second one X2 in the range \(0.5-1\) M\({}_{\odot}\). Above 1 M\({}_{\odot}\), the IMF is again Salpeter. While the other parameters are not affected by this change, the retrieved IMF values are visibly different, as shown in Figure 8. There, we focus on the comparison of the retrieved IMF in case of one IMF slope (solid lines) with the case of two IMF slopes (dashed lines) in the centers of NGC3309 (upper panels, pink) and NGC3311 (lower panels, green). When switching to two IMF slopes, we found very high X1 values in the center of NGC3309, reaching the limit of the allowed model values. In NGC3311 instead, we observe a larger scatter in X1, with adjacent points going from Kroupa-like to bottom-heavy IMF almost alternating. 
This behavior is due to the mutual degeneracy between X1 and X2, as also observed in Lonoce et al. (2021). We carefully checked the cross-correlation ellipses between X1 and X2 and confirmed high levels of correlation, with a mean Spearman correlation coefficient \(\rho=-0.45\) with \(p=0.009\). We conclude that with our set-up, the best choice is to adopt only one IMF slope. Some level of correlation with other parameters of the fit is still present for the case of one IMF slope, e.g. age and metallicity (and slightly Na and Ti, see Section 5.4 and Appendix D), but with lower values (Spearman coefficient \(\rho\sim 0.30\)), well within their final error. ### Kinematics Our kinematics results for the stellar component alone are shown in Figure 9 as orange lines. As discussed previously, we derived the kinematics values with ALF in Figure 6: Age, metallicity and IMF slope trends across the Hydra I cluster center as retrieved with ALF. West is on the left, East is on the right. The bottom panel shows the derived mismatch parameter \(\alpha_{r}\). Dash-dotted horizontal tan lines in the IMF and \(\alpha_{r}\) panels outline the corresponding values for a Salpeter IMF, dashed those for a Kroupa IMF. Open diamonds refer to regions where the results are less robust due to lower S/N, velocity dispersion broadening and/or lack of important features. full mode over the entire spectra only in the centers of both galaxies, i.e. within \(\sim 10^{\prime\prime}\), while for outer regions of the galaxy, the \(\sim 10^{\prime\prime}\) region is not well-defined. Figure 7: Similar to Figure 6 but for all the retrieved elemental abundances. gions and halos we relied our measurements on those extracted from fitting the chip 5 spectra (from 5100A to 5900A) with ALF in super-simple mode (Appendix C). This way we retrieved well-converged values that are in agreement with the literature. Indeed, in Figure 9 we have also plotted the estimated trends of the kinematic values from Richtler et al. (2011, green triangles) and Hilker et al. (2018, red circles), as they appear in figure 6 of Hilker et al. (2018). They show good agreement also in regions of low S/N, particularly in the halo between the two galaxies. The velocity dispersion profile across the core of the Hydra I cluster shows rather high values in the halo regions around \(300-350\) km/s, and rapid drops in the proximity of the two galaxies, toward \(\sim 200\) km/s. In the center of NGC3309 there is a negative and symmetric gradient of \(\sigma\), typical of elliptical galaxies, starting from its center toward \(\sim 10^{\prime\prime}\), with values running from \(\sim 250\) to around 200 km/s. On the other hand, in the center of NGC3311, the \(\sigma\) profile is flat at around 160 km/s within its innermost \(10^{\prime\prime}\). As in Barbosa et al. (2021), we will use this difference in the velocity dispersion profiles to test the validity of correlations between stellar population parameters, and in particular the IMF, with \(\sigma\) (see Section 5.4). The radial velocity profile shows a difference of \(\sim 250\) km/s between the two galaxies, with NGC3309 in the foreground with respect to NGC3311. As discussed in Richtler et al. (2011), NGC3309 is presumably spatially at a larger distance from the BGC which resides at the center of the cluster potential. Similarly to its \(\sigma\) profile, NGC3309 presents a (small) negative gradient in its center, not observed in Richtler et al. (2011) data. Instead, NGC3311 still has a flat trend. 
No internal rotation is visible for both objects along the adopted position angle. ## 5 Discussion With the results shown in Section 4, we have provided a detailed description of the stellar population properties across the two main galaxies of the Hydra I cluster, giving for the first time an extensive picture of their chemical content and IMF. Studying the stellar properties of these two companion galaxies, including their surrounding halos, with the same data set, as well as the same type of analysis and models, gives us the possibility of directly interpreting observed differences between the galaxies. This is fundamental to provide unbiased constraints on their assembly history. In this section, we will discuss the results, focusing on both the central and outer regions, in comparison with the literature, as well as on the obtained correlations among parameters (detailed in Section 5.4). ### Ngc3311 in the literature Regarding NGC3311, the BCG of the cluster, we already have a description of its main stellar properties, for example, Coccato et al. (2011), Loubser & Sanchez-Blazquez (2012), Arnaboldi et al. (2012), Barbosa et al. (2016, 2018, 2021), including information on the radial trend of its IMF (Barbosa et al., 2021). Despite the fact that these works have been undertaken with different Figure 8: Comparison of the retrieved IMF when adopting 1 vs 2 slopes in the center of NGC3309 (upper panels) and NGC3311 (lower panels). Solid lines refer to a single slope IMF while dashed lines to a double slope IMF. The right panels show the mismatch parameter \(\alpha_{r}\) obtained in both cases; dashed and dotted-dashed horizontal tan lines indicate the Kroupa and Salpeter value respectively. The two IMF slopes are mutually degenerate when fitted together, causing X1 to hit unnatural high values at the edge of model limits. To better highlight the uncertainties coming from the fit alone, in these plots error bars do not include systematic errors. data sets (thus with different wavelength ranges, etc...), different kinds of analysis (spectral index fitting or full spectral fitting) and different families of models, the results on age and metallicity are generally in agreement, pointing to an old (\(>12\) Gyr) and solar metallicity population in its center. We note an exception in Barbosa et al. (2021), who find a negative age gradient, in contrast with these other findings. As already mentioned in Section 4, our general findings are consistent with this picture, and add more information to the observed trends of 18 elemental abundances. In particular, the comparison of the retrieved IMF in the form of the mismatch parameter \(\alpha_{r}\) with the results of Barbosa et al. (2021) (performed with different models, thus with a different parametrization of the IMF) across the direction of our longslit, is very good within the innermost \(20\arcsec\) of NGC3311, showing a flat trend around \(\alpha_{r}\sim 1.5\). 
### NGC3309 and NGC3311 centers Focusing on the central regions (within \(20\arcsec\)) of both NGC3309 and NGC3311, where we have retrieved all the stellar parameters with high precision, we found a general agreement with similar available values found in the literature both in the center (e.g.: Graves and Schiavon, 2008; Johansson et al., 2012; Worthey et al., 2014; Conroy et al., 2014; Gu et al., 2021) and up to \(\sim 1\)R\({}_{e}\)(e.g.: Feldmeier-Krause et al., 2021; Parikh et al., 2019; Newman et al., 2017; van Dokkum et al., 2017) of local elliptical galaxies, showing in some cases also similar trends with R. Nevertheless, there are few studies for which as many elemental abundances for single objects can be retrieved; to date these have generally focused on the center of galaxies and with stacked spectra. Only recently have stellar population synthesis models with non-solar abundances and very high quality spectroscopic data become available. In the near future we will be able to compare these results in detail with much larger samples both in the central and outer regions of nearby galaxies. The centers of elliptical galaxies are thought to host the core of the _in situ_ stellar population in the two phase Figure 9: Velocity dispersion (upper panel) and radial velocity (lower panel) profiles as retrieved with ALF (orange). Green triangles and red circles show an indication of the trends retrieved by the works of Richtler et al. (2011) and Hilker et al. (2018), respectively, as they appear in figure 6 of Hilker et al. (2018) for the case of position angle \(108^{\circ}\). Typical error bars are shown on the right of both panels. formation scenario (Naab et al., 2009; Oser et al., 2010); thus focusing on the central regions of NGC3311 and NGC3309 may give us hints of their formation history. To better visualize this comparison, in Figure 10 the central 20\({}^{\prime\prime}\) of both galaxies for most of the retrieved stellar population properties are shown. This region corresponds to the limit of the deep potential well, just before the sharp rise of the velocity dispersion (see Figure 9). As can be seen in the upper left panel, and as already mentioned, the two galaxies show a significant difference in their velocity dispersion profiles, with NGC3309 (pink) showing the typical negative gradient of massive elliptical galaxies, and NGC3311 (green) instead exhibiting a flat trend in the center following a decrease from the outskirts. Other differences can be observed in the very center (\(<2^{\prime\prime}\)) where the metallicity and IMF (and Ni) both have higher values in NGC3309. On the other hand, the age and all elemental abundances in these regions, within their uncertainties, have the same values. This is an important result suggesting that the stars in the cores of these two objects have formed at the same cosmic time and from a similar chemically enriched material. A number of studies have found correlations between the abundance patterns and velocity dispersion (e.g.: Worthey et al., 2014; Conroy et al., 2014; Parikh et al., 2019; Feldmeier-Krause et al., 2021), with generally more elemental enhancement at higher masses (see also Section 5.4). Our two galaxies, however, deviate from these correlations. Indeed, their centers show similar abundances while their velocity dispersion is different. Moreover, also their dynamical mass is different, with M\({}_{dyn}^{NGC3309}\)/M\({}_{dyn}^{NGC3311}=0.7\), when derived both in the core and at R\({}_{e}\). 
This could be a particular case, though, since the proximity of the two stellar systems may suggest that both _in situ_ populations have effectively originated during the same star forming event. However, the difference in their IMFs suggests different paths in their star formation histories. ### Outer regions results While in the center of the two galaxies the parameters are precisely retrieved with well-shaped trends, in the outer regions and halos we find larger uncertainties and some scatter. As anticipated in Section 3.2, this can be connected to many factors: i) the lower S/N of these regions (Figure 1) due to the lower surface brightness \(\mu>22\) mag/arcsec\({}^{2}\) in the outer \(20-30^{\prime\prime}\) from the centers (Arnaboldi et al., 2012), ii) the increasing velocity dispersion that in \(\sim 20^{\prime\prime}\) doubles its value and broadens the absorption features causing more degeneracy among parameters, iii) the use of simple stellar population models on likely mixed stellar populations due to later accreted stars and minor mergers. Indeed, some level of scatter was noticed and commented in Barbosa et al. (2016) in the regions at R\(>\)1R\({}_{e}\) from NGC3311, and accredited to the presence of multiple components that make the halo around NGC3311 not homogeneous. Allowing for the above-mentioned caveats, we give a first estimate of the detailed chemical content of the stellar halo surrounding NGC3311 and NGC3309 within the stated uncertainties. As shown in Figure 7, most of the elemental abundances in the halos show evident differences with respect to the two galaxy centers, with some going toward sub-solar (e.g. Fe, O, C, Na, Ca, Sr) or super-solar values (e.g. Mg, Si, Ti, V, Co, Ni). However, considering the large uncertainties of the outer regions, significant differences can be confirmed only for Fe, Na, Si, V, Co, Ni, Cu and Sr; even so, the last 5 elements from this list still suffer the lack of calibration corrective factors in models, which, as mentioned in Section 3.1, are expected to be larger in regions of low metallicity. All \(\alpha-\)elements (C, O, Mg, Ca, Si, Ti) have rather similar trends, although, again, they show large error bars. Both higher S/N data and an improvement of models and codes to include mixed stellar populations would be required to significantly improve on this work. At present, we can provide a comparison with results of similar works to strengthen our findings. Studies of elemental abundances in galaxy centers are rare in the present literature; however, results in the outer regions are even rarer. We can compare our values in the outer regions with those from stacked spectra of Greene et al. (2015), for Fe, Mg, C, N and Ca out to \(\sim 60^{\prime\prime}\). We find good agreement for Fe, Mg and C, and some deviation for N and Ca. Calcium in particular has more sub-solar values in our results, but the disagreement may simply result from the different way in which it is determined, i.e. from the optical Ca4227 index alone for Greene et al. (2015). We note that fitting many Ca-sensitive features (like CaH, CaK, CaT) in our case, doesn't necessarily imply a better constraint (see Appendix E). However, our Ca is likely well-constrained since we fit the many elemental abundances on which Ca features also depend. Parikh et al. (2019) studied the stacked spectra of early-type galaxies out to 1R\({}_{e}\), extracting C, Mg, N, Ca, Na and Ti. 
Comparing with our results at \(\sim 1\)R\({}_{e}\), we find mild consistency with C and N, and more evident deviations for Na, Ti, Mg and Ca. Also of note is that the metallicity is different, with our values \(\sim 0.5\) dex more sub-solar, which may be due to the different adopted models. Other works retrieved fewer parameters (i.e. age, metallicity and some also [\(\alpha\)/Fe]), but extended to larger radii, as Boardman et al. (2017), Goddard et al. (2017), Greene et al. (2019) and Perez-Hernandez et al. (2022). However, given the larger scatter in the parameters in the outer regions for all of these measurements, we conclude that further data and a more complex treatment of way to include mixed stellar populations in the comparison with models are warranted to better constrain the halo properties. Nevertheless, we note that stellar parameters derived from the western, central and eastern halo regions are generally centered on very similar values, suggesting that, despite their larger uncertainties, they hold a chemical identity and likely share their past accretion history. We further discuss their possible origin in Section 5.5. ### Correlations Among Measured Parameters Correlations among physical parameters have been observed and studied in galaxies in order to find the drivers and mechanisms of their star formation history (Maiolino and Mannucci, 2019). These correlations can quantitatively contribute to the improvement of galaxy formation and evolution models, as described, for example, in Pipino et al. (2009), Vincenzo et al. (2016) and Guidi et al. (2018). Moreover, correlations among different chemical species can give us clues on the nu Figure 10: Zoom on the comparison of the retrieved parameters in the centers of the two galaxies: NGC3309 (pink) and NGC3311 (green). Excluding the velocity dispersion, metallicity and IMF, all the other parameters show very good agreement suggesting that the cores of the two companion galaxies have been formed from the same enriched material and at the same time. Differences in the velocity dispersion and IMF can be sign that the IMF trend is connected with the kinematic distribution during galaxy formation. cleosynthesis of each particular element (e.g.: Worthey et al., 2014; Maiolino and Mannucci, 2019). An often discussed global correlation is that between the stellar metallicity and the velocity dispersion \(\sigma\)(e.g.: Trager et al., 2000; Thomas et al., 2005; Gallazzi et al., 2005; Thomas et al., 2010; McDermid et al., 2015). Recently, it has been investigated if this correlation still holds within the same galaxy when measuring [Z/H] and \(\sigma\) as a function of the galaxy radius, and also expanded to individual elements (e.g.: Worthey et al., 2014; Greene et al., 2015; Parikh et al., 2019; Feldmeier-Krause et al., 2021). Finally, with increasing indications that the IMF is not universal, a radial correlation of the low-mass IMF with \(\sigma\) is also under investigation (e.g.: Conroy and van Dokkum, 2012; Cappellari et al., 2012; Spiniello et al., 2014) but still debated (Barbosa et al., 2021; Feldmeier-Krause et al., 2021). Since our galaxies have two different velocity dispersion profiles, particularly in their inner regions, our results are an optimal benchmark to test the global validity of such correlations. However, before investigating possible correlations among stellar properties, it is necessary to take into account the correlations that occur during the fit among parameters, i.e. their degeneracy. 
To do this, we inspected the marginal posterior distributions of all the pairs of parameters for each analyzed spectrum, calculated the Spearman coefficient and took into account those relevant in the discussion below in Sections 5.4.1 and 5.4.2. All the details of the fit correlations analysis are described in Appendix D. In the following sections we will focus on specific correlations, i.e. those with the IMF slope and velocity dispersion, and leave the comments on more general correlations to Appendix E. #### 5.4.1 Correlations with the IMF Among radial correlations, those with the IMF slope are actively under study, for example, in Sarzi et al. (2018), Barbosa et al. (2021) and Gu et al. (2021). In particular, Barbosa et al. (2021) have shown for our same galaxy NGC3311, that a robust radial correlation with the IMF can be found with the age and not with \(\sigma\). Although observed, a correlation with [Z/H] was not considered reliable by these authors since they observed a similar positive trend in the posterior-distribution, thus addressing the correlation to internal fit degeneracy (as we discuss in Appendix D). We notice that along our long-slit direction, our _IMF slope_ is fully consistent with the values of Barbosa et al. (2021) showing a flat trend, although their overall distribution of values as a function of radius from all the Voronoi bins around NGC3311's center presents a mild negative gradient. On the contrary, and similarly to their previous work Barbosa et al. (2016), we do not see the same sharp negative _age_ gradient, which in Barbosa et al. (2021) is rather significant. Our results are consistent with the positive IMF-[Z/H] correlation and we checked that our internal fit degeneracy has only a mean \(\rho=0.21\) with \(p=0.13\), indicating a low probability of finding a correlation due to degeneracy. These differences underscore the difficulty of comparing results of stellar populations from analysis based on different codes and models. With our analysis, albeit based on only one direction across the galaxies, we can compare the two companions in a robust way. Moreover, in Appendix C.1 we discuss the strengths of our IMF measurements, including the comparison with the expectations of spectral indices. We show the trends of the IMF slope with \(\sigma\) and metallicity in the centers of these galaxies in Figure 11. Regarding the dependence of the IMF on the velocity dispersion (left panel), we first compare our results with the global relations from, for example, Conroy and van Dokkum (2012) and Cappellari et al. (2013). By averaging our central values, we indeed find a good consistency with their findings. However, our two local IMF slope trends follow two different positive correlations as shown by the line in the left panel of Figure 11: a much steeper one for NGC3309 (diamonds, dot-dashed line), and a milder one for NGC3311 (stars, dashed line) that is also consistent with a flat trend. Interestingly, higher values of both \(\sigma\) and IMF slope are seen in the center of NGC3309 and, on the contrary, in the outskirts of NGC3311, as a consequence of their opposite radial profile of both \(\sigma\) and IMF. As noted in the following Section 5.4.2, generally the global scaling relations are not equally replicated by the local ones. In addition, Parikh et al. (2018) show the local trend of the \(\alpha\) mass excess factor with \(\sigma\) and find different slopes with respect to the global relation. 
With the direct comparison our two objects, we further show the complexity and peculiarity of each galaxy, but also confirm that a trend with \(\sigma\) holds for both of them. We thus conclude that, although there is not a unique trend that radially correlates the IMF slope with the velocity dispersion in an absolute way, \(\sigma\) and the IMF may be interconnected and local processes may also affect this relation. IMF trends with metallicity of both galaxies show a positive correlation (Figure 11, right panel, dashed line). Few outliers may be explained by the higher uncertainties in the IMF measures in the outer regions, as described in Appendix C.1. While the correlation for only NGC3309 is strong with \(\rho=0.88\) and \(p=0.000025\), the overall correlation is milder with \(\rho=0.41\) and \(p=0.04\). However, the most central values of NGC3311, where errors are smaller, are well fitted in the trend, reinforcing the hypothesis that a local connection between IMF and metallicity does hold. This finding confirms the results of Parikh et al. (2018) who find that regardless of the mass bin or radial position, the IMF tracks very well the total metallicity globally and locally in a similar way. Other examples of similar results come from Martin-Navarro et al. (2015), van Dokkum et al. (2017) and Feldmeier-Krause et al. (2021). We have also checked for the presence of a correlation between IMF and [Mg/Fe] in the two centers, but found no correlations, neither separately, nor together. This is in agreement with La Barbera et al. (2015) and Martin-Navarro et al. (2015). Indeed, we stress once again, that the retrieved single-element abundances are very similar in both galaxy centers. In general, as noticed in van Dokkum et al. (2017), the correlations of the IMF slope with single elemental abundances show larger scatter and differences; only the overall metallicity follows the IMF trend. #### 5.4.2 General scaling relations In Figure 12 and 13, all elements as a function of the velocity dispersion are shown, with the same color-coding of Figure 6, i.e. with darker red in the two galaxy centers and bluer in the outer halo regions. In the lowest rightmost panel of Figure 13, the [\(\alpha\)/Fe] trend, calculated as the average of C, O, Mg, Ca, Si and Ti, is shown. A general comment is that the well-established global scaling relations among early-type galaxies are not easily reflected in the local relations, as also observed in Parikh et al. (2019), here complicated also by the presence of the outer stellar halo. This is evident for the [Z/H] vs \(\sigma\) relation, that in our results shows a general clear negative gradient, while we usually find higher metallicity at higher velocity dispersion. We remark that by fitting the spectrum composed by stacking all the radial regions spectra within 1Re of each galaxy, thus considering the observation of the global galaxy, we obtained results perfectly in agreement with local relations (Thomas et al., 2010), with a (slightly) higher metallicity for a (slightly) higher velocity dispersion ([Z/H]\({}_{NGC3311}=-0.13\pm 0.04\) dex and \(\sigma_{NGC3311}=215\pm 5\) km/s, and [Z/H]\({}_{NGC3309}=-0.04\pm 0.01\) dex and \(\sigma_{NGC3309}=238\pm 2\) km/s). Our result indicates that locally within galaxies, the [Z/H]-\(\sigma\) positive correlation is only an artifact that occurs because typical elliptical galaxies have a decreasing velocity dispersion profile. 
As in the case of NGC3309's center, indeed, we find a steep increasing local gradient of [Z/H] with \(\sigma\) as a consequence of both having a negative gradient with R. This is also valid for some of the elemental abundances, such as Na and Ti (see Figure 12). While in the centers elements behave smoothly and homogeneously, with flatter trends for the galaxy with the lower and flatter velocity dispersion profile (NGC3311) and steeper positive gradients with \(\sigma\) for the one with the steeper velocity dispersion profile (NGC3309), globally the trends with \(\sigma\) are more difficult to spot and justify. At \(\sigma>250\) km/s, the distinction between the two galaxies is erased, meaning that the surrounding halo is equally non-homogeneous. In these areas elements exhibit more scatter but, with the exception of Ni and Ti, values are either all sub-solar or all super-solar. This could be the first sign that the halo regions have their own unique chemical identity, although the large scatter decreases the statistical significance of this result. Moreover, it must also be noted that in these outer regions and halos, the higher velocity dispersion values are not tracing higher stellar mass contributions as in the central regions, but a higher contribution from dark matter. Indeed, Richtler et al. (2011), analyzing the velocity dispersion profile derived from NGC3311 and its surrounding globular clusters out to \(\sim 200\) kpc, have argued that a cored dark matter halo is necessary to explain the observed kinematics.

#### 5.4.3 \(\alpha\)-elements

The global scaling relation of [\(\alpha\)/Fe] with \(\sigma\) has also been widely observed and discussed (e.g.: Thomas et al., 2005, 2010; Johansson et al., 2012), thus far indicating an enhancement of \(\alpha\)-elements in more massive galaxies. This parameter has been studied in particular because it is directly connected with the timescale of the star formation process (Thomas et al., 2005). Specifically, more massive galaxies, observed to be more \(\alpha\)-enhanced, are thought to have formed over a shorter timescale, typically \(<1\) Gyr for early-type galaxies. As a consequence, considering the centers of NGC3309 and NGC3311, and that the ratio of their dynamical masses is \(\sim 0.7\), we would expect a difference in their \(\alpha\)-element abundances. Instead, if we average all the retrieved \(\alpha\)-elements together, we find consistent values between the two galaxies. Indeed, in Section 5.2 and Figure 10 we have already shown the high similarity of elemental abundances in the two central regions. The same flat [\(\alpha\)/Fe] vs \(\sigma\) trend, as shown in Figure 13, remains constant also out to higher \(\sigma\) values, i.e. in the halos (as also observed in Barbosa et al., 2016). As already noted in Graves & Schiavon (2008), and widely investigated afterwards, the abundance patterns of stellar populations are too complex to be described with only one parameter. Indeed, the detailed abundance characteristics of stellar populations offer a wealth of information on galaxy formation processes and stellar nucleosynthesis. With the possibility of observing the radial variation of many single \(\alpha\)-elements, we can thus hope to better characterize the past history of these cluster members. We then derived the [\(\alpha\)/Fe] trend in three different ways by averaging C-O, Mg-Si and Ca-Ti separately. These trends are shown in Figure 14, from which it is clear that they do follow different behaviours, as expected.
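The grouping itself is straightforward; a minimal sketch, assuming the per-element [X/Fe] profiles are stored as arrays keyed by element name (names and data structure are illustrative), is:

```python
import numpy as np

# Grouping used for the three [alpha/Fe] trends of Figure 14.
GROUPS = {"C-O": ["C", "O"], "Mg-Si": ["Mg", "Si"], "Ca-Ti": ["Ca", "Ti"]}

def grouped_alpha(abund):
    out = {}
    for label, elems in GROUPS.items():
        stack = np.vstack([abund[e] for e in elems])
        # mean of the group at each radial position; the standard deviation of the
        # group members serves as the error bar (as in the left panel of Figure 14)
        out[label] = (stack.mean(axis=0), stack.std(axis=0))
    return out
```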
C-O (orange) traces roughly the Ca-Ti (red), albeit being higher mostly in the centers, while Mg-Si (blue) shows a totally different behavior especially in the halos. In the right panel of Figure 14, with the same colors, we show the three [\(\alpha\)/Fe] trends as a function of the velocity dispersion. It can be noticed that the Mg-Si trend clearly shows a positive gradient, totally in contrast with the other two trends. If we would have considered the average of all \(\alpha\)-elements trend alone, as is usually done, we would have concluded that the central regions and halos shared the same \(\alpha\)-enhancement and thus star formation timescale. Instead, by inspecting separate elements, produced by different processes or by a different mix of them, it becomes clear that the halo regions have probably experienced a different star formation history. If following the [Mg/Fe] ratio, as used in Thomas et al. (2005, 2010), our Mg-Si trend suggests a star formation timescale in the range \(0.2-1\) Gyr for regions within \(1\mathrm{R}_{e}\), and \(<0.1\) Gyr in the outer regions. We have also checked if the difference among \(\alpha\)-element trends can be addressed to the corrections applied during the post-processing of the fit. Without the corrections the difference is still visible, however the Mg-Si trend is flat at \(\sim 0.2\) dex also in the halo regions, with the consequence of an overall constant star-formation at \(\sim 1\) Gyr. From the analysis of the detailed \(\alpha\)-elements we can conclude that: i) regardless of their different mass and velocity dispersion, the core of the two galaxies have formed with the same star formation timescale, ii) the outer regions show signs of different production mechanisms for different \(\alpha\)-elements, iii) the outer regions stars have formed with a different star formation timescale than the centers. ### The possible origin of halo stars In Section 3.2 we have discussed the similarity among the western, central and eastern regions. With the results obtained on the \(\alpha\)-element distributions and the related information on the star formation time-scale discussed in Section 5.4.3, we can also speculate on the possible origin of the halo stars. The relatively small spread of elemental values around the cluster seems consistent with an origin due to the accretion of dwarf galaxies rather than globular clusters, the latter of which, with their significantly larger numbers, would have resulted in a more diverse distribution of abundance values (e.g., Aoki et al., 2020; Ji et al., 2020). Moreover, the single \(\alpha\)-element trends in the halos (see Figure 14) show a clear difference in their values in the central regions, suggesting that the stars Figure 11: Correlations between IMF slope and velocity dispersion (left), and metallicity (right). Points are color-coded as in Figure 6, with darker red in the center. Diamonds refer to NGC3309 and stars to NGC3311. On the left, the dashed line indicates the linear fit to the NGC3311 points and the dot-dashed line the fit to those of NGC3309; on the right, the dashed line refers the global linear fit. Figure 12: Correlations with velocity dispersion of metallicity and all retrieved elements. Points are color-coded as in Figure 6, with darker red in the center and blue in the outskirts. Diamonds refer to NGC3309 and stars to NGC3311. 
Within 250 km/s, in the central regions, it can be easily seen that the different \(\sigma\) radial profiles of the two galaxies produce different trend of elements with \(\sigma\), proving that local scaling relations are only explained by their \(\sigma\) profiles. At \(\sigma>250\) km/s, instead, elements in the halos are distributed in a different way, which is hard to connect to the inner regions due to the increasing contribution of dark matter. in the halos have been formed in different stellar systems, where the star formation occurred over different time-scales. A possible origin of such stars could be dwarf spheroidal or ultra-faint dwarf galaxies. Indeed, these kinds of stellar systems are known to host chemical properties similar to the ones we observed in the Hydra halo, i.e. very low metallicity associated with, for example, super-solar [Mg/Fe] and very low [Ba/Fe] (Koch et al., 2008). Additional observational evidence supporting this scenario can be found in the works of Coccato et al. (2011), Ventimiglia et al. (2011) and Arnaboldi et al. (2012). On the basis of stellar population parameters and kinematic properties of planetary nebulae in the halo around NGC3311 and NGC3309, these works show evidence of accreted satellite galaxies which have been tidally stripped and then diffused into the stellar halo. This mechanism is witnessed to be still ongoing for the dwarf low-metallicity galaxy HCC 026 and the S0 galaxy HCC 007, in the proximity of the cluster center, where tidal streams are observed and dynamically characterized. ## 6 Summary and Conclusions In this work we have analyzed high-quality long-slit optical+NIR spectroscopic data across the two brightest galaxies of the Hydra I cluster. We have characterized in detail the stellar population of their centers, where the _in situ_ component still resides, and compared them to their surrounding stellar halos where, on the contrary, their evolution has resulted in mixed stellar components. The advantage of studying these two galaxies together is that we have been able to compare directly data taken with the same instrumental setup and also using the same data-reduction methods and stellar-population-synthesis models. In addition, we have been able to test the validity of many scaling relations (in a local setting), since the two objects differ in their mass and velocity dispersion radial profiles. With full spectral fitting over a large wavelength range, in comparison with stellar population synthesis models allowing for non-solar values of elemental abundances and a non-constant IMF slope, we determined age, overall metallicity, IMF slope and 19 elemental abundances with good precision. Due to the lower signal-to-noise of IMF-sensitive spectral features in the halo regions, the IMF could only be robustly derived in the centers of the two galaxies. Despite their different masses and velocity dispersions, we find that the two galaxy centers are very similar in their stellar content, with same age and same elemental abundances. This suggests that their formation happened at the same cosmic epoch and that they shared a similar chemical enriching history. Moreover, since we can correlate \(\alpha\)-elements with the star formation time-scale, it also appears that their star formation history has been prolonged in the same way. Beyond such shared characteristics that may suggest that NGC3311 and NGC3309 followed a similar evolutionary path, we also measured some disparities that suggest a slightly more complex picture. 
In particular, the two galaxies differ in their overall metallicity, the IMF slope, and the radial velocity dispersion profile. Focusing on these three properties that change, and investigating their possible relation, we found that: i) the IMF correlates well with [Z/H] both locally and globally, with higher metallicity having a bottom-heavier IMF. Although the difference in metallicity in the centers of the two galaxies is small (\(\sim 0.1\) dex), the _local_ metallicity-IMF correlations are consistent with the suggestion by Martin-Navarro et al. (2015b) that the metal content could have affected the initial collapse of the molecular clouds, thus shaping the low-mass end of the IMF. We can also speculate that the high-mass end of the IMF, co-responsible for the chemical enrichment, is similar for the two galaxy centers. ii) The IMF correlates with the velocity dispersion, with higher \(\sigma\) connected with a bottom-heavier IMF. Moreover, we found that the local correlations of the two galaxies have different slopes, suggesting not only that the IMF and \(\sigma\) can be related, but also that local processes within the same galaxy can drive this connection. In a similar way, the elemental abundance trends with \(\sigma\) also show different local behaviours for the two galaxies. Given the different velocity dispersion profiles of NGC3311 and NGC3309, we were able to distinguish trends that are likely robust global correlations from those that are only the consequence of possessing both abundance gradients and a negative velocity dispersion gradient, typical of elliptical galaxies. Analysing the outskirts and stellar halo regions, we found gradually larger uncertainties in the retrieved stellar properties. These larger uncertainties are due to many factors, including a lower signal-to-noise, as well as the increasing line broadening due to the higher velocity dispersion and the presence of mixed stellar components. Indeed, the investigation of stellar halos is at the moment limited by the lack of fitting codes that allow for multiple populations that differ not only in their age but also in their chemical properties. In addition to these limits, we found clear chemical patterns in the halos, with homogeneity among the eastern, western and the regions between the two galaxies, suggesting an overall common evolution for the central \(\sim 200\arcsec\) of the cluster. Although it is not yet possible Figure 13: Continued. to resolve stars in such distant stellar halos, a dedicated study of chemical properties for different regions around the Hydra cluster halo would help in understanding the origin and nature of the accreted systems. From our findings we can speculate that the origin of the halo stars can likely resides in dwarf spheroidal or ultra-faint dwarf galaxies that have been bounded to the cluster potential well, as also observed in previous works (Vemitiglia et al., 2011; Arnaboldi et al., 2012). Further investigations will be needed to confirm these findings and in particular to understand if they are a characteristic of cluster galaxies. ## Acknowledgments We are thankful to the anonymous referee for reviewing the manuscript and for the helpful suggestions. I.L. thanks B. Madore for his help during the observations of the data used in this work. This research has made use of "Aladin sky atlas" developed at CDS, Strasbourg Observatory, France. The Wisconsin H-Alpha Mapper and its Sky Survey have been funded primarily through awards from the U.S. National Science Foundation. 
Facility: Magellan:Baade (IMACS).

Software: IRAF (Tody, 1986, 1993), IDL, MOLECFIT (Smette et al., 2015; Kausch et al., 2015), emcee (Foreman-Mackey et al., 2013), ALF (Conroy et al., 2018, 2021), PPXF (Cappellari, 2017).

Figure 14: [\(\alpha\)/Fe] trend when averaging: C-O (orange), Mg-Si (blue) and Ca-Ti (red). Left panel as a function of the distance from NGC3309, right panel as a function of the velocity dispersion. Error bars in the left panel are the standard deviations of each average; to better see the trends, error bars are omitted in the right panel.

## Appendix A Foreground emission

All the spectra extracted from the long slit positioned across the Hydra I cluster center show a uniform emission of the lines [OII3727A], H\(\beta\), [OIII5007A], [NI5200A], H\(\alpha\) and [NII6585A]. By analysing this emission, in particular the strongest line, i.e. [OII], it is clear that these lines do not belong to the cluster's light, but rather are local to the Milky Way. We fit all the lines with a Gaussian profile, and found a mean cz\(\sim 25\) km/s with a mean velocity dispersion of 360 km/s. This foreground emission is constant along the whole physical direction of the slit. Fig. 15 shows a zoom of each foreground emission line in an example spectrum extracted in the halo between NGC3309 and NGC3311.

Figure 15: Foreground emission lines in four different regions of an example spectrum extracted in the halo between NGC3309 and NGC3311.

The observed lines and their intensities are consistent with those typical of the Warm Ionized Medium (WIM), as studied over the past two decades (_e.g.,_ see the review by Haffner et al., 2009). Indeed, the WIM is characterized by strong [NII] and [SII] lines, while [OIII] and [SIII] are generally weaker (Mathis, 2000). Moreover, [OII3727A] is generally the strongest line in the optical region. We also checked some line ratios, like [NII6583A]/H\(\alpha\), and found that they are consistent with Mathis (2000). Generally, the WIM is considered to be produced by UV-bright O-B stars, but studies are still investigating whether other sources and mechanisms, like supernova remnants (Raymond, 1992), shock excitation (Martin, 1997), or dust-scattered radiation (Barnes et al., 2015), can contribute to the ionization. We found a confirmation of our detection of the WIM in the foreground region of the Hydra I cluster from the Wisconsin H-Alpha Mapper Sky Survey (WHAM, Haffner et al., 2003). The WHAM survey scanned the whole sky at the H\(\alpha\) wavelength with a spectral resolution of 12 km/s and a spatial resolution of one degree. By inspecting the WHAM H\(\alpha\) intensity map around the region of the Hydra I cluster, we confirmed that there is a moderate emission of H\(\alpha\) which is constant around the cluster, as measured in our spectroscopic data. With the aim of finding possible known candidates for the source of this emission, we looked for O-B-A stars around the cluster. We cross-checked the list of nearby UV-bright stars with GALEX data (Bianchi et al., 2011) using Aladin (Bonnarel et al., 2000). We only found two stars with high flux in the B band, i.e. HD-91209 and HD-93657, with spectral types A3IV and A1V respectively, but their locations do not match the morphology of the higher-intensity regions in the H\(\alpha\) map very well. We have also checked for the presence of supernova remnants from the catalog of Green (2019), but did not find any matches.
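As a concrete illustration of the line-measurement step described at the beginning of this appendix, the sketch below fits a single foreground emission line with a Gaussian and converts the centroid and width into a recession velocity and a velocity dispersion. The synthetic spectrum, the flat continuum, and the starting guesses are hypothetical placeholders rather than the actual reduction used here, and no correction for instrumental broadening is applied.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458  # speed of light [km/s]

def gaussian_line(wave, amp, center, sigma, cont):
    """Gaussian emission line on top of a flat continuum."""
    return cont + amp * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def fit_emission_line(wave, flux, rest_wave):
    """Fit one line; return cz and velocity dispersion in km/s."""
    p0 = [flux.max() - np.median(flux), rest_wave, 2.0, np.median(flux)]
    popt, _ = curve_fit(gaussian_line, wave, flux, p0=p0)
    amp, center, sigma, cont = popt
    cz = C_KMS * (center - rest_wave) / rest_wave
    sigma_v = C_KMS * abs(sigma) / rest_wave  # instrumental broadening ignored
    return cz, sigma_v

# Synthetic data around [OII] 3727 A (placeholder values only).
rng = np.random.default_rng(42)
wave = np.linspace(3700.0, 3755.0, 220)
flux = 1.0 + 0.5 * np.exp(-0.5 * ((wave - 3727.3) / 4.5) ** 2)
flux += rng.normal(0.0, 0.02, wave.size)

cz, sigma_v = fit_emission_line(wave, flux, rest_wave=3727.0)
print(f"cz = {cz:+.1f} km/s, sigma = {sigma_v:.1f} km/s")
```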
No visible X-ray emission sources were found in this same region: this is not unexpected since the X-ray emission originates from the potential well of the cluster at z\(\sim 0.013\), as spectroscopically confirmed by the Chandra data of Hayakawa et al. (2004). We further analyzed the photometric data available in both the B (Bessell-B1) and I (CTIO-I1) band, as observed with IMACS during the same night of the spectroscopic observations. The large field of view of 15\({}^{\prime}\) could potentially help in localizing a hypothetical excess of blue light. We then reduced the photometric data in both bands and generated the color B-I frames of the Hydra I cluster. No blue excess is observed; the color is highly uniform in all observed regions. We conclude that with the available data and catalogs, we are not able to retrieve the actual source of this diffuse ionization. ## Appendix B Gas Emission Figure 16: Retrieved gas emission properties. From the top to the bottom: recession velocity offset from the main stellar component, gas velocity dispersion, intensity of Balmer, [OII] and [NII] emission lines. In Figure 16, top two panels, the kinematics of the gas component, retrieved with ALF, is shown. The top panel indicates the relative shift in radial velocity of the gas component with respect to the main stellar component. We note that, for NGC3309, the small amount of gas present in its center is strongly decoupled from the stellar component with velocities offset by up to 300 km/s. While for NGC3311, which hosts strong emission lines in its center (as shown for Balmer lines and [NII] in the lower panels of Figure 16), there is a velocity offset of only 100 km/s from the stellar system. This is consistent with the findings of Richtler et al. (2020) (see their figure 6), when considering the position of our slit with respect to the galaxy (see Figure 4). A difference between the two galaxies can be observed also looking at the velocity dispersion of their gas component: in the center of NGC3311 the gas has similar velocity dispersion values (comparing to Figure 9) to the stellar component, while NGC3309's center present much higher \(\sigma\) values. The latter, however, have larger error bars probably due to the more difficult detection of the gas in emission considering its paucity. As specified in Section 3.1, in the fit with ALF we included the emission of the following lines, typical of young, star forming regions, as free parameters: Balmer lines, [OII], [OIII], [SII], [Nl] and [NII]. In Figure 16 we only show the retrieved intensity of Balmer lines, [OII] and [NII], which are the strongest in our fitted wavelength range, and constrained best. Due to their weakness, [OIII], [Nl] and [SII] generally span all the possible values and have large error bars. The detection of emission lines in the core of NGC3311 is not unexpected due to the presence of the dusty disk as shown in Figure 4, as extensively observed in the literature (e.g. Lindblad, 1977; Wirth & Gallagher, 1980; Vasterberg et al., 1991; Grillmair et al., 1994). More recently, Richtler et al. (2020) showed that this inner dust disk embeds an ongoing star formation region whose gas, detected in emission, is perfectly confined within its limits. Taking into account the likely presence of a small and young stellar component in this confined region, we have fit two stellar components as allowed by ALF. The age of the minor one is bound in the fit from 0.5 to 3 Gyr. 
We found young component fractions \(<\) 1% with ages from 0.8 to 1.3 Gyr, in agreement with the estimates in Richtler et al. (2020). Regarding the center of NGC3309, we fit only the very central bin with a double component, since the retrieved age does not converge well otherwise. We found a 1.1% young component with an age of 1.8 Gyr. ## Appendix C Outer Regions Fit Details In this section we focus on how we treated the analysis of the outer regions and halos, located at \(>10^{\prime\prime}\) and characterized by lower S/N, higher velocity dispersion, and likely mixed stellar populations. Lower S/N and higher velocity dispersion make the absorption line features less evident and broader. Moreover, the large number of rejected pixels due to increasing sky residuals, mostly around 6400A and 8400A, makes the fit cover wavelength ranges over several gaps. Mixed stellar populations can instead create a bias on the retrieved parameters obtained with SSP models. After several tests aimed at verifying the robustness of the fit on these outer spectra when changing wavelength region and using ALF in either full or simple mode (the latter consisting of a smaller set of free parameters), we noticed that the retrieved parameters that suffer most from the above-listed difficulties are the kinematics ones, i.e. the velocity dispersion and the radial velocity. Indeed, these parameters did not converge well even after increasing the number of walkers. Since the kinematics severely affect the retrieval of stellar population parameters due to degeneracy, we further investigated this problem as follows. We fit each spectral range separately with ALF in the super simple mode (retrieving only the kinematics, age and metallicity) and compare the kinematics results with those obtained with PPXF, set up as described in Section 2. The comparison shows large scatter both among results obtained with the same code but from different spectral ranges, and between the results obtained with different codes on the same wavelength region. The only observed good consistency between the two codes was the results from chip 5 (around 5500A) and partially chip 8 (around 4700A), where the spectra have higher signal and a larger number of strong features (e.g. Mg\({}_{b}\)). Repeating the same test on inner galaxy spectra instead, we found a full consistency. Kinematic results from chip 5 are also in very good agreement with those in the literature, as detailed in Section 4 and shown in Figure 9. For these reasons, we decided to fix the kinematic values of the halo and outer regions to those obtained by our ALF run on chip 5 spectra with the super simple mode. With similar arguments, we decided to keep fixed also the age and metallicity values as extracted from chip 5 in the halo regions, when later performing the full-mode fit. The retrieved values of age and [Z/H], indeed, are in good agreement with the indications of the two spectral indices H\(\beta\) and [MgFe]', which consolidates our choice. To summarize, when performing the fit with ALF in full mode, as described in Section 3.1, in the outer regions we fixed the kinematic values to the simple mode ones extracted from chip 5, while in the halos we fixed not only the kinematics, but also age and metallicity. ### The IMF in the centers In this section we test the accuracy of our results by comparing them with the expectations of some IMF sensitive indices, such as TiO2 and bTiO. 
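For reference, the following is a minimal sketch of how a magnitude-style molecular-band index of this kind can be measured: the mean flux in the feature band is compared with a pseudo-continuum interpolated between a blue and a red sideband. The band limits used below are rough placeholders for the TiO2 region, not the official TiO\(2_{sdss}\) or bTiO definitions, which should be substituted before any quantitative use.

```python
import numpy as np

def band_mean(wave, flux, lo, hi):
    """Mean flux inside the wavelength window [lo, hi] (Angstrom)."""
    sel = (wave >= lo) & (wave <= hi)
    return flux[sel].mean(), 0.5 * (lo + hi)

def molecular_index(wave, flux, blue, feature, red):
    """Magnitude-style molecular-band index (TiO2-like).

    blue, feature, red are (lo, hi) tuples; the pseudo-continuum is a
    straight line between the mean fluxes of the blue and red sidebands,
    evaluated at the center of the feature band.
    """
    f_b, l_b = band_mean(wave, flux, *blue)
    f_r, l_r = band_mean(wave, flux, *red)
    f_f, l_f = band_mean(wave, flux, *feature)
    f_cont = f_b + (f_r - f_b) * (l_f - l_b) / (l_r - l_b)
    return -2.5 * np.log10(f_f / f_cont)

# Placeholder band limits, roughly in the TiO2 region near 6200 A;
# substitute the official index definitions before any quantitative use.
blue, feature, red = (6066.0, 6141.0), (6189.0, 6272.0), (6372.0, 6415.0)
wave = np.linspace(6000.0, 6500.0, 2000)
flux = np.ones_like(wave)
flux[(wave > 6189.0) & (wave < 6272.0)] *= 0.97    # mock 3% band depression
print(f"index = {molecular_index(wave, flux, blue, feature, red):.4f} mag")
```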
Indeed, when taking into account age, metallicity and all of the elemental abundances (see Loonce et al., 2021), spectral indices can give reliable indications on the IMF slope value, as widely done in the literature (e.g.: Martin-Navarro et al., 2015; La Barbera et al., 2015; Parikh et al., 2018). We measured the value of the indices in our wavelength range and made comparisons with the results obtained with ALF. In addition to metallicity that perfectly matches the measured total metallicity indicator [MgFe]' (Thomas et al., 2003) at the retrieved ages, also the IMF trends in the centers are confirmed, for example, by TiO2 and bTiO. In Figure 17, the plot of the retrieved IMF slopes vs the measured TiO\(2_{sdss}\) index is shown. In this plot we only show the measures from spectra that were not disturbed by sky residuals, i.e. only the central regions. Together with our measurements, we also show the expectations of models at different metallicity values (different colors) and with a depletion of [Ti/Fe] (dashed lines). We recall, however, that [Ti/Fe] is confined to a region of \(\pm 0.1\) dex in the center of both galaxies. The models also outline regions of different velocity dispersion values, but notably, \(\sigma\) does not affect this index much (while other indices are more affected, like bTiO). From this figure it is clear that the very central region of NGC3309 (dark red diamonds) has increasing values of TiO\(2_{sdss}\), suggesting a degeneracy effect of both increasing [Z/H] and IMF, while NGC3311 values (dark red stars) are confined in a narrow region pointing to constant metallicity and IMF. Since [Z/H] is very well constrained as confirmed by [MgFe]', we are confident that the retrieved trend of IMF in both galaxies is well validated by the value of this index. Finally, from this plot it is also possible to understand how difficult the retrieval of the IMF at lower metallicity values is, where the slope of models is steep, meaning that the same value of TiO\(2_{sdss}\) can be explained with both a Kroupa-like or bottom-heavy IMF. This is the case for the orange points, plotted with open symbols to indicate the higher uncertainty of their retrieved values. In conclusion, our IMF measurements in the centers of NGC3309 and NGC3311 have the following strengths: i) we fitted a large wavelength range in the optical and NIR with high S/N, including many IMF-sensitive spectral features, ii) all fits are well-converged in all parameters, including the IMF, iii) we took into account all elemental abundances that contribute to the shaping of spectral features, iv) we considered the systematics affecting all parameters, including the IMF, and v) we checked the values of IMF-sensitive spectral indices to confirm the results obtained from full spectral fitting. We will discuss the implications of the retrieved values in Section 5.4.1. ## Appendix D Fit correlations As mentioned in the main text, before discussing correlations among parameters, it is important to check the level of degeneracy among them during the fit, to rule out nonphysical correlations. In this brief section we list out results. For each analyzed spectrum, we inspected the marginal posterior distributions of all the pairs of parameters, and to be quantitative, we calculated the Spearman correlation coefficient \(\rho\) for each distribution. In general, each spectrum presents many relevant correlations (i.e. those with p-value \(p<0.1\)), but not all are recurrent for all spectra. 
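A minimal sketch of this degeneracy check, assuming the posterior draws are available as a simple array with one column per fitted parameter (mocked up below with hypothetical parameter names), could look like the following.

```python
import itertools
import numpy as np
from scipy.stats import spearmanr

# Hypothetical posterior chain: n_samples draws of a few fit parameters.
rng = np.random.default_rng(0)
names = ["age", "[Z/H]", "IMF", "[Na/Fe]", "[Mg/Fe]"]
chain = rng.normal(size=(5000, len(names)))
chain[:, 1] += 0.4 * chain[:, 2]                 # mock [Z/H]-IMF degeneracy

# Spearman rank coefficient and p-value for every pair of parameters.
results = []
for i, j in itertools.combinations(range(len(names)), 2):
    rho, pval = spearmanr(chain[:, i], chain[:, j])
    results.append((names[i], names[j], rho, pval))

# Report the "relevant" correlations, here defined as those with p < 0.1.
for a, b, rho, pval in sorted(results, key=lambda r: -abs(r[2])):
    if pval < 0.1:
        print(f"{a:8s} - {b:8s}  rho = {rho:+.2f}  (p = {pval:.3f})")
```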
When averaging correlations on the spectra with higher S/N (in the central regions), indeed, we found that around 30% of parameter pairs have a correlation with \(\rho>0.20\) with p-value \(p<0.1\), and only around 7% with \(\rho>0.40\). In five cases \(\rho>0.60\), they are: C-O (\(\rho=0.74\), mostly due to the extraction of these two elements from the same molecule CO, see Worthey et al., 2014), Mg-Fe, Ca-Fe, Fe-[Z/H] and Na-[Z/H]. In more detail, age correlates with [Z/H] (\(\rho=-0.45\)) and IMF slope (\(\rho=-0.48\)); [Z/H] correlates mostly with Fe, Na, C, Ca, Cr, Mg, Mn, O, Si; Na correlates with [Z/H], Fe, C, Mg and O; Mg correlates also with C, Ca, Ti, V and Cr. The IMF slope correlates moderately only with age, and mildly with Na and Ti (\(\rho\sim 0.30\)). The IMF with [Z/H] has \(\rho=0.21\) but with \(p=0.13\). These degeneracy indicators are similar to their analogs in Barbosa et al. (2021), obtained with different codes; however, the strength of each correlation show some differences. An example, as noted in the main text, is the correlation between [Z/H] and IMF slope which we measured \(\rho=0.21\) with \(p=0.15\) while Barbosa et al. (2021) report \(\rho=0.35\) with \(p=0.00\), indicating a more important internal degeneracy in the latter. ## Appendix E General correlations In Figure 18 we show some examples of the mutual correlations between the retrieved parameters in the form of a corner plot. Ellipses indicate the degree of correlation with their ellipticity proportional to the measured Spearman rank coefficient. We divided the results in center of NGC3309 (pink), center of NGC3311 (green) and the remaining outer regions and halo (blue) to highlight their differences. In general, it can be observed that there are no strong correlations apart from the center of NGC3309. This galaxy shows strong correlations, with \(\rho>0.6\), between [Z/H] and \(\sigma\), Na, IMF slope, C, Ti, and O; between \(\sigma\) and IMF slope, O, Ti, C, N and Na; and as a consequence, all the combinations of these quantities. There is also an important anti-correlation between age and [Z/H], IMF slope, and \(\sigma\). However, most of these correlations are not found in the center of NGC3311, which in some cases shows even opposite correlations. There is agreement between the two galaxies only for \(\sigma\)-IMF, [Z/H]-Ti, Ti-O, [Z/H]-N, Ti-N, Na-O and Na-N. On one hand, this evident difference is a sign that the kinematics radial profile plays a role in the distribution of elements. Indeed, as it will be further discussed in Section 5.4.2, there is large disagreement between the two galaxies correlations with \(\sigma\), with the exception of the IMF. On the other hand, looking more closely, for example, at correlations with the total metallicity, it can be noticed that, although they are different in the two galaxies for their slope and strength, there is a common positive trend with O, C, Ti, N and Na. This suggests that, although the metallicity trends are slightly different for the two objects, these elements track [Z/H]. This finding is in agreement with the MaNGA data results in Parikh et al. (2019), in particular for Na, N and partially for Ti. However, they do not find a local correlation with C. That N follows [Z/H] is expected since N can be enhanced by a delayed secondary production, activated only at higher metallicity (Johansson et al., 2012; Maiolino and Mannucci, 2019, and references therein). 
In the halo regions we do not see many strong correlations, with the exception of, e.g., Ca-[Z/H], Ca-Si, C-age, Na-Mg, Na-Ca, and Fe-N; we only note the excellent agreement among all regions in the correlation between N and Ti. This general disagreement between central and outer regions likely indicates different origins and star formation histories of their stellar content. Comparing the correlations in the centers of the two galaxies with our results described in Feldmeier-Krause et al. (2021), obtained from a sample of local ellipticals, we confirm the correlations between Na-O, Na-N, C-O (but affected by cross-correlation in the fit), N-O, C-Ti, and Na-C. We also observe the same [Z/H]-age anti-correlation, but much of it comes from the fit degeneracy.

Figure 17: Retrieved IMF vs TiO\(2_{sdss}\) index values in the centers of the two galaxies NGC3309 (diamonds) and NGC3311 (stars). Solid colored lines are the model values at different metallicity and age 13.5 Gyr. Dashed lines show the same trends when the [Ti/Fe] abundance is sub-solar, i.e. \(-0.3\) dex. All other abundances are solar. Points are color-coded as in Figure 6, with darker red in the center and orange in the outskirts. While in the very central regions TiO\(2_{sdss}\) values indicate a clear gradient in IMF and [Z/H] for NGC3309 and a flat trend for NGC3311 (confirmed by the retrieved results), in the outer regions where the metallicity is lower it is harder to constrain the IMF, since the same value of TiO\(2_{sdss}\) corresponds to a wide range of IMF.
2309.03270
A Swift Fix II: Physical Parameters of Type I Superluminous Supernovae
In November 2020, the Swift team announced a major update to the calibration of the UltraViolet and Optical Telescope (UVOT) data to correct for the gradual loss of sensitivity over time. Beginning in roughly 2015, the correction affected observations in the three near ultraviolet (UV) filters, reaching levels of up to 0.3 mag immediately prior to the correction. Over the same time period, an increasing number of Type I superluminous supernovae (SLSNe-I) were discovered and studied. Many SLSNe-I are hot (T$_\textrm{eff}$ $\approx 10,000$ K) near peak, and therefore accurate UV data are imperative towards properly understanding their physical properties and energetics. We re-compute Swift UVOT photometry for SLSNe-I discovered between 2014 and 2021 with at least 5 Swift observations in 2015 or later. We calculate host-subtracted magnitudes for each SLSN and fit their spectral energy distributions with modified blackbodies to obtain the radius and temperature evolution. We also fit multi-band photometry using the Modular Open Source Fitter for Transients (MOSFiT) to obtain key parameters such as the spin period (P), magnetic field strength (B), ejecta mass (M$_\textrm{ej}$), and kinetic energy (E$_\textrm{kin}$). From our MOSFiT modeling, we also estimate the peak UV/optical luminosity (L$_\textrm{peak}$) and total radiative energy (E$_\textrm{rad}$). Under the assumption of magnetar-powered SLSNe we find several strong trends, including anti-correlations between P and both L$_\textrm{peak}$ and E$_\textrm{rad}$, a correlation between E$_\textrm{kin}$ and E$_\textrm{rad}$, and an anti-correlation between B and E$_\textrm{rad}$.
Jason T. Hinkle, Benjamin J. Shappee, Michael A. Tucker
2023-09-06T18:00:02Z
http://arxiv.org/abs/2309.03270v2
# A _Swift_ Fix II: Physical Parameters of Type I Superluminous Supernovae

Jason T. Hinkle, Benjamin J. Shappee, Michael A. Tucker

###### Abstract

In November 2020, the _Swift_ team announced a major update to the calibration of the UltraViolet and Optical Telescope (UVOT) data to correct for the gradual loss of sensitivity over time. Beginning in roughly 2015, the correction affected observations in the three near ultraviolet (UV) filters, reaching levels of up to 0.3 mag immediately prior to the correction. Over the same time period, an increasing number of Type I superluminous supernovae (SLSNe-I) were discovered and studied. Many SLSNe-I are hot (\(\rm{T_{eff}\approx 10,000}\) K) near peak, and therefore accurate UV data are imperative towards properly understanding their physical properties and energetics. We re-compute _Swift_ UVOT photometry for SLSNe-I discovered between 2014 and 2021 with at least 5 _Swift_ observations in 2015 or later. We calculate host-subtracted magnitudes for each SLSN and fit their spectral energy distributions with modified blackbodies to obtain the radius and temperature evolution. We also fit multi-band photometry using the Modular Open Source Fitter for Transients (MOSFiT) to obtain key parameters such as the spin period (P), magnetic field strength (B), ejecta mass (\(\rm{M_{ej}}\)), and kinetic energy (\(\rm{E_{kin}}\)). From our MOSFiT modeling, we also estimate the peak UV/optical luminosity (\(\rm{L_{peak}}\)) and total radiative energy (\(\rm{E_{rad}}\)). Under the assumption of magnetar-powered SLSNe we find several strong trends, including anti-correlations between P and both \(\rm{L_{peak}}\) and \(\rm{E_{rad}}\), a correlation between \(\rm{E_{kin}}\) and \(\rm{E_{rad}}\), and an anti-correlation between B and \(\rm{E_{rad}}\).

Core-collapse supernovae (304) -- Near ultraviolet astronomy (1094) -- Supernovae (1668) -- Time domain astronomy (2109) -- Transient sources (1851)

## 1 Introduction

A core-collapse supernova (CCSN) marks the death of a massive star (e.g., Woosley et al., 2002; Heger et al., 2003; Smartt, 2009). The "typical" Type Ib/c and Type II CCSNe have been well-known for decades (e.g., Minkowski, 1941; Zwicky, 1964; Porter and Filippenko, 1987). However, a rare class of supernovae known as superluminous supernovae (SLSNe; Quimby et al., 2007, 2011; Gal-Yam, 2012) has been observed over the past \(\approx\)15 years, with peak luminosities roughly \(10-100\) times higher than those of normal Type Ia and core-collapse supernovae (e.g., Folatelli et al., 2010). The light curves of SLSNe often evolve more slowly than those of typical supernovae (Gal-Yam, 2019), on timescales of \(\sim 20-80\) days (e.g., Nicholl et al., 2017; Chen et al., 2022). Similar to normal SNe, the growing class of SLSNe can be divided into two main spectroscopic classes, those without hydrogen emission (SLSN-I; Quimby et al., 2007; Gal-Yam, 2012; Nicholl et al., 2017; Chen et al., 2022) and those with hydrogen emission (SLSN-II; Miller et al., 2009; Gezari et al., 2009). Their superluminous nature notwithstanding, such events would otherwise be classified as SNe Ic and SNe IIn respectively in most cases. Nevertheless, SLSNe-I exhibit unique pre- and near-peak spectra with very blue continua and strong O ii absorption features (Quimby et al., 2018; Gal-Yam, 2019).
Additional diversity in spectroscopic properties has been seen, such as the discovery of the SLSN-Ib subtype, lacking hydrogen lines but with strong helium absorption (Quimby et al., 2018; Yan et al., 2020). The radioactive decay thought to power normal supernovae (e.g., Hoyle and Fowler, 1960; Arnett, 1982) cannot generally explain the luminosities of observed SLSNe (Quimby et al., 2011). As such, several more exotic models have been put forth. These include a central engine - either the injection of energy from the spin-down of a magnetar (e.g., Kasen and Bildsten, 2010; Woosley, 2010) or accretion onto a newly-formed black hole (Dexter and Kasen, 2013), interactions between the SN ejecta and the circumstellar medium (CSM) (e.g., Chevalier and Irwin, 2011; Moriya et al., 2013), and the radioactive decay of unusually large amounts of \({}^{56}\)Ni from a pair-instability explosion (e.g., Barkat et al., 1967; Kasen et al., 2011; Woosley, 2017). As SLSN-II share many similarities with the Type IIn class of supernovae (Smartt, 2009; Gal-Yam, 2017), they are most likely powered by interactions with abundant CSM (e.g., Inserra et al., 2018). Conversely, the energy sources of SLSNe-I have proven more difficult to identify. Their spectra lack strong emission or absorption lines typically used to model SN photospheric evolution (Dessart et al., 2016; Woosley et al., 2021) and many proposed theories predict similar observables (e.g., Sukhbold and Woosley, 2016). Indeed, Chen et al. (2022) find that a majority of SLSNe-I are equally well fit by magnetar spin-down and CSM+\({}^{56}\)Ni decay models. A growing number of SLSNe-I exhibit bumps or undulations in their light curves, further confusing the problem. Such light curves are difficult to explain with a magnetar central engine (e.g., Nicholl et al., 2017), although some efforts have been made to extend the magnetar model (Dong et al., 2023). Instead, the light curve undulations may be more naturally explained by unstable accretion onto a BH or CSM interactions, although spectral predictions for such models are lacking (e.g., Gal-Yam, 2019). Furthermore, the recently-identified class of luminous supernovae (Gomez et al., 2022), with luminosities between those of typical CCSNe and SLSNe, can be powered by large amounts of \({}^{56}\)Ni or weak magnetar engines, suggesting an underlying continuum. Large samples of well-observed SLSNe are being compiled as all-sky surveys (e.g., ASAS-SN, ATLAS, and ZTF Shappee et al., 2014; Tonry et al., 2018; Bellm et al., 2019) and spectroscopic classification efforts (e.g., PESSTO, SCAT Smartt et al., 2015; Tucker et al., 2022) have expanded. This has allowed several population studies to be conducted (e.g., Nicholl et al., 2017; De Cia et al., 2018; Chen et al., 2022), generally finding that magnetar models can describe the light curves of most SLSNe-I while finding considerable diversity in the population (e.g., Yan et al., 2020; Chen et al., 2022, 2020). Notably, many SLSNe only have observer-frame optical data and those with observer-frame ultraviolet (UV) observations of SLSNe (often from _Swift_) occurred during the period in which the UltraViolet and Optical Telescope (UVOT) sensitivity calibration was affected (e.g., Hinkle et al., 2021). Given the strong UV emission of many SLSNe-I near peak, this motivates revisiting trends and correlations with corrected UV data. The paper is organized as follows. 
In Section 2 we discuss the sample selection and in Section 3 we discuss our reductions of the _Swift_ UVOT data. In Section 4 we describe our blackbody models of the SLSN SEDs. Section 5 details our modeling of the multi-band photometry with the Modular Open Source Fitter for Transients (MOSFiT; Nicholl et al., 2017; Guillochon et al., 2018). Section 6 presents several correlations between physical parameters. Finally, we provide conclusions in Section 7. Throughout this paper, we have used a cosmology with \(H_{0}=69.6\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.29\), and \(\Omega_{\Lambda}=0.71\) (Wright, 2006; Bennett et al., 2014).

## 2 Sample Selection

To create our sample of supernovae, we searched both the Open Supernova Catalog (Guillochon et al., 2017) and the Transient Name Server1 (TNS) for spectroscopically-classified SLSNe-I discovered between 2014 and 2021, with _Swift_ data taken in 2015 or later. We then limited our sample to objects which had five or more epochs of _Swift_ UVOT photometry, to allow us to robustly measure the evolution of the UV fluxes. This threshold yielded 27 SLSNe-I. As its physical origin remains a matter of debate, we exclude the ambiguous source ASASSN-15lh (Dong et al., 2016; Leloudas et al., 2016) from our sample. Table 1 lists these objects, along with the appropriate references for the source classification and, when available, the discovery papers publishing _Swift_ photometry. The redshifts for the SLSNe in our sample were typically taken from the Open Supernova Catalog or publicly available classification spectra on TNS, but for some sources without such spectra we used the redshift listed in the appropriate discovery paper. In Table 1, we also note which SLSNe have undulations in their light curves, either those noted in the literature or ones with clear undulations in survey light curves for sources without published papers.

Footnote 1: [https://www.wis-tns.org/](https://www.wis-tns.org/)

## 3 _Swift_ UVOT Data

Six of the _Swift_ UVOT (Roming et al., 2005; Poole et al., 2008) filters are typically used for photometric follow-up of transient sources: \(V\) (5425.3 A), \(B\) (4349.6 A), \(U\) (3467.1 A), \(UVW1\) (2580.8 A), \(UVM2\) (2246.4 A), and \(UVW2\) (2054.6 A). The wavelengths quoted here are the pivot wavelengths calculated by the SVO Filter Profile Service (Rodrigo et al., 2012), which we use throughout the remainder of this work. Many of the SLSNe in our sample have epochs with each of these filters, although some objects only used a subset of the full filter set. Additionally, for some objects, filters with early non-detections were dropped for late-time epochs. The majority of UVOT epochs include multiple observations in each filter. We separately combined the images in each filter for each unique observation identification number using the HEASoft uvotimsum package. We then used the uvotsource package to extract source counts in a region centered on the position of the transient and background counts using a source-free region with a radius of \(\approx 30-40\arcsec\). We used a source radius of \(5\arcsec\) to minimize UVOT aperture corrections. We then converted the UVOT count rates into fluxes and magnitudes using typical calibrations (Poole et al., 2008; Breeveld et al., 2010). For each UVOT image, we confirmed that the source did not lie on a region of the detector with known sensitivity issues2 (also see the Appendix of Edelson et al., 2015).
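The count-rate-to-flux conversion at the end of this step can be summarized with a short sketch. Only the pivot wavelengths quoted above come from the text; the zero point in the example is an arbitrary placeholder and must be replaced with the published UVOT calibration values (Poole et al., 2008; Breeveld et al., 2010).

```python
import numpy as np

C_ANGSTROM = 2.99792458e18   # speed of light [Angstrom/s]
PIVOT = {"V": 5425.3, "B": 4349.6, "U": 3467.1,
         "UVW1": 2580.8, "UVM2": 2246.4, "UVW2": 2054.6}   # Angstrom

def countrate_to_mag_flux(rate, band, zeropoint_ab):
    """Convert a UVOT count rate [counts/s] to an AB magnitude and f_lambda.

    zeropoint_ab is a placeholder argument: substitute the published UVOT
    zero point for the band (Poole et al. 2008; Breeveld et al. 2010).
    """
    mag_ab = zeropoint_ab - 2.5 * np.log10(rate)
    fnu = 3631.0e-23 * 10 ** (-0.4 * mag_ab)            # erg/s/cm^2/Hz
    flam = fnu * C_ANGSTROM / PIVOT[band] ** 2          # erg/s/cm^2/A
    return mag_ab, flam

# Example: a hypothetical 1.5 counts/s detection in UVM2 with an assumed
# (illustrative) AB zero point of 18.5 mag.
mag, flam = countrate_to_mag_flux(1.5, "UVM2", zeropoint_ab=18.5)
print(f"m_AB = {mag:.2f}, f_lambda = {flam:.3e} erg/s/cm^2/A")
```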
Our raw _Swift_ photometry, uncorrected for the host-galaxy flux contribution and Galactic foreground extinction, is shown in Table 2. Footnote 2: [https://swift.gsfc.nasa.gov/analysis/uvot_digest/sss_check.html](https://swift.gsfc.nasa.gov/analysis/uvot_digest/sss_check.html) ### Host-Galaxy UV Contribution \begin{table} \begin{tabular}{c c c c c c} \hline \hline Object & TNS ID & Redshift & Right Ascension & Declination & References \\ \hline ATLAS18yff & SN2018hti & 0.063 & 03:40:53.76 & +11:46:37.38 & Lin et al. (2020) \\ ATLAS18unu1 & SN2018ibb & 0.1586 & 04:38:56.950 & \(-\)20:39:44.10 & Schulze et al. (2023) \\ ATLAS19ine & SN2019enz & 0.22 & 13:57:06.081 & +27:59:38.07 & Nicholl et al. (2019b) \\ ATLAS19pr2 & SN2019lsq & 0.14 & 00:04:40.6 & +42:52:11.35 & Chen et al. (2022a) \\ ATLAS19ynd3 & SN2019szu & 0.213 & 00:10:13.14 & \(-\)19:41:32.46 & Chen et al. (2022a) \\ ATLAS20xqi & SN2020mr & 0.27 & 00:40:00.187 & \(-\)14:35:25.14 & Chen et al. (2022a) \\ ATLAS202st & SN2020ctw & 0.0645 & 15:28:17.080 & +39:56:50.53 & Perley et al. (2020) \\ DES1552nr &... & 0.22 & 02:40:44.62 & \(-\)00:53:26.4 & D’Andrea et al. (2015) \\ Gaia16apd2 & SN2016eaay & 0.102 & 12:02:51.70 & +44:15:27.4 & Nicholl et al. (2017b); Kangas et al. (2017); Yan et al. (2017) \\ Gaia17biu & SN2017egm & 0.030721 & 10:19:05.620 & +46:27:14.08 & Nicholl et al. (2017a), Bose et al. (2018) \\ Gaia17cbp3 & SN2017gci & 0.09 & 06:46:45.030 & \(-\)27:14:55.86 & Fiore et al. (2021) \\ Gaia18beg & SN2018bgr & 0.0795 & 11:02:30.290 & +55:35:55.79 & Lunnan et al. (2020) \\ iPTF15esb4 & SN2016wi & 0.224 & 07:58:50.67 & +66:07:39.1 & Liu et al. (2017); Yan et al. (2017) \\ LSQ14mo &... & 0.253 & 10:22:41.53 & \(-\)16:55:14.4 & Chen et al. (2017) \\ PS15ae5 & SN2016ard & 0.2025 & 14:10:44.558 & \(-\)10:09:35.42 & Blanchard et al. (2018) \\ PS16dnq & SN2016els & 0.217 & 20:30:13.925 & \(-\)10:57:01.81 & Fraser et al. (2016) \\ PS22bca & SN2020zh & 0.159 & 19:07:49.550 & +62:57:49.61 & Perez-Fournon et al. (2020) \\ ZTF20acphdcg & SN2020nznr & 0.1 & 07:19:06.420 & 23:53:07.37 & Gromadzki et al. (2020) \\ ZTF20acpyldh & SN2020abjc & 0.219 & 09:28:00.274 & +14:07:16.62 & Blanchard et al. (2020a) \\ ZTF21aaarmti & SN2021ek & 0.193 & 03:23:49.914 & \(-\)10:02:41.18 & Srivastav et al. (2021) \\ ZTF21abaiono & SN2021lwz & 0.065 & 09:44:47.390 & \(-\)34:42:44.21 & Perley et al. (2021) \\ ZTF21acwovq & SN2021zcl & 0.117 & 05:09:14.458 & \(-\)06:03:13.87 & Gromadzki et al. (2021) \\ \hline \end{tabular} Note. – The 27 SLSNe-I we re-analyze in this manuscript. TNS ID is the ID given for objects reported on the Transient Name Server. References include the discovery papers and papers using _Swift_ data taken in 2015 or later. For objects without a discovery paper or inclusion in a survey paper, we cite the initial classification of a SLSNe-I. _If using the revised photometry presented here, please cite both this paper and the original paper(s) in which Swift photometry was published. \end{table} Table 1: Sample of Objects To compute accurate transient photometry, we require an estimate of the host-galaxy flux in each bandpass. By subtracting this host-galaxy flux from the _Swift_ photometry, we can isolate the supernova flux. We estimated the host-galaxy flux in the _Swift_ bands in two main ways. Some SLSNe-I had late-time _Swift_ exposures of the host-galaxy, often targeted specifically to estimate the host-galaxy flux. 
For these sources, we directly measured the _Swift_ photometry of the host using the same source and background regions as the reductions for the supernova. These magnitudes are shown in Table 3. For SNe without late-time _Swift_ data, we collected archival photometric data to fit the host-galaxy spectral energy distribution (SED) with stellar population synthesis models. We used gPhoton (Million et al., 2016) to measure UV fluxes from Galaxy Evolution Explorer (GALEX; Martin et al., 2005) data. We obtained optical catalog photometry from the Sloan Digital Sky Survey (SDSS) Data Release 16 (\(ugriz\); Ahumada et al., 2020), Pan-STARRS (\(grizY\); Chambers et al., 2016), or the Dark Energy Survey (\(grizY\); Abbott et al., 2018) depending on the source position. When possible, we obtained infrared \(JHK_{s}\) photometry from the VISTA Hemisphere Survey (McMahon et al., 2013) and \(W1\) and \(W2\) magnitudes from the Wide-field Infrared Survey Explorer (WISE; Wright et al., 2010) AllWISE catalog. As the hosts of SLSNe-I are typically faint, dwarf galaxies at moderate redshift (e.g., Perley et al., 2016; Taggart and Perley, 2021), many hosts do not have solid photometry in all bands. We list the archival photometry used in the host-galaxy fits in Table 4. We fit the available UV through IR photometry for each SN host with the Fitting and Assessment of Synthetic Templates code (FAST; Kriek et al., 2009). For our fits we assumed a Cardelli et al. (1989) extinction law with \(\rm R_{V}=3.1\) and Galactic extinction at the coordinates of the host galaxy (Schlafly and Finkbeiner, 2011), a Salpeter IMF (Salpeter, 1955), an exponentially declining star-formation rate, and the Bruzual and Charlot (2003) stellar population models. To estimate the host-galaxy flux in each of the _Swift_ UVOT filters, we computed synthetic photometry using the best-fit host SED from FAST and the UVOT filter response curves from the Spanish Virtual Observatory (SVO) Filter Profile Service (Rodrigo et al., 2012). 
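A minimal sketch of this synthetic-photometry step, using a photon-weighted mean flux density and the AB zero point, is shown below. Both the SED and the box-shaped filter curve are mock inputs standing in for the FAST best-fit spectrum and the SVO response curves.

```python
import numpy as np

C_ANGSTROM = 2.99792458e18  # speed of light [Angstrom/s]

def synthetic_ab_mag(wave, flam, filt_wave, filt_trans):
    """Photon-weighted synthetic AB magnitude of an SED through a filter.

    wave, flam            : SED wavelength [A] and f_lambda [erg/s/cm^2/A]
    filt_wave, filt_trans : filter wavelength grid [A] and transmission
    """
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    # Photon-weighted mean f_lambda and the filter pivot wavelength:
    mean_flam = np.trapz(flam * trans * wave, wave) / np.trapz(trans * wave, wave)
    pivot_sq = np.trapz(trans * wave, wave) / np.trapz(trans / wave, wave)
    fnu = mean_flam * pivot_sq / C_ANGSTROM            # erg/s/cm^2/Hz
    return -2.5 * np.log10(fnu / 3631.0e-23)           # AB zero point: 3631 Jy

# Mock inputs: a flat-f_nu (constant AB magnitude) SED and a box-shaped
# stand-in for a near-UV response curve.
wave = np.linspace(1500.0, 7000.0, 5000)
fnu_in = 3631.0e-23 * 10 ** (-0.4 * 20.0)              # a 20th-mag AB source
flam = fnu_in * C_ANGSTROM / wave**2
filt_wave = np.linspace(2000.0, 2500.0, 100)
filt_trans = np.ones_like(filt_wave)

print(f"synthetic m_AB = {synthetic_ab_mag(wave, flam, filt_wave, filt_trans):.2f}")
```

For a flat-\(f_{\nu}\) input the routine returns the input magnitude (20.00 in this mock case), which serves as a basic sanity check of the bookkeeping.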
To obtain uncertainties \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Object} & \multicolumn{1}{c}{TNS ID} & \multicolumn{1}{c}{MJD} & Filter & Magnitude & Uncertainty & Flux Density & Uncertainty \\ & & & & & & (erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\)) & erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\) \\ \hline \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.741 & V & 17.59 & 0.12 & 3.40E-16 & 3.74E-17 \\ ATLAS18unu & SN2018ibb & 58472.714 & V & 17.76 & 0.14 & 2.91E-16 & 3.73E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.737 & B & 17.76 & 0.08 & 4.52E-16 & 3.32E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.736 & U & 18.44 & 0.08 & 3.81E-16 & 2.79E-17 \\ ATLAS18unu & SN2018ibb & 58472.711 & U & 18.53 & 0.09 & 3.50E-16 & 2.89E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.733 & UVW1 & 19.78 & 0.11 & 2.00E-16 & 2.02E-17 \\ ATLAS18unu & SN2018ibb & 58472.710 & UVW1 & 20.14 & 0.13 & 1.43E-16 & 1.71E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.742 & UVM2 & 20.60 & 0.13 & 1.24E-16 & 1.48E-17 \\ ATLAS18unu & SN2018ibb & 58472.714 & UVM2 & 20.74 & 0.13 & 1.09E-16 & 1.30E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.737 & UVW2 & 20.86 & 0.14 & 1.17E-16 & 1.50E-17 \\ ATLAS18unu & SN2018ibb & 58472.712 & UVW2 & 21.29 & 0.19 & 7.85E-17 & 1.37E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ \hline \end{tabular} Note. –_Swift_ photometry of the SNe without the host flux subtracted and with no correction for Galactic extinction. For epochs where the flux was less than a 3\(\sigma\) detection, the magnitude column shows a 3\(\sigma\) upper limit on the magnitude. All magnitudes are presented in the AB system, using published conversions for systems naturally in the Vega system. The data for each source are grouped by filter and sorted by increasing MJD. Here we show the SLSN ATLAS18unu (SN2018ibb) to illustrate the format. The full table is available as an ancillary file. \end{table} Table 2: Unsubtracted _Swift_ Photometry for the host-galaxy fluxes, we did Monte Carlo sampling by perturbing the archival host fluxes assuming Gaussian errors and running 1000 different FAST iterations for each host galaxy. The synthetic _Swift_ UVOT magnitudes computed for each object are shown in Table 5. Two of our sources, SN2020rmv and SN2020abjc, have no optical survey detections of their host galaxy, consistent with their apparently hostless nature. This suggests that they will not have considerable host-galaxy contamination in the _Swift_ images. To confirm this, we used gPhoton (Million et al., 2016) to compute 3\(\sigma\) limits on the UV magnitudes at the SN location from pre-explosion GALEX data. For SN2020rmv we find upper limits of \(>\)22.65 mag and \(>\)22.42 mag in the NUV and FUV bands respectively. For SN2020abjc the corresponding limits are \(>\)22.61 mag and \(>\)23.20 mag. 
Given the limiting magnitude of _Swift_ UVOT for a typical exposure time, the host galaxies for these sources contribute negligibly to the measured fluxes. Our _Swift_ photometry with the host-galaxy flux contribution subtracted and corrected for Galactic foreground extinction is shown in Table 6. ## 4 Spectral Energy Distribution Fitting The SEDs of SLSNe show luminous UV/optical emission, with a UV excess well above what is typical for Type I supernovae, in some cases accounting for a majority of the emitted luminosity (e.g., Yan et al., 2017). In addition, this UV emission often persists at significant levels even after peak emission (e.g., Yan et al., 2017; Smith et al., 2018), extremely rare for less luminous classes of supernovae. As the optical spectra of SLSNe-I are relatively featureless, a blackbody is often \begin{table} \begin{tabular}{c c c c c} \hline \hline Object & TNS ID & Filter & Magnitude & Uncertainty \\ \hline \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18umu & SN2018ibb & g(DES) & 23.25 & 0.08 \\ ATLAS18umu & SN2018ibb & r(DES) & 21.98 & 0.03 \\ ATLAS18umu & SN2018ibb & i(DES) & 21.17 & 0.03 \\ ATLAS18umu & SN2018ibb & z(DES) & 20.81 & 0.05 \\ ATLAS18umu & SN2018ibb & Y(DES) & 20.81 & 0.15 \\ ATLAS18umu & SN2018ibb & J(VISTA) & 19.87 & 0.17 \\ ATLAS18umu & SN2018ibb & K\({}_{s}\)(VISTA) & 19.02 & 0.22 \\ ATLAS18umu & SN2018ibb & W1 & 19.29 & 0.08 \\ ATLAS18umu & SN2018ibb & W2 & 19.89 & 0.26 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ \hline \end{tabular} Note. – Archival UV, optical, and infrared photometry used in the FAST SED fits for our objects. All magnitudes are presented in the AB system, using published conversions for systems naturally in the Vega system. Here we show the SLSN ATLAS18unu (SN2018ibb) to illustrate the format. The full table is available as an ancillary file. \end{table} Table 4: Archival Host-Galaxy Multi-wavelength Photometry \begin{table} \begin{tabular}{c c c c c} \hline \hline Object & TNS ID & Filter & Magnitude & Uncertainty \\ \hline \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18yff & SN2018hti & V & 17.79 & 0.17 \\ ATLAS18yff & SN2018hti & B & 19.88 & 0.45 \\ ATLAS18yff & SN2018hti & U & 20.54 & 0.38 \\ ATLAS18yff & SN2018hti & UVW1 & 22.03 & 0.52 \\ ATLAS18yff & SN2018hti & UVM2 & 23.51 & 0.78 \\ ATLAS18yff & SN2018hti & UVW2 & 22.71 & 0.46 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ \hline \end{tabular} Note. – Measured photometry from _Swift_ epochs without present transient flux. All magnitudes are presented in the AB system, using published conversions for _Swift_. Here we show the SLSN ATLAS18yff (SN2018hti) to illustrate the format. The full table is available as an ancillary file. 
\end{table} Table 3: Measured Host-Galaxy _Swift_ Photometry \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Object & TNS ID & MJD & Filter & Magnitude & Uncertainty & Flux Density & Uncertainty \\ & & & & & (erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\)) & erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\)) & \\ \hline \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.741 & V & 17.52 & 0.12 & 3.57E-16 & 3.98E-17 \\ ATLAS18unu & SN2018ibb & 58472.714 & V & 17.69 & 0.14 & 3.05E-16 & 3.97E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.737 & B & 17.65 & 0.08 & 4.97E-16 & 3.66E-17 \\ ATLAS18unu & SN2018ibb & 58472.711 & B & 17.71 & 0.08 & 4.70E-16 & 3.46E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.736 & U & 18.30 & 0.08 & 4.32E-16 & 3.18E-17 \\ ATLAS18unu & SN2018ibb & 58472.711 & U & 18.39 & 0.09 & 3.98E-16 & 3.29E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.733 & UVW1 & 19.59 & 0.11 & 2.38E-16 & 2.41E-17 \\ ATLAS18unu & SN2018ibb & 58472.710 & UVW1 & 19.95 & 0.13 & 1.71E-16 & 2.04E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & 58464.742 & UVM2 & 20.33 & 0.13 & 1.58E-16 & 1.89E-17 \\ ATLAS18unu & SN2018ibb & 58472.714 & UVM2 & 20.47 & 0.13 & 1.39E-16 & 1.66E-17 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ \hline \end{tabular} Note. –_Swift_ photometry of the transients with the host flux subtracted corrected for Galactic extinction. The uncertainties incorporate both the error on the photometry and from the host SED fits. For epochs where the transient flux was less than a 3\(\sigma\) detection, the magnitude column shows a 3\(\sigma\) upper limit on the transient magnitude. All magnitudes are presented in the AB system, using published conversions for systems naturally in the Vega system. The data for each source are grouped by filter and sorted by increasing MJD. Here we show the SLSN ATLAS18unu (SN2018ibb) to illustrate the format. The full table is available as an ancillary file. \end{table} Table 6: Host-Subtracted _Swift_ Photometry \begin{table} \begin{tabular}{c c c c} \hline \hline Object & TNS ID & Filter & Magnitude & Uncertainty \\ \hline \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ ATLAS18unu & SN2018ibb & V & 22.40 & 0.02 \\ ATLAS18unu & SN2018ibb & B & 23.89 & 0.03 \\ ATLAS18unu & SN2018ibb & U & 25.45 & 0.03 \\ ATLAS18unu & SN2018ibb & UVW1 & 27.67 & 0.03 \\ ATLAS18unu & SN2018ibb & UVM2 & 28.37 & 0.03 \\ ATLAS18unu & SN2018ibb & UVW2 & 28.45 & 0.03 \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) \\ \hline \end{tabular} Note. –_Swift_ photometry of the transients with the host flux subtracted corrected for Galactic extinction. The uncertainties incorporate both the error on the photometry and from the host SED fits. For epochs where the transient flux was less than a 3\(\sigma\) detection, the magnitude column shows a 3\(\sigma\) upper limit on the transient magnitude. All magnitudes are presented in the AB system, using published conversions for systems naturally in the Vega system. The data for each source are grouped by filter and sorted by increasing MJD. 
Here we show the SLSN ATLAS18unu (SN2018ibb) to illustrate the format. The full table is available as an ancillary file. \end{table} Table 5: Synthetic Host-Galaxy _Swift_ Magnitudes times a reasonable assumption for fitting the SED. This allows for straightforward estimates of the effective temperature and radius of the emitting region. ### Modified Blackbody Fits In the past several years, the number of rest-frame UV spectra of SLSNe-I has increased significantly, both due to deep surveys discovering faint objects at high redshift and all-sky surveys finding nearby, bright objects that can be observed by the Hubble Space Telescope (e.g., Yan et al., 2018). These UV spectra show two key features. One is a number of broad absorption features from species such as C ii, C iii, Ti iii, Si ii, and Mg ii (e.g., Yan et al., 2017, 2018; Smith et al., 2018). The other is that the FUV emission from SLSNe-I is suppressed as compared to a single blackbody fit to the optical and NUV emission (Chomiuk et al., 2011; Yan et al., 2017, 2018). This is likely due to a combination of blended absorption lines, metal-line blanketing (Hillier and Miller, 1998; Mazzali, 2000), and scattering of the UV photons within the expanding photosphere (Bufano et al., 2009). While the UV emission from SLSNe-I is suppressed, the line blanketing is significantly less than for SNe Ia, indicating low metallicity both in the progenitor star and the newly synthesized heavy element content of the ejecta (Yan et al., 2017). Many studies fit the UV/optical SED using a modified blackbody, to account for the suppression of the blackbody flux at short wavelengths. This ensures that blackbody fits to the full SED do not yield anomalously low temperatures. While a modified blackbody function with free parameters for the UV cutoff wavelength and slope (e.g., Yan et al., 2018) can allow for statistically better fits, the increase in parameters relative to the small number of UV data points expands the uncertainties on the luminosity, temperature, and radius dramatically. Instead, we adopt the prescription of Nicholl et al. (2017) to fit the _Swift_ UVOT UV/optical SEDs of our supernovae. This form of the modified blackbody assumes a simple linear UV suppression at wavelengths below 3000 A, which is both a reasonable choice for typical SLSNe-I (Chomiuk et al., 2011; Yan et al., 2017, 2018) and is consistent with the SED assumption for the light curve fitting with MOSFiT (Guillochon et al., 2018; Nicholl et al., 2017) to be discussed in Section 5. We fit each epoch of _Swift_ UVOT photometry using Markov Chain Monte Carlo (MCMC) methods (Foreman-Mackey et al., 2013) and a forward modeling approach. This accounts for the red leaks present in the \(UVW2\) and \(UVW1\) filters that may affect the photometry more significantly as the SN cools with time. We used the Spanish Virtual Observatories Filter Profile Service (Rodrigo et al., 2012) to obtain the _Swift_ UVOT filter response functions. We excluded ground-based optical data from our SED fits to both avoid de-weighting the UV data that is most important for an accurate temperature determination and mitigate cross-calibration issues between the _Swift_ data and a heterogeneous sample of optical follow-up data. We only include data with \(>2\sigma\) detections to ensure robust luminosity and temperature estimates. As the SLSNe DES15S2nr, iPTF15esb, and ZTF21accwovq have no detected UV emission in their Swift epochs, we exclude them from the remainder of the analysis. 
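To make the SED assumption explicit, the sketch below implements a blackbody whose flux is scaled down linearly with wavelength blueward of 3000 A, which is the schematic form of the suppression described above. The exact normalization and any additional free parameters used in the published fitting codes may differ, so this is an illustration rather than the fitting model itself.

```python
import numpy as np

H = 6.62607015e-27    # Planck constant [erg s]
C = 2.99792458e10     # speed of light [cm/s]
KB = 1.380649e-16     # Boltzmann constant [erg/K]
CUTOFF_CM = 3000e-8   # 3000 Angstrom, expressed in cm

def planck_flam(wave_cm, temp):
    """Blackbody surface flux density, pi * B_lambda [erg/s/cm^2/cm]."""
    x = H * C / (wave_cm * KB * temp)
    return np.pi * 2.0 * H * C**2 / wave_cm**5 / np.expm1(x)

def modified_blackbody_llam(wave_cm, temp, radius_cm):
    """Luminosity density of a photosphere of temperature temp and radius
    radius_cm, with the flux suppressed linearly in wavelength blueward of
    3000 A (schematic form of the linear UV suppression described above)."""
    llam = 4.0 * np.pi * radius_cm**2 * planck_flam(wave_cm, temp)
    suppression = np.where(wave_cm < CUTOFF_CM, wave_cm / CUTOFF_CM, 1.0)
    return llam * suppression

# Example: pseudo-bolometric luminosity of a 12,000 K, 3e15 cm photosphere.
wave = np.linspace(500e-8, 25000e-8, 5000)   # 500 A to 2.5 microns, in cm
lum_lam = modified_blackbody_llam(wave, 1.2e4, 3.0e15)
print(f"L ~ {np.trapz(lum_lam, wave):.2e} erg/s")
```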
Figure 1 shows the bolometric luminosity (L\({}_{\rm bol}\)), effective radius (R\({}_{\rm eff}\)), and effective temperature (T\({}_{\rm eff}\)) evolution for our sample of SLSNe, which are detailed in Table 7. We find a wide range of luminosities and decline rates within the luminosity evolution. Two outliers, one epoch each for OGLE16dmu and PS15ae have not been shown in the figure, but are included in the table. The blackbody radii generally increase with time as the SN ejecta expands. Some objects show a late-time plateau in R\({}_{\rm eff}\), possibly indicating that they are entering the nebular phase (e.g., Nicholl et al., 2019). Conversely, the blackbody temperatures for nearly all objects decreases over time. The general temporal evolution seen in L\({}_{\rm bol}\), R\({}_{\rm eff}\), and T\({}_{\rm eff}\) is consistent with blackbody fits to other samples of SLSNe-I (e.g., Lunnan et al., 2018; Chen et al., 2022). ### Temperature and Radius at Peak Light With the temporal evolution of the modified blackbody parameters show in Figure 1, we calculated the temperature and radius at the peak bolometric emission for each SN. To do this, we first bolometrically-corrected optical ground-based light curves for each supernova by scaling the optical photometry to match the interpolated bolometric luminosity derived from the modified blackbody fits, similar to previous transient studies (e.g., Holoien et al., 2020; Hinkle et al., 2021). From this higher cadence bolometric light curve, often with pre-peak constraints, we fit for the time of peak luminosity using a generic magnetar model (Ostriker and Gunn, 1971; Kasen and Bildsten, 2010). We initially fit the full light curve to establish an initial estimate of the peak time and then restricted the fit to within \(-20\) and \(+30\) days of the estimated peak. We found this to generally return reliable estimates of the peak time. For two objects, SN2016els and SN2021ahpl, these fits were not reliable and we took the epoch of maximum luminosity as our estimate of the peak time. We then measured the temperature and radius at peak. We linearly interpolated the temperature with 5 days of peak on either side and then fit a line. The temperature and radius at peak were taken to be the value of this line at peak emission. We estimated the uncertainty on these values by taking the standard deviation of 3000 Monte Carlo iterations of this linear fit. To ensure robust estimates of the temperatures and radii at peak, we only show the 13 supernova with _Swift_ data prior to peak and two objects, PS16aqv and SN2018bgy, having _Swift_ data within 5 rest-frame days of peak. The rest of the sample either has the first _Swift_ epoch more than 5 rest-frame days after peak or has a highly uncertain time of peak. Histograms of T\({}_{\rm eff}\) and R\({}_{\rm eff}\) at peak are shown in Figure 2, excluding the SLSNe without _Swift_ data sufficiently close to peak light. Along with the histograms, we show kernel density estimates (KDE) of the underlying distribution computed using scipy.stats.gaussian_kde and Scott's Rule (Scott, 1992). We find that R\({}_{\rm eff}\) at peak spans \((9-200)\times 10^{14}\) cm with a peak in the radius distribution at \(\approx 4\times 10^{15}\) cm, consistent with previous results (e.g., Lunnan et al., 2018; Chen et al., 2022). 
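A minimal sketch of the peak-epoch estimate described above, assuming hypothetical arrays of rest-frame epochs, temperatures, and uncertainties, is given below; the same routine can be applied to the radius.

```python
import numpy as np

def value_at_peak(t, y, yerr, t_peak, window=5.0, n_mc=3000, seed=0):
    """Linear fit to y(t) within +/- window days of t_peak, evaluated at
    t_peak; the uncertainty is the scatter over n_mc resampled fits."""
    rng = np.random.default_rng(seed)
    sel = np.abs(t - t_peak) <= window
    vals = np.empty(n_mc)
    for k in range(n_mc):
        y_pert = y[sel] + rng.normal(0.0, yerr[sel])
        slope, intercept = np.polyfit(t[sel], y_pert, 1)
        vals[k] = slope * t_peak + intercept
    return vals.mean(), vals.std()

# Hypothetical temperature measurements around a peak at t = 0 days.
t = np.array([-6.0, -3.0, -1.0, 2.0, 4.0, 7.0])            # rest-frame days
T = np.array([14.8, 14.1, 13.7, 13.0, 12.6, 11.9]) * 1e3   # K
T_err = np.full(T.shape, 400.0)

T_peak, T_peak_err = value_at_peak(t, T, T_err, t_peak=0.0)
print(f"T_eff(peak) = {T_peak:.0f} +/- {T_peak_err:.0f} K")
```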
T\({}_{\rm eff}\) values at peak span \(\approx 7-20\) kK with a central peak at \(\approx 11\) kK and a hotter inflection in the distribution at \(\approx 18\) kK, all consistent with earlier work (e.g., Lunnan et al., 2018; Chen et al., 2022).

### Comparison of Blackbody and Modified Blackbody Fits

In addition to the modified blackbody models used above, we also fit each SLSN with a simple blackbody to compare the two SED models for a sample of SLSNe well-observed in the UV. In general, we find that many objects are equally well fit by both models, with 49% of epochs having a \(|\Delta\chi^{2}|/{\rm d.o.f}<0.2\) between the modified and simple blackbody fits. Increasing the agreement threshold to \(|\Delta\chi^{2}|/{\rm d.o.f}<0.3\) yields 71% of objects. Nevertheless, across all _Swift_ epochs, a modified blackbody model is preferred 62% of the time. Similarly, if we look at the median \(\chi^{2}/{\rm d.o.f}\) per object, 16 out of 23 SLSNe, or 70% of our objects, prefer a modified blackbody fit as compared to a simple blackbody. This, combined with the direct measurements of SLSNe-I SEDs from UV spectroscopy, suggests that a simple blackbody does not provide a sufficient description of the UV emission from SLSNe-I.

## 5 Magnetar Modeling with MOSFiT

Beyond the modified blackbody fits to the SEDs of the SLSNe-I in our sample, we want to estimate physical parameters of the supernova explosion. One of the most commonly used models for SLSNe-I is the magnetar model (e.g., Kasen & Bildsten, 2010; Woosley, 2010; Nicholl et al., 2017). In such a model, a rapidly spinning neutron star with a large magnetic field (i.e. a magnetar) injects energy from its spin-down into the supernova ejecta. When the magnetar spin-down timescale and the diffusion time within the ejecta are well-matched, this powers transient emission significantly brighter than a typical core-collapse supernova (Kasen & Bildsten, 2010; Woosley, 2010). Here, we use the Modular Open Source Fitter for Transients (MOSFiT; Guillochon et al., 2018) to fit the observed emission from our SLSNe. In particular, we use the slsn model developed by Nicholl et al. (2017). The MOSFiT slsn model operates by initializing a magnetar central engine which spins down with time (e.g., Chatzopoulos et al., 2012; Inserra et al., 2013). This spin-down energy is then diffused through the ejecta (e.g., Arnett, 1982; Inserra et al., 2013; Wang et al., 2015) to calculate a bolometric luminosity. MOSFiT then generates a temperature and radius from the physical parameters to create a transient SED.
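To make the ingredients of such a model concrete, the sketch below combines a magnetar spin-down input luminosity with a simple Arnett-style diffusion integral to produce a bolometric light curve. The coefficients for the initial rotational energy and spin-down time are order-of-magnitude dipole values, and the treatment of leakage, opacity, and the SED is omitted, so this is an illustration of the structure of the model rather than a reimplementation of the MOSFiT slsn code.

```python
import numpy as np

DAY = 86400.0

def magnetar_input(t, p_ms, b14):
    """Magnetar spin-down luminosity [erg/s] at time t [s].

    Rough dipole scalings (coefficients are indicative only):
    E_p ~ 2e52 erg / P_ms^2 and t_p ~ 4.7e5 s * P_ms^2 / B_14^2.
    """
    e_p = 2.0e52 / p_ms**2
    t_p = 4.7e5 * p_ms**2 / b14**2
    return (e_p / t_p) / (1.0 + t / t_p) ** 2

def diffused_lightcurve(t, engine_lum, t_diff_days):
    """Arnett-style diffusion of a central-engine input through the ejecta:
    L(t) = (2/t_d^2) exp(-t^2/t_d^2) * integral_0^t L_in(t') exp(t'^2/t_d^2) t' dt'.
    """
    td = t_diff_days * DAY
    integrand = engine_lum * np.exp((t / td) ** 2) * t
    cumulative = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    return (2.0 / td**2) * np.exp(-(t / td) ** 2) * cumulative

# Example: a P = 2.4 ms, B = 1.2e14 G magnetar and a 40-day diffusion time.
t = np.linspace(1.0, 400.0, 4000) * DAY
lum = diffused_lightcurve(t, magnetar_input(t, p_ms=2.4, b14=1.2), 40.0)
print(f"peak L ~ {lum.max():.2e} erg/s at t ~ {t[np.argmax(lum)]/DAY:.0f} d")
```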
For the slsn model, the SED is a modified blackbody with a linear UV flux suppres \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Object} & \multicolumn{1}{c}{TNS ID} & \multicolumn{1}{c}{MJD} & \multicolumn{1}{c}{log(L)} & \multicolumn{1}{c}{dlog(L\({}_{l}\))} & \multicolumn{1}{c}{dlog(L\({}_{u}\))} & \multicolumn{1}{c}{log(R)} & \multicolumn{1}{c}{dlog(R\({}_{l}\))} & \multicolumn{1}{c}{dlog(R\({}_{u}\))} & \multicolumn{1}{c}{log(T)} & \multicolumn{1}{c}{dlog(T\({}_{l}\))} & \multicolumn{1}{c}{dlog(T\({}_{u}\))} \\ & & & \multicolumn{3}{c}{log([erg s\({}^{-1}\)])} & & \multicolumn{3}{c}{log([cm])} & & \multicolumn{3}{c}{log([K])} \\ \hline ATLAS18unu & SN2018ibb & 58464.7 & 44.293 & 0.026 & 0.025 & 15.822 & 0.036 & 0.035 & 3.949 & 0.012 & 0.012 \\ ATLAS18unu & SN2018ibb & 58472.7 & 44.265 & 0.030 & 0.029 & 15.853 & 0.038 & 0.038 & 3.926 & 0.012 & 0.013 \\ ATLAS18unu & SN2018ibb & 58476.4 & 44.269 & 0.033 & 0.031 & 15.866 & 0.041 & 0.040 & 3.921 & 0.013 & 0.014 \\ ATLAS18unu & SN2018ibb & 58481.0 & 44.300 & 0.041 & 0.039 & 15.950 & 0.050 & 0.050 & 3.887 & 0.017 & 0.016 \\ ATLAS18unu & SN2018ibb & 58484.7 & 44.234 & 0.034 & 0.035 & 15.894 & 0.043 & 0.046 & 3.898 & 0.015 & 0.015 \\... &... &... &... &... &... &... &... &... &... &... &... \\ \hline \end{tabular} Note. – Bolometric luminosity, effective radius, and temperature estimated from the modified blackbody fits to the host-subtracted and extinction-corrected _Swift_ data. Here we show a subset of the fits for the SLSN ATLAS18unu (SN2018ibb) to illustrate the format. The full table is available as an ancillary file. \end{table} Table 7: Modified Blackbody Fits sion at wavelengths shorter than 3000 A. Unlike many other models, MOSFiT compares directly to the multi-band photometry rather than pre-computing a bolometric light curve and then fitting a magnetar model. In addition to the _Swift_ UVOT data presented for our sample of SLSNe, we also obtained optical photometry of these events to better sample their light curve evolution. The optical data consisted of survey photometry from the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al., 2018; Smith et al., 2020), Gaia (Wyrzykowski et al., 2012), the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; Chambers et al., 2016), the Zwicky Transient Facility (ZTF; Bellm et al., 2019), and/or ground-based follow-up photometry from the literature as appropriate. The long-baseline optical data, combined with the UV data near peak, is crucial as the decline rate is important for determining the magnetic field strength (Nicholl et al., 2017). We ran MOSFiT for each of our SLSNe using MCMC sampling with 300 walkers and recorded the bolometric luminosity for each chain as a function of time using the Figure 1: Temporal evolution of the UV/optical modified blackbody luminosity (top panel), radius (middle panel), and temperature (bottom panel) for the SLSNe-I in our sample. The solid lines are the median values and the semi-transparent shading corresponds to the \(1\sigma\) uncertainty. The time is in rest-frame days relative to the time of peak luminosity. dense_luminosities and dense_times flags with additional temporal coverage of 1000 days on either end of the observed data. To ensure self-consistency between our modified blackbody fits and the MOSFiT fits, we specified the redshift and luminosity distance for each SN. 
Nominally, we ran each MOSFiT model until convergence at a Potential Scale Reduction Factor (PSRF) value of 1.2 (Gelman and Rubin, 1992; Brooks and Gelman, 1998; Nicholl et al., 2017). In some cases, the runtime on the fits was prohibitively long, so we terminated them after reaching a PSRF value below 1.3. The mean and median PSRF for the full sample was 1.25. We excluded OGLE16dmu from our MOSFiT modeling as it has only marginal coverage in the _Swift_\(U\) and \(UVW1\) bands and no published optical light curves. ### Peak UV/optical Luminosity and Emitted Energy In Figure 3 we show the distribution of peak luminosities and emitted energies for our sample of SLSNe along with corresponding KDEs. Given the dense temporal sampling, we computed the peak luminosity by taking the maximum value of the bolometric luminosity curves from MOSFiT. We took the median value as the peak luminosity and the 16th and 84th percentiles as the 1\(\sigma\) bounds on the peak luminosity. The distribution of peak luminosities from MOSFiT as compared to the peak luminosities from our modified blackbody fits are similar. The luminosity distribution peaks at \(1.5\times 10^{44}\) erg s\({}^{-1}\), with a slight bump near \(5\times 10^{44}\) erg s\({}^{-1}\). The distribution spans \((2-90)\times 10^{43}\) erg s\({}^{-1}\), fully consistent with the luminosity distribution seen in previous studies (Lunnan et al., 2018; De Cia et al., 2018; Angus et al., 2019; Chen et al., 2022). The median peak luminosity is also similar to other studies, although this sample appears to have fewer low-luminosity SLSNe than some other samples (Angus et al., 2019). This is likely a result of the targeted nature of _Swift_ follow-up and the tendency for brighter SLSNe to be bluer (Chen et al., 2022). To calculate the total energy emitted, we integrated the bolometric luminosity over time and again took the median as the total energy with the 16th and 84th percentiles as the 1\(\sigma\) bounds. The total energy distribution ranges from \((1-60)\times 10^{50}\) erg, with a peak at \(2\times 10^{51}\) erg. This is largely consistent with the emitted energies of the sample of SLSNe from (Lunnan et al., 2018), especially since that sample does not cover the full SN light curve and may not account for all of the UV emission. ### Estimated Physical Parameters Beyond estimates of luminosity and energy, we used our MOSFiT modeling to estimate key physical parameters of the newly-formed neutron star and the supernova ejecta. In Figure 4 we show various parameter combinations from MOSFiT along with a comparison set of SLSNe from Nicholl et al. (2017). We also plot several lines representing different ratios of the magnetar spin-down timescale (t\({}_{\rm mag}\)) and diffusion timescale (t\({}_{\rm diff}\)), as Figure 2: Histograms of R\({}_{\rm eff}\) and T\({}_{\rm eff}\) at the time of peak luminosity. Shown in black are KDEs of each distribution normalized to the sample size. The individual SN contribution to the KDEs are weighted by the inverse square of the estimated uncertainty on the peak radius and temperature, with a 1% error floor added in quadrature to avoid over-weighting single objects. suming the median parameters of Nicholl et al. (2017c) for the parameters not shown in a given panel. Overall, we find good agreement between our sample and the Nicholl et al. (2017c) sample for each of the parameters. 
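The peak luminosities and emitted energies described above reduce to a few lines of bookkeeping once the dense MOSFiT output is in hand; the sketch below assumes the posterior light curves have been collected into a (samples \(\times\) times) array, which is a layout choice of ours rather than something MOSFiT enforces.

```python
import numpy as np

def peak_and_energy(times_s, lum_samples):
    """Summarize posterior bolometric light curves.

    times_s     : (n_time,) rest-frame times in seconds, monotonically increasing
    lum_samples : (n_sample, n_time) bolometric luminosities in erg/s, one row
                  per posterior sample (e.g. from dense_luminosities/dense_times)
    """
    peaks = lum_samples.max(axis=1)                    # peak L per sample [erg/s]
    energies = np.trapz(lum_samples, times_s, axis=1)  # integrated E per sample [erg]

    def median_and_bounds(x):
        lo, med, hi = np.percentile(x, [16, 50, 84])
        return med, med - lo, hi - med                 # median, -1 sigma, +1 sigma

    return median_and_bounds(peaks), median_and_bounds(energies)
```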
Using a K-S test (Massey, 1951), the distributions for each of the key physical parameters are consistent between our sample and that of Nicholl et al. (2017c). Our median and 1\(\sigma\) dispersion on key parameters are as follows: P = \(2.4^{+2.5}_{-1.0}\) ms, B\({}_{\perp}\) = \(1.2^{+1.0}_{-0.7}\times 10^{14}\) G, M\({}_{ej}\) = \(7.1^{+12.6}_{-4.7}\,M_{\odot}\), E\({}_{k}\) = \(4.7^{+5.3}_{-3.4}\times 10^{15}\) erg, fully consistent with previous studies (Nicholl et al., 2017c; Hsu et al., 2021; Chen et al., 2022b). The top left panel compares the perpendicular magnetic field to the NS spin period. We find good agreement with previous samples, with a slight bias towards higher magnetic field strengths. In terms of the t\({}_{\rm mag}\) / t\({}_{\rm diff}\) ratio, the SNe tend to prefer a value below 1. The top right panel compares the ejecta mass with the magnetic field. Interestingly, the sources in our sample lie neatly on the t\({}_{\rm mag}\) / t\({}_{\rm diff}\)\(\sim\) 0.1 line. However, several sources in the Nicholl et al. (2017c) sample lie off this line. There also appears to be a moderately significant anti-correlation between the ejecta mass and magnetic field, with Kendall \(\tau=-0.47\) and corresponding p-value of \(1.3\times 10^{-3}\). However, this may simply be the result of observational bias, as SLSNe with lower ejecta masses and weaker magnetic fields tend to be less luminous. The middle left panel compares the ejecta mass and spin period. Our sample is similar to that of Nicholl et al. (2017c), but with higher scatter. Regardless, we confirm the anti-correlation noted by previous studies (Nicholl et al., 2017c; Blanchard et al., 2020; Hsu et al., 2021; Chen et al., 2022b). The middle right panel compares the kinetic energy with the magnetic field strength. Here we have computed the kinetic energy as \(E_{k}=1/2M_{ej}v_{phot}^{2}\)(Nicholl et al., 2017c; Margalit et al., 2018). We note that under the assumption of a homologous density profile this relationship is instead \(E_{k}=3/10M_{ej}v_{phot}^{2}\), although such a difference is unimportant for this study. In neither the Nicholl et al. (2017c) sample or our sample do we find a SLSN that favors t\({}_{\rm mag}\) / t\({}_{\rm diff}\)\(>\) 10 in all parameter comparisons. The bottom left panel compares the kinetic energy with the NS spin period. The plotted line is the sum of the NS spin energy and a characteristic \(10^{51}\) energy for supernovae. Again, we find good agreement with previous work. In the bottom left panel, we compare kinetic energy and ejecta mass along with a one-to-one line. In both the Nicholl et al. (2017c) sample and our sample, the ejecta mass and kinetic energy scale together as expected. ### Effect of Updated Swift Reductions The _Swift_ UVOT photometry provides strong constraints on the SLSN temperature. Therefore, we ask what effect the updated _Swift_ data has on our inferred MOSFiT parameters. To test this, we fit several SNe that have published pre-correction _Swift_ and compared Figure 3: Histograms of peak UV/optical luminosity and total emitted energy computed from the MOSFiT outputs. Shown in black are KDEs of the radius and temperature distributions normalized to the sample size. The individual SN contribution to the KDEs are weighted by the inverse square of the estimated uncertainty on the peak luminosity and energy. 
Figure 4: Key physical parameters (spin period, magnetic field, ejecta mass, and kinetic energy) for the SLSNe in this sample (black squares) and a comparison sample from Nicholl et al. (2017c, blue circles). The lines in the first four panels are lines of constant ratio between the magnetar spin-down timescale and the diffusion timescale. The line in the bottom left panel is a sum of the rotational energy of the NS and a characteristic SN energy. The line in the bottom right panel is a 1:1 line. them to our fits including the updated _Swift_ data. These results are shown in Figure 5. Across all of the key parameters, we find good agreement between the values from fits including published and updated _Swift_, with all having median ratios less than 10%. The ejecta mass and ejecta velocity are the most different, expected as the different temperature constraints affect the diffusion timescale (Nicholl et al., 2017). The lack of stark difference in inferred parameters may not be particularly surprising given the high redshifts of many SLSNe. Each of our SLSNe has observer-frame UV data, whereas this is not true for a large majority of the Nicholl et al. (2017) sample. However, when accounting for the redshift, 60% of the Nicholl et al. (2017) sample has a rest-frame wavelength of \(<3000\) A for the bluest bandpass, and all have a bluest filter with a rest-frame wavelength blue-ward of _Swift_\(B\). Therefore, given the median peak temperature of \(\sim\)11,000 K, the temperature may still be reasonably well-constrained even without observer-frame UV data. ## 6 Correlations between SLSN Parameters and Radiative Emission In addition to the comparisons of key physical parameters shown in Figure 4, we searched for correlations between these physical parameters and the peak luminosities and radiative energies for the SLSNe in our sample. In total, we tested 25 correlations between the various parameters, yielding a revised p-value of \(\sim 0.002\) for significance. In Figure 6 we shown the strongest of these correlations. These correlations are as follows. In the upper left panel we show an anti-correlation between the spin period and peak UV/optical luminosity, with Kendall \(\tau=-0.59\) and a p-value of \(8.2\times 10^{-5}\). The upper right shows the anti-correlation the spin period and radiative energy, with Kendall \(\tau=-0.54\) and a p-value of \(2.9\times 10^{-4}\). The lower left shows the correlation between the kinetic energy of the ejecta and radiative energy, with Kendall \(\tau=0.54\) and a p-value of \(1.7\times 10^{-4}\). Finally, the lower right shows the anti-correlation between the magnetar magnetic field and radiative energy, with Kendall \(\tau=-0.60\) and a p-value of \(2.5\times 10^{-5}\). We confirmed that these correlations exist at significant levels whether we use the medians or best fits (using the score returned by MOSFiT) as our physical parameters. For consistency with previous MOSFiT results, we will continue use the median values for each physical parameter. The anti-correlations between the magnetar spin period and both the peak luminosity and total radiative energy are not surprising when considering the assumptions and expectations of a magnetar central engine. The input energy from a magnetar spin-down model scales most strongly with the spin period of the magnetar, with \(E_{mag}\propto P^{-2}\)(e.g., Ostriker & Gunn, 1971; Kasen & Bildsten, 2010). 
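The correlation search described in this section amounts to a rank-correlation screen over pairs of quantities; the sketch below assumes the per-object median values have been collected into arrays, and applies the same revised significance threshold (\(\sim 0.002\) for 25 tests) quoted above.

```python
from itertools import combinations
from scipy.stats import kendalltau

def screen_correlations(quantities, n_tests=25, alpha=0.05):
    """Kendall-tau screen between SLSN quantities.

    quantities : dict mapping a name (e.g. 'P_spin', 'B_perp', 'L_peak', 'E_rad')
                 to a 1-D array of per-object median values.
    Returns the pairs whose p-value beats the Bonferroni-style threshold.
    """
    threshold = alpha / n_tests            # ~0.002 for 25 tested pairs
    significant = []
    for name_a, name_b in combinations(quantities, 2):
        tau, p_value = kendalltau(quantities[name_a], quantities[name_b])
        if p_value < threshold:
            significant.append((name_a, name_b, tau, p_value))
    return sorted(significant, key=lambda row: row[-1])
```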
While this extra energy from the magnetar must then be diffused through the ejecta (Arnett, 1982), it is clear that shorter spin periods provide larger reservoirs of additional energy to power the supernova. Conversely, a longer spin period provides less additional energy to the supernova, placing a limit on the increased radiation seen for SLSNe as compared to typical Type Ic SNe. Interestingly, in both the peak luminosity and total energy there appears to be increased scatter at short spin periods. This may indicate that below some critical spin period that there is enough additional energy from the magnetar to power a SLSN, but variations in the other physical parameters, which can change the diffusion timescale and therefore the radiative luminosity, may result in different peak luminosities and energies. The physical origin of correlation between the kinetic energy of the ejecta and the radiative energy is not as straightforward under a magnetar spin-down model. To confirm that such a correlation was not simply the re Figure 5: Ratio of physical parameters (spin period in teal, magnetic field in gold, ejecta mass in red, and ejecta velocity in navy) estimated from MOSFiT fits of the updated _Swift_ photometry as compared to MOSFiT fits of published, pre-correction, _Swift_ photometry. The black dashed line is a ratio of one, with the gray shading indicating 10%. The colored lines on the right side are the median ratio for the corresponding physical parameters, all consistent within 10%. sult of modeling assumptions, we conducted a Monte Carlo simulation. We randomly drew parameters for 100 SLSNe assuming a Gaussian distribution centered on the median values for each parameter from Nicholl et al. (2017c) and a standard deviation for each parameter based on the \(1\sigma\) uncertainties from the joint posteriors of the Nicholl et al. (2017c) sample. We then calculated the luminosity as a function of time and the resulting integrated energy. We then applied the cut on kinetic energy discussed in Section 3.8 of Nicholl et al. (2017c). This typically yielded \(\approx 30\) objects, close to our sample size. We computed the Kendall \(\tau\) correlation strength and significance for the set of simulated SLSNe. We repeated the whole procedure 5000 times and asked for how many realizations were the kinetic energy and radiative energy correlated as strongly (i.e. higher \(\tau\)) and as significantly (i.e. lower p-value) than our observed correlation. We found that only 3 out of 5000 (0.06%) of the trials met these requirements, suggesting that this correlation is not a simple covariance Figure 6: _Upper panels_: spin period as compared to the peak UV/optical luminosity (left) and the total radiative energy (right). _Lower panels_: kinetic energy (left) and magnetic field (right) as compared to the total radiative energy. In all four panels the solid gray line is the line of best fit and the dashed gray lines are plus/minus one sigma from the best-fit line. introduced by the assumptions inherent to the MOSFiT modeling. One naive explanation for the correlation between kinetic energy and radiative energy is simply that sources with high ejecta masses also have high nickel masses providing additional energy. Under the assumption that \({}^{56}\)Ni decay provides the energy for these SLSNe, we can use the scaling between nickel mass and energy production (e.g., Nadyozhin, 1994) to estimate the fraction of the ejecta mass that must be in \({}^{56}\)Ni. 
To explain all of the emitted energy through nickel decay, half of our sample requires \({}^{56}\)Ni masses larger than the ejecta mass, which is clearly unphysical. Even for a more conservative assumption that 10% of the emitted energy is a result of \({}^{56}\)Ni decay requires that half of the sample has nickel masses larger than 1 M\({}_{\odot}\), considerably larger than other stripped-envelope supernovae (e.g., Afsariderdi et al., 2021). It therefore seems unlikely that this correlation between kinetic energy and radiative energy results from unmodeled \({}^{56}\)Ni decay. The physical explanation for such a correlation, if real and not a result of assumptions made when modeling the SN light curves, requires further exploration. The strong anti-correlation between magnetic field and radiative energy is likely the result of the need to match the magnetar spin-down timescale and diffusion timescale in the ejecta for a SLSNe to be luminous (e.g., Nicholl et al., 2017, also see Fig. 4). This may simply lead to an observational bias, where SNe that do not lie on such a correlation are not luminous and therefore are not observed at a given distance. Thus, the true distribution of these parameters in nature may not result in a strong anti-correlation. This correlation supports previous work suggesting that the magnetar spin-down and diffusion timescales must be well-matched for a SLSN to occur (Kasen and Bildsten, 2010; Metzger et al., 2015; Nicholl et al., 2015, 2017). ## 7 Conclusions In this work, we study the UV/optical evolution of 27 well-observed SLSNe. We select only sources which have been well observed by the _Swift_ UVOT, allowing for strong constraints on their UV emission and temperature evolution. The majority of our SLSNe also have long-term optical light curves enabled by modern all-sky transient surveys. Through our analysis of the SLSNe light curves, we have recovered several known trends among SLSNe. The first is that the SEDs of SLSNe are well-fit by modified blackbodies. Through a comparison of modified and simple blackbody models we found that while many sources are fit well by either SED model, a majority of sources prefer a modified blackbody. These findings are in agreement with direct studies of SLSNe rest-frame UV spectra (Chomiuk et al., 2011; Yan et al., 2017, 2018). From our modified blackbody fits we find a median temperature of \(\approx 11,000\) K and median radius of \(\approx 4\times 10^{15}\) cm, each consistent with previous work (Lunnan et al., 2018; Chen et al., 2022). While modified blackbody fits to the UVOT data provide strong constraints on temperature and radius evolution, the incomplete coverage of many events precludes the measurement of a peak luminosity and/or total energy for some objects. We therefore used MOSFiT and extrapolation of the best-fit model to find a median peak luminosity of \(1.5\times 10^{44}\) erg s\({}^{-1}\) and median total radiative energy of \(2\times 10^{51}\) erg, again consistent with earlier work (Nicholl et al., 2017; Lunnan et al., 2018; Angus et al., 2019; Chen et al., 2022). With the same MOSFiT runs, we estimated key physical parameters of the SLSNe, including the neutron star spin period and magnetic field strength, ejecta mass, and kinetic energy of the ejecta. The distributions of these parameters for our sample are in full agreement with MOSFiT fits to other samples of SLSNe (e.g., Nicholl et al., 2017; Blanchard et al., 2020; Hsu et al., 2021; Chen et al., 2022). 
We find that despite correcting UV data taken when the _Swift_ UVOT calibrations overestimated the source magnitudes, the key physical parameters remain consistent within \(\sim 10\%\). One interesting trend apparent from our MOSFiT runs is a possible anti-correlation between the ejecta mass and magnetic field strength. Such a correlation is not seen in previous works (Nicholl et al., 2017) and may simply be a result of observational bias, as SLSNe with lower ejecta masses and weaker magnetic fields are less luminous. However, this possible anti-correlation, combined with the known anti-correlation between ejecta mass and spin period, may have implications for the formation of neutron stars during core-collapse supernovae. We find additional correlations between physical parameters and the peak luminosity and radiative energy output of the SLSNe. The anti-correlations between spin period and luminosity and spin period and energy are caused by the spin period being the dominant factor in setting the available extra energy for the SLSNe under a magnetar model. We find no obvious explanation for the apparent correlation between kinetic energy and radiative energy, but it is inconsistent with being simply the result of additional nickel mass within the ejecta. Finally, the anti-correlation between magnetic field strength and energy seems most related to the requirement that the diffusion and magnetar spin-down timescales are well-matched to power a SLSNe, as compared to a typical Type Ib/c supernova. We note that our study only considers a magnetar model (Ostriker and Gunn, 1971; Kasen and Bildsten, 2010; Inserra et al., 2013; Nicholl et al., 2017c) when fitting the observed multi-band lights curves of these SLSNe. A non-negligible fraction of SLSNe-I show signs of bumps or undulations in their light curves (Nicholl et al., 2016; Lunnan et al., 2020; Hosseinzadeh et al., 2022; West et al., 2023). This behavior is not typical of a basic magnetar spin-down model, although some recent work has attempted to extend magnetar models to describe bumpy light curves (Dong et al., 2023). Furthermore, some studies have suggested that \(\sim 25\%\) of SLSNe-I, particularly those with light curve undulations, can be better described with a H-poor CSM interaction model (Chen et al., 2022). The origin of these light curve undulations remains unclear. Despite a sample of SLSNe with well-measured UV evolution, we find no strong trends between any of the physical parameters studied here and the presence of light curve undulations. As Type I SLSNe can be very luminous they allow for studies of supernova physics and rates at high redshift (Angus et al., 2019). Additionally, there is promising evidence that SLSNe-I can be used as cosmological probes (Inserra et al., 2021; Khetan et al., 2023). As such, it is important to understand the progenitors and explosion physics of such events. With the upcoming Legacy Survey of Space and Time (LSST; Ivezic et al., 2008) on the Vera Rubin Observatory, we will find many more SLSNe. As we have shown, as long as there is sufficient rest-frame UV coverage, such events can be well-studied and used to further understand this rare population of massive star supernovae. Finally, we have provided the uniformly reduced, updated Swift photometry for these 27 well-observed SLSNe. We thank Matt Nicholl for helpful feedback on the manuscript. J.T.H. and this work was supported by NASA award 80NSSC21K0136. 
B.J.S. is supported by NSF grants AST-1908952, AST-1920392, AST-1911074, and NASA award 80NSSC19K1717. This research has made use of the SVO Filter Profile Service ([http://svo2.cab.inta-csic.es/theory/fps/](http://svo2.cab.inta-csic.es/theory/fps/)), supported by the Spanish MINECO through grant AYA2017-84089. _Software:_ MOSFiT (Nicholl et al., 2017; Guillochon et al., 2018), emcee (Foreman-Mackey et al., 2013).
2309.17205
Towards Complex-query Referring Image Segmentation: A Novel Benchmark
Referring Image Segmentation (RIS) has been extensively studied over the past decade, leading to the development of advanced algorithms. However, there has been a lack of research investigating how existing algorithms should be benchmarked with complex language queries, which include more informative descriptions of surrounding objects and backgrounds (e.g., _"the black car."_ vs. _"the black car is parking on the road and beside the bus."_). Given the significant improvement in the semantic understanding capability of large pre-trained models, it is crucial to take a step further in RIS by incorporating complex language that resembles real-world applications. To close this gap, building upon the existing RefCOCO and Visual Genome datasets, we propose a new RIS benchmark with complex queries, namely **RIS-CQ**. The RIS-CQ dataset is of high quality and large scale, which challenges the existing RIS with enriched, specific and informative queries, and enables a more realistic scenario of RIS research. Besides, we present a niche-targeting method to better task the RIS-CQ, called dual-modality graph alignment model (**DuMoGa**), which outperforms a series of RIS methods.
Wei Ji, Li Li, Hao Fei, Xiangyan Liu, Xun Yang, Juncheng Li, Roger Zimmermann
2023-09-29T12:58:13Z
http://arxiv.org/abs/2309.17205v1
# Towards Complex-query Referring Image Segmentation: A Novel Benchmark ###### Abstract Referring Image Understanding (RIS) has been extensively studied over the past decade, leading to the development of advanced algorithms. However, there has been a lack of research investigating how existing algorithms should be benchmarked with complex language queries, which include more informative descriptions of surrounding objects and backgrounds (_e.g._, _"the black car."_ vs. _"the black car is parking on the road and beside the bus."_). Given the significant improvement in the semantic understanding capability of large pre-trained models, it is crucial to take a step further in RIS by incorporating complex language that resembles real-world applications. To close this gap, building upon the existing RefCOCO and Visual Genome datasets, we propose a new RIS benchmark with complex queries, namely **RIS-CQ**. The RIS-CQ dataset is of high quality and large scale, which challenges the existing RIS with enriched, specific and informative queries, and enables a more realistic scenario of RIS research. Besides, we present a niche targeting method to better task the RIS-CQ, called dual-modality graph alignment model (**DuMoGa**), which outperforms a series of RIS methods. ## 1 Introduction Correctly comprehend the subtle alignment (i.e., grounding) between language and vision has long been the pivotal research in the intersecting communities of computer vision and language processing [29; 28; 33]. Among a range of vision-language grounding topics, Referring Image Segmentation (RIS) has been proposed with the aim to ground a given language query onto a specific region of an image, i.e., typically represented by a segmentation map, as exemplified in Figure 1. By precisely bridging the semantics between the images and texts, RIS plays a crucial role in various downstream cross-modal applications, including image editing [2; 12], and language-based human-robot interaction[25; 32]. Although attracting increasing research attention, the existing RIS benchmarks, unfortunately, can be subject to the **simplicity and naivety of the language queries**. Specifically, via data statistics of the current popular RIS datasets, including RefCOCO and RefCOCO+ [30; 20], we find that _i)_ 85.3% text queries have short forms (i.e., with length \(\leq\) 5 words), and _ii)_ 83.8% queries involve only one or two visual objects. We note that such characteristics would inevitably lead to the key pitfalls that hamper the utility and applicability of RIS task. * On the one hand, caused by the inequality between vision and language, i.e., language is abstract and succinct while vision always entails richer details, simple short language queries with shallow object descriptions usually trigger ambiguity for the visual coreference. The sample in Figure 1 shows the case. More severely, in the existing RIS datasets, a significant majority of query sentences suffer from such ambiguity. Intuitively, RIS models, being trained on such overly simplistic data, would largely fail to disambiguate the image referring and lead to suboptimal performance. * On the other hand, in real-world applications, individuals are more likely to input detailed textual descriptions in complex forms, so as to accurately locate the desired objects. Although the recent RIS methods [27; 31; 5] achieve satisfactory performance on the in-house testing set, they can still struggle when confronted with complex and informative queries. 
Our preliminary experiment indicates that even the current top-performing RIS model, LAVT [27], drops dramatically (74.46 vs. 10.84 in mIoU on two RIS datasets) when tested on complex queries. In other words, being trained on existing RIS datasets can lead to a failure to generalize to the queries posed by real users in the wild. The above observations strongly motivate the exploration of _Referring Image Segmentation with Complex Queries (RIS-CQ)_. In this work, we propose a novel benchmark for RIS under such a complex scenario. Specifically, we construct a RIS-CQ dataset (cf. §2), which is of high quality and large scale. Technically, we first extract the salient objects with their pertaining relations (spatial, action, _etc._) from an image, and then generate semantically detailed and meaningful descriptions of the referring objects with respect to their surrounding objects, which serve as the complex-form queries. Notably, recent large language models (LLMs), e.g., ChatGPT [19], have revealed a great capability for human-level understanding of language semantics. Thus we take advantage of the LLM to help generate large amounts of complex queries without sacrificing labeling quality. Finally, we obtain a novel RIS-CQ dataset with 118,287 images and 13.18 words on average for each query. The key to accurate RIS recognition in our scenario essentially lies in a deep understanding of the underlying semantics of both modalities, because of the intrinsically complex form of the textual query and the sophisticated visual content. To this end, we propose a novel **dual-modality graph alignment** (dubbed **DuMoGa**) model (cf. §3) to benchmark the RIS-CQ task, where the input sentence and image are represented with a semantic dependency graph [22] and a semantic scene graph [23], respectively. Meanwhile, the semantics of the two modalities must interact sufficiently for a deep comprehension of the inputs. Correspondingly, two levels of cross-modal alignment learning are carried out in DuMoGa to capture the intrinsic correspondence between the input text and vision, including the structural alignment between the two semantic graphs and the feature alignment between the two semantic representations. On the RIS-CQ dataset, our DuMoGa achieves 24.4 in mIoU, outperforming the current state-of-the-art (SoTA) RIS method by more than 200%. To sum up, this work makes three contributions. **(1)** We construct a novel benchmark dataset, RIS-CQ, which challenges the existing RIS with complex queries, and enables a more realistic scenario of RIS research. **(2)** We present a strong-performing system (DuMoGa) to model the task, which brings new SoTA results on the RIS-CQ dataset. **(3)** A series of in-depth analyses is presented based on our dataset and the systems, where some important and interesting findings are presented to shed light on the future exploration of this topic. All our data and resources will be made open later to facilitate follow-up research.

Figure 1: The comparison between the existing Referring Image Segmentation (RIS) and the complex-query RIS proposed in this work.
## 2 Constructing Complex Query for Referring Image Segmentation ### Problem Definition Given an image \(\mathcal{I}\) and a set of textual queries (i.e., long-form sentences with semantically complex expressions) \(\mathcal{Q}=\{\mathbf{p}_{i}\}_{i=1}^{M}\), RIS-CQ aims to predict a set of segmentation masks \(\mathcal{S}=\{\mathbf{s}_{i}\}_{i=1}^{M}\), where each \(\mathbf{s}_{i}\) correspond to each query \(\mathbf{p}_{i}\) that localizes it in the image. Note that \(M\) is the number of referring expressions for a given image \(\mathcal{I}\), which is set as \(M\in[1,10]\). ### Dataset Construction In this section, we elaborate on how to elicit complex and contextualized queries from large language models (_i.e._, ChatGPT) by leveraging diverse structured semantics in images (_e.g._, inter-object relations). First, we extract holistic semantic relations for each image and then select more significant objects that involve diverse interactions with other objects as the candidate objects. Second, we utilize specially tailored prompts to guide ChatGPT in generating complex queries for each candidate object based on rich visual context. Finally, we manually filter out or revise problematic queries (_e.g._, ambiguous references), to ensure the annotation quality. Step-1: Relation extraction.To begin with, we utilize an off-the-shelf scene graph generation model, VC-Tree, to directly extract the objects present in the images along with their relationships. These relationships are stored in the form of triplets, such as <person, cup, holding>, which indicates that the person is holding the cup. Additionally, during this step, we apply a simple filter to exclude objects that have fewer than **two** relationships with other objects. We believe that generating queries to describe these objects would be relatively straightforward, increasing the probability of ambiguous references or incorrect references. Step-2: Complex query generation.In this section, we devise a tailored set of prompts to efficiently leverage the in-context learning capabilities of a large language model. For each object and its corresponding relation triplets, we generate a descriptive text query using the model. For instance, given a set of triplets associated with the target object _person_, <person, cup, holding>, <person, wall, leaning on>, <table, person, next to>, ChatGPT can provide us with the output "the person is next to the table, holding the cup and leaning on the wall." Specifically, we utilize the gpt-3.5-turbo model as our large language model, with the API interface provided by OpenAI. To ensure stable outputs from the model, we set the decoding temperature to 0. The relevant prompts used are detailed in the appendix for reference. An additional noteworthy aspect is that the presence of the large language model entirely liberates us from the manual labor of annotating images for generating language queries. This significantly reduces our workforce costs while ensuring the quality of queries. Moreover, it facilitates the expansion of our dataset to the magnitude of 100k. Step-3: Post-processing.Finally, we manually filter out or revise queries with ambiguous references to ensure a precise one-to-one correspondence between the queries and objects within the image. After multiple iterations in constructing prompts, we have successfully developed prompts that yield high-quality queries generated by ChatGPT. 
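As an illustration of Step-2, the sketch below turns the relation triplets for one target object into a complex query with the gpt-3.5-turbo API. The prompt wording here is a placeholder of ours (the prompts actually used are given in the appendix), and the call uses the pre-v1 openai-python interface with the decoding temperature set to 0 as described above.

```python
import openai  # pre-v1 openai-python client

# Placeholder instruction; the prompts actually used for RIS-CQ are in the appendix.
SYSTEM_PROMPT = (
    "Write one natural-language referring expression for the target object, "
    "using only the provided relation triplets and inventing no new facts."
)

def generate_query(target_object, triplets):
    """Generate a complex query for `target_object` from its scene-graph triplets."""
    triplet_text = "; ".join(f"<{s}, {o}, {r}>" for s, o, r in triplets)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # stable outputs, as in the dataset construction
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Target object: {target_object}. Triplets: {triplet_text}."},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()

# Example:
# generate_query("person", [("person", "cup", "holding"),
#                           ("person", "wall", "leaning on"),
#                           ("table", "person", "next to")])
```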
However, it is important to note that since ChatGPT receives input solely from the relation triplets generated by the scene graph model, it lacks certain contextual information from the image. As a result, there may be instances where the generated queries exhibit ambiguous references to the objects being described. Unfortunately, these queries cannot be filtered out automatically in our pipeline, necessitating manual intervention for their removal.

### Dataset Statistics

Figure 3 presents the statistical analysis of objects and predicates in the RIS-CQ dataset, which consists of a total of 133 object classes and 56 relation classes. The language queries are generated solely based on the objects depicted in Figure 3 along with their corresponding relations. The objects and relations are categorized into 9 groups and 6 groups respectively, with each group containing elements that exhibit certain correlations. The group names represent abstract summaries of the shared attributes among the elements within each group.

Figure 3: The distribution of object and relation categories, organized based on the parent classes. Best viewed by zooming in.

We also analyze the failure cases of current models on the RIS-CQ dataset. From Figure 2, we can summarize them into three types: misidentified referent, incorrect object classification, and segmentation failure.

Figure 2: Failure cases of Complex-query Referring Image Segmentation.

### Dataset Comparison

**RefCOCO/ RefCOCO+/ RefCOCOg.** RefCOCO [30], RefCOCO+ [30], and RefCOCOg [20] are three visual grounding datasets with images and referred objects selected from MS COCO [15]. The referred objects are selected from the MS COCO object detection annotations and belong to 80 object classes. RefCOCO [30] has 19,994 images with 142,210 referring expressions for 50,000 object instances. RefCOCO+ has 19,992 images with 141,564 referring expressions for 49,856 object instances. RefCOCOg has 25,799 images with 95,010 referring expressions for 49,822 object instances.

**Visual Genome.** Visual Genome (VG v1.4) [11] contains 108,077 images with 21 relationships on average per image, which is split into 103,077 training images and 5,000 testing images.

**RIS-CQ** is our proposed referring image segmentation benchmark, which targets the explanation of image contents. The image source for RIS-CQ is the union of the VG [11] and COCO [15] datasets. RIS-CQ challenges RIS models to understand the rich object interactions in daily activities. As shown in Table 1, we summarize the details of each dataset compared with our RIS-CQ dataset.

## 3 Proposed Method

Unlike classic dense prediction models with complex designs in RIS (e.g., LAVT [27]), which require expensive computation to make inferences, a graph-learning-based architecture provides an efficient training process and promising results. To handle the numerous descriptions of surrounding objects and backgrounds within images, images are parsed into scene graphs, which are treated as fine-grained image representations to analyze the relations between these objects with a graph structure. On the other hand, following previous dependency-based RE methods [6], we parse the queries into syntax dependency trees, which provide information for dual-modality graph alignment. As a result, we propose a graph learning-based method named DuMoGa to align the semantics in queries with the information in images, aiming at efficiently locating the target instances. The whole structure of our proposed DuMoGa is shown in Figure 4.
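To illustrate the query-side graph used below, the following sketch extracts (governor, dependent) edges for a query sentence. spaCy is used here purely as a stand-in for illustration; the method itself follows [22] and relies on an ELMo-based dependency parser.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # stand-in parser, for illustration only

def dependency_graph(query):
    """Return token nodes and (governor index, dependent index, relation) edges."""
    doc = nlp(query)
    nodes = [token.text for token in doc]
    edges = [(token.head.i, token.i, token.dep_)
             for token in doc if token.head.i != token.i]
    return nodes, edges

nodes, edges = dependency_graph(
    "the person is next to the table, holding the cup and leaning on the wall"
)
```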
\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Dataset** & **RefCOCO** & **RefCOCO+** & **RefCOCOg** & **Visual Genome** & **RIS-CQ** \\ \hline \# Images & 19,994 & 19,992 & 26,711 & 108,077 & 118,287 \\ \# Text query & 142,209 & 141,564 & 85,474 & 5.4M & 285,781 \\ Avg. query length & 3.61 & 3.53 & 8.43 & 5 & 13.18 \\ Avg. object / query & 1.76 & 1.67 & 3.03 & - & 3.58 \\ Annotation methods & Manual & Manual & Manual & Manual & Auto + Manual \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of current Referring Image Understanding benchmarks. Figure 4: Our proposed DuMoGa framework, includes three procedures: Dual-modality Graph Representation, Dual-modality Graph Alignment, and Prediction. Best viewed by zooming in. ### Dual-Modality Graph Representation **Semantic Scene Graph Generation for Vision.** We employ a classic SGG model (e.g. VCTree [23; 13]) to parse images into scene graphs. The whole process can be formulated as: \[P_{r}\left(\mathcal{G}\mid\mathcal{I}\right)=P_{r}\left(\mathcal{S},\mathcal{O },\mathcal{R}\mid\mathcal{I}\right). \tag{1}\] An image \(\mathcal{I}\) can be segmented into a set of masks \(\mathcal{S}\). Each mask is associated with an object with class label \(\mathcal{O}\). A set of relations \(\mathcal{R}\) between objects are predicted. We construct the node set \(N_{i}=\{O_{j}|j\in[1,n]\}\) of the scene graph, \(O_{i}\) is the \(i_{th}\) object detected in the image. \(n\) represents the number of detected object, and we set the maximum number of detected object for each image to \(N\). For the edge set \(E_{i}=\left\{R_{a,b}=\left(I_{a},I_{b}\right)|a\in[1,n]\,,b\in[0,n-1]\right\}\), \(R_{a,b}\) denotes the \(a_{th}\) object related with the \(b_{th}\) object. As a result, the scene graph can be formulated as follows: \[G_{i}=\left(N_{i},E_{i}\right). \tag{2}\] **Syntax Dependency Graph for Text.** We use the syntax dependency tree to explore the dependency between words in the sentence. Following [22], we use ELMo to obtain the dependency tree for the input text after which each word from the text is connected by its governor and obtains its related dependency triple. For the node set \(N_{t}=\{T_{j}|j\in[1,l]\}\) of the dependency tree, \(T_{j}\) denotes the \(j_{th}\) token in the sentence, and \(l\) represents the total length of the sentence. For the edge set \(E_{t}=\{G_{a,b}=\left(T_{a}^{*},T_{a}\right)|a\in[1,l]\}\),\(T_{a}^{*}\) denotes the governor of the \(a_{th}\) token, and the graph representation for the sentence can be formulated as follows: \[G_{t}=\left(N_{t},E_{t}\right). \tag{3}\] ### Dual-modality Alignment To accurately locate the target instance using the query sentence, the gap between visual and semantics needs to be bridged. Thus, we propose the graph and feature alignment process to efficiently align visual and language domains. **Graph Alignment.** We approximate the node embedding matrix \(\widetilde{P}\) by factorizing a similarity matrix of the node identities, and align nodes between the above two graphs by greedily matching the most similar embeddings from the other graph. Following [35], we combine \(N_{i}\) and \(N_{t}\), and count both in and out degrees of k-top neighbors for each node. Then we compute the similarity between every two nodes, getting a \(n\times n\) similarity matrix \(\widetilde{P}\). After that we subsets \(P_{1}\) and \(P_{2}\) from \(\widetilde{P}\). The \(P_{1}\) and \(P_{2}\) denote separate representations for nodes in \(G_{i}\) and \(G_{t}\). 
\[\widetilde{P}_{1},\widetilde{P}_{2}=D\left(N\left(\widetilde{P}\right)\right), \tag{4}\] where \(D\) denotes the dividing operation of \(P\) by the number of nodes in \(N_{i}\) and \(N_{t}\) in order, and \(N\) is normalization operation. When finishing graph structure alignment, we calculate the similarity between node \(i\) from \(\widetilde{P}_{1}\) and node \(j\) from \(\widetilde{P}_{2}\) using the formula below: \[a_{ij}=exp(-\left\|\widetilde{P}_{1}[i]-\widetilde{P}_{2}[j]\right\|_{2}^{2}). \tag{5}\] With the similarities between nodes, the two graphs are transformed into a similarity map \(\alpha\), which can be formulated as: \[\alpha=\left(a_{ij}\right)_{\left|V_{1}\right|\times\left|V_{2}\right|}, \tag{6}\] where \(a_{ij}\) represents the structural similarity between the \(i_{th}\) word of the input text and the \(j_{th}\) object of the input image. **Feature Alignment.** Though graph structure is efficient, the information within is not enough for the task. We take the advantage of visual features from images and word embeddings from sentences to promise a fine-grained searching space. Specifically, for each image \(I\), we get its visual feature \(F_{i}\) from backbone model (e.g. ResNet-50 [7]). For each sentence \(Q\), we get its semantic embedding \(F_{l}\) using BERT [3]. We treat \(F_{i}\) as the query while \(F_{l}\) is treated as the key and value, completing the attention process below: \[R^{a}=\text{Softmax}\left(\frac{qk^{T}}{\sqrt{d}}\right)v, \tag{7}\] where d denotes the dimension of \(F_{l}\). **Feature Fusion and Prediction.** After obtaining the results from graph alignment and feature alignment processes, we fuse them to make the final prediction. \[R^{f}=MLP\left([R^{a};\alpha F_{l}]\right),\quad s=j^{*}=\operatorname*{Argmax} _{1\leq j\leq N}(R_{1}^{f},...R_{j}^{f},...,R_{N}^{f})\,, \tag{8}\] Where \([;]\) denotes concatenation, and we use the MLPs to project the feature to \(N\) dimension. \(s\) denotes the output of the referring objects from the input with the largest probability. ### Training Loss For the \(N\) detected objects in the image, we match their masks with the ground truth mask and obtain the \(g_{th}\) object that share the most intersection over union with the ground truth. To maximize the probability on the \(g_{th}\) dimension of \(R^{f}\), we employ a cross-entropy loss function below: \[L=\sum_{i=1}^{N}[-K^{i}log(R_{i}^{f})-(1-K^{i})log(1-R_{i}^{f})]. \tag{9}\] \(K\) represents a one-hot vector with all zeros except for the \(g_{th}\) dimension, and \(K^{i}\) denotes the \(i_{th}\) dimension of \(K\). As a result, the probability on the \(g_{th}\) dimension of \(R^{f}\) is maximized. ## 4 Experiment ### Experimental Settings We employ a VCTree [23] with ResNet-50 [7] as its backbone for scene graph generation and visual feature extraction. The maximum detected object in one image is set to 10. Specifically, the feature dimension for each of the detected object is 1024. For the query sentence, we append the [CLS] and [SEP] tokens to the beginning and end of the sentence respectively. Then a pre-trained BERT is used to generate sentence embedding and the dimension is set to 768. For our DuMoGa model, it is trained with Adamw optimizer, where we set the base learning rate at 2e-5 and the batch size at 64. ### Evaluation Metrics For the Referring Image Segmentation task, we follow previous works [16; 21] and adopt Precision@\(X\) and IoU to verify the effectiveness. 
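For concreteness, the sketch below wires together the alignment-and-fusion step of Eqs. (5)–(8) above. The projection layer and hidden width are assumptions of ours, added only so that the 1024-d visual features and 768-d BERT embeddings can interact; they are not specified in the paper.

```python
import torch
import torch.nn as nn

class DuMoGaHead(nn.Module):
    """Minimal sketch of graph + feature alignment and fusion (Eqs. 5-8)."""

    def __init__(self, vis_dim=1024, txt_dim=768, hidden=512):
        super().__init__()
        self.q_proj = nn.Linear(vis_dim, txt_dim)   # assumed projection layer
        self.scale = txt_dim ** 0.5
        self.mlp = nn.Sequential(nn.Linear(2 * txt_dim, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, obj_feats, word_feats, obj_emb, word_emb):
        # obj_feats : (N, vis_dim) visual features of the N detected objects
        # word_feats: (L, txt_dim) BERT embeddings of the L query tokens
        # obj_emb, word_emb: structural node embeddings from the graph alignment

        # Eqs. (5)-(6): structural similarity between every word and object node
        alpha = torch.exp(-torch.cdist(word_emb, obj_emb) ** 2)      # (L, N)

        # Eq. (7): each detected object attends over the query tokens
        q = self.q_proj(obj_feats)                                    # (N, txt_dim)
        attn = torch.softmax(q @ word_feats.T / self.scale, dim=-1)   # (N, L)
        r_attn = attn @ word_feats                                    # (N, txt_dim)

        # Eq. (8): fuse structural and feature alignment, score each object
        r_struct = alpha.T @ word_feats                               # (N, txt_dim)
        scores = self.mlp(torch.cat([r_attn, r_struct], dim=-1)).squeeze(-1)
        return scores   # argmax over the N objects gives the referred instance
```

Training then pushes up the score of the detected object whose mask best overlaps the ground truth, via the cross-entropy objective of Eq. (9).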
The IoU calculates intersection regions over union regions of the predicted segmentation mask and the ground truth. The Precision@\(X\) measures the percentage of test images with an IoU score higher than the threshold \(X\in\{0.3,0.4,0.5,0.6,0.7\}\), which focuses on the location ability of the method. ### Performance Comparison Table 3 shows the performance comparison with other SoTA methods on the RIS-CQ dataset, which includes the following models: \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Backbone Model**} & \multicolumn{6}{c}{**Refer Image Segmentation**} \\ \cline{3-7} & & mIoU & [email protected] & [email protected] & [email protected] & [email protected] & [email protected] \\ \hline MAttNet [31] & ResNet-101 & 8.00 & 9.61 & 7.90 & 6.15 & 5.51 & 4.58 \\ VPD [34] & U-Net & 24.0 & 29.5 & 27.5 & 23.8 & 21.5 & **19.3** \\ LAVT [4] & SWIN-B & 21.2 & 26.6 & 21.6 & 17.1 & 13.7 & 10.9 \\ UNINEXT [26] & ResNet-50 & 19.8 & 22.3 & 21.8 & 21.0 & 19.9 & 19.2 \\ \hline DuMoGa (_GA_) & ResNet-50 & 15.0 & 19.5 & 18.3 & 16.7 & 14.4 & 11.7 \\ DuMoGa (_FA_) & ResNet-50 & 16.4 & 21.1 & 19.4 & 17.8 & 15.5 & 13.4 \\ DuMoGa (_FULL_) & ResNet-50 & **24.4** & **31.8** & **29.7** & **26.8** & **23.1** & **19.3** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with state-of-the-art methods in terms of overall IoU on three benchmark datasets. _GA_ represents Graph Alignment, _FA_ represents Feature Alignment, and _Full_ represents Graph Alignment + Feature Alignment. **LAVT**[27] is a Transformer-based framework for referring image segmentation. Unlike traditional approaches that fuse cross-modal information after feature extraction, LAVT incorporates language-aware visual encoding directly into the model architecture; **VPD**[34] proposed a new framework that exploits the semantic information of a pre-trained text-to-image diffusion model in visual perception tasks; **UNINEXT**[26] reformulates diverse instance perception tasks into a unified object discovery and retrieval paradigm and can flexibly perceive different types of objects by simply changing the input prompts; **MAttNet**[31] is a two-stage method that first extracts multiple instances by using an instance segmentation network Mask RCNN [8], then utilizes language features to select the target from the extracted instances. Due to the space limitation, we only list selected SoTA methods, more details can be found in **Supplementary Materials**. As shown in Table 3, our method achieves remarkably superior performance on RIS-CQ dataset compared with other SoTA methods. Also our DuMoGa model is based on ResNet-50 [7], which is not so strong as Swin-B [17], DuMoGa achieves over 100% to 200% performance gain in all metrics compared with LAVT [4] on RIS-CQ dataset. With stronger backbone models, the performance of our DuMoGa model can be further improved. We also provide the visualization of the output of our DuMoGa model in Figure 5. Compared with LAVT [4], our DuMoGa can accurately locate the referring object based on the comprehensive understanding of visual-textual data, rather than response to partial words in the query (such as the _stop sign_ in Figure 5 (a)). ### In-depth Analysis **Q1: How to control the annotation quality?** The annotation process for the RIS-CQ dataset spanned over a period of 6 months and involved the collaboration of 5 undergraduate students. 
To ensure the production of high-quality annotations, the annotators were supervised and adhered to specific principles. Firstly, all annotators underwent rigorous training before commencing the actual annotation process. Secondly, separate annotators were assigned for query and object selection annotations. Annotators responsible for object selection were instructed to review the quality of queries, ensuring that unclear or poor queries were either rectified or eliminated. This approach simulated the evaluation process, guaranteeing the reasonableness of the queries while avoiding subjective annotations. Lastly, guidelines were provided for the maximal lengths of language queries, specifying limits of over 20 words for queries and over 5 for objects/background labels. We provide more details in the **Supplementary Materials**. **Q2: What is the necessity of complex query? On the one hand**, current RIS models are sensitive to language queries. Even input two different queries which refer to the same object but contain Figure 5: Visualization result of our proposed DuMoGa model on RIS-CQ dataset. It is noteworthy that invalid prediction for LAVT was observed in the case of sample (b). information in different granularities, current RIS models will output different regions. So we need to construct a new RIS dataset with complex queries to validate the robustness of the proposed RIS models. **On the other hand**, it is not the final path to general RIS model if restricted to train models with downstream annotated benchmark datasets, such as RefCOCO with limited object classes and short query length. It's time to make a further step to real applications by combining with large pre-trained models and propose a novel and informative RIS dataset with more relations among objects. ## 5 What To Do Next? With the emergence of large pre-trained models (such as GPT-4 [1] and SAM [10]), the semantic understanding ability of dealing with multi-modal data has been rapidly enhanced. Then, utilizing large pre-trained models is an irresistible trend due to their superiority in open-vocabulary scenarios. In the next step, we can develop lightweight RIS modules (based on the Graph Alignment and Feature Alignment modules in our DuMoGa model) to adapt large pre-trained models in a plug-and-play manner. Besides, on the basis of our proposed RIS-CQ dataset, we can explore more scenarios in real applications, such as referring object understanding in video and audio modality. And evaluate the robustness of proposed RIS models even with none or multiple referring objects according to the complex queries. ## 6 Related Work Referring image segmentation aims to segment a target region (_e_.\(g\)., object or stuff) in an image by understanding a given natural linguistic expression, which was first introduced by [9]. Early works [16; 14; 21] first extracted visual and linguistic features by CNN and LSTM, respectively, and directly concatenated two modalities to obtain final segmentation results by an FCN [18]. In MAttNet [31], Yu et al. proposed a two-stage method that first extracts instances using Mask R-CNN [8], and then adopts linguistic features to choose the target from those instances. Then, a series of works are proposed to adopt the attention mechanism. EFNet [5] designs a co-attention mechanism to use language to refine the multi-modal features progressively, which can promote the consistency of the cross-modal information representation. 
Some recent works leverage transformer [24] to deal with the RES task with satisfying performance. VLT [4] employs a transformer to build a network with an encoder-decoder attention mechanism for enhancing the global context information. All these methods are trained with RIS datasets with simple queries. ## 7 Conclusion In this paper, we propose a novel benchmark dataset for Referring Image Segmentation with Complex Queries (RIS-CQ). The RIS-CQ dataset is of high quality and large scale, which challenges the existing RIS with complex, specific and informative language queries, and enables a more realistic scenario of RIS research. Besides, we propose a novel SoTA framework to task the RIS-CQ dataset, called dual-modality graph alignment model (DuMoGa), which effectively captures the intrinsic correspondence of the semantics of the two modalities. Experimental results and analyses demonstrate the necessity of our proposed dataset and the model.
2306.17480
Constraints on thermalizing surfaces from infrared observations of supermassive black holes
Infrared observations of Sgr A$^*$ and M87$^*$ are incompatible with the assumption that these sources have physical surfaces in thermal equilibrium with their accreting environments. In this paper we discuss a general parametrization of the energy balance in a horizonless object, which makes it possible to quantify how closely a horizonless object mimics the behavior of a black hole, and analyze the timescale in which its surface can thermalize. We show that the thermalization timescale is unbounded, growing large for objects that mimic closely the behavior of a black hole (and being infinite for the latter). In particular, the thermalization timescale is proportional to the time that energy spends inside the horizonless object due to propagation and interactions with the bulk. Hence, these observations can be used to quantitatively restrict the dynamical behavior of horizonless objects, without being able to rule out the existence of a physical surface.
Raúl Carballo-Rubio, Francesco Di Filippo, Stefano Liberati, Matt Visser
2023-06-30T08:52:41Z
http://arxiv.org/abs/2306.17480v1
# Constraints on thermalizing surfaces from infrared observations ###### Abstract Infrared observations of Sgr A\({}^{*}\) and M87\({}^{*}\) are incompatible with the assumption that these sources have physical surfaces in thermal equilibrium with their accreting environments. In this paper we discuss a general parametrization of the energy balance in a horizonless object, which permits to quantify how close a horizonless object is in its behavior to a black hole, and analyze the timescale in which its surface can thermalize. We show that the thermalization timescale is unbounded, growing large for objects that mimic closely the behavior of a black hole (and being infinite for the latter). In particular, the thermalization timescale is proportional to the time that energy spends inside the horizonless object due to propagation and interactions with the bulk. Hence, these observations can be used to quantitatively restrict the dynamical behavior of horizonless objects, without being able to discard the existence of a physical surface. + Footnote †: preprint: YITP-22-84 Introduction While black holes have been for a long time a central topic in gravitation theory, the fast-pacing advancements in gravitational-wave detection and very-long-baseline interferometry (VLBI) observations have revived the interest in the possibility of probing the inner structure of these purely gravitational objects. Among the most striking consequences of these developments is the possibility to test deviations from the standard solutions of general relativity describing black holes, which are singular and are therefore expected to be regularized by quantum-gravitational effects. Quite remarkably, the viable resulting geometries endowed with an outer horizon where found to belong to basically two families of solutions [1; 2]. Both these families admit as limiting cases horizonless ultracompact configurations (see [3] for details). Similar static solutions for ultra-compact quasi-black hole configurations can be found in the literature independently from the aforementioned limiting procedure (see e.g. [4; 5; 6; 7; 8; 9] and references therein), and as such they appears to be a rather generic class of black hole mimickers and interesting case study for observational constraints. While there is by now a rich literature concerning the theory, phenomenology and viable constraints in different classes of black hole mimickers (see e.g. [10; 11] for comprehensive reviews on this subject), for what concerns constraints on ultra-compact horizonless objects with a physical surface, a special role has been recently played by VLBI observations of supermassive black holes (Sgr A\({}^{*}\) and M87\({}^{*}\)) [12; 13]. Here, we will focus on complementary arguments that constrain the possible existence of a surface using infrared observations of Sgr A\({}^{*}\) and M87\({}^{*}\)[14; 15; 16; 17; 18]. The arguments in the aforementioned papers were groundbreaking in demonstrating that constraining the existence of a surface was within reach with available data from infrared observations. In particular, these papers indicate that observations are incompatible with a physical surface in thermal equilibrium with its environment. Nonetheless, our understanding of black hole mimickers has strongly advanced in recent times, and it is not clear whether thermal equilibrium is reached within a sufficiently short timescale. 
In what follows we shall show that a more accurate characterization of the physics involved in these exotic objects has a profound impact on the implications of these early analyses, resulting in more complete physical models and thus refined constraints. The present authors have pursued this line of research in previous works, in particular [10] and [19] (see also [20; 21] by other authors). These works have shown that updating the assumptions in [14; 15; 16; 17; 18] can result in sizeable changes in the associated constraints, thus reaffirming the necessity for a critical revision of the underlying assumptions on which the latter are based. We want to stress here that the most critical aspect for the evaluation of these constraints is an adequate parametrization of the energy exchange between the horizonless object and its environment. More specifically, equilibrium requires that incident energy onto the horizonless object is re-emitted, which will generally occur only after a certain re-emission timescale. It is essential to account for this re-emission timescale in analysis that determine whether or not reaching equilibrium is possible. In this work, we study this problem for the first time, building a general parametrization of this energy exchange that includes a temporary absorption coefficient and timescale, and analyze how the equilibrium timescale depends on these parameters. ## II Energy balance in a horizonless object When a black hole is surrounded by matter, all energy that moves across the horizon is absorbed by the black hole, which adjusts dynamically by changing its mass and angular momentum (and possibly electric charge, though this is not particularly relevant in astrophysical situations). Of course, semiclassically black holes can in principle re-emit part of this energy back in the form of Hawking radiation over long times, however for most astrophysical black holes the cosmic microwave background is hot enough to counterbalance this tendency and induces further black hole growth (measured in terms of the horizon area) even in the absence of matter fluxes [22]. For horizonless objects the physics is more complex. In the most general situation, the net absorption associated with a black hole can be replaced by the following channels: 1. **Absorption:** A fraction \(\kappa\) of the incident energy can be permanently absorbed by the internal degrees of freedom of the object, changing the intrinsic state of the latter. 2. **Temporary absorption/Delayed re-emission:** A fraction \(\tilde{\kappa}\) of the incident energy can be re-emitted (inelastically) after a certain amount of time \(\tau_{\tilde{\kappa}}\), with the delay caused by a combination of propagation and interaction effects in the bulk. 3. **Instantaneous re-emission:** A fraction \(\tilde{\Gamma}\) of incident energy can be re-emitted (inelastically) almost instantaneously, after interaction with surface degrees of freedom. 4. **Reflection:** A fraction \(\Gamma\) of incident energy can be reflected (elastically) without being absorbed by the object. 5. **Transmission:** A fraction T of the incident energy can travel freely across the object without any interactions taking place. Conservation of energy implies \(\kappa+\tilde{\kappa}+\tilde{\Gamma}+\Gamma+\text{T}=1\). Note that the coefficient \(\tilde{\kappa}\) can either describe absorption or instantaneous re-emission in the limits \(\tau_{\tilde{\kappa}}\rightarrow\infty\) and \(\tau_{\tilde{\kappa}}\to 0\), respectively. 
Hence, it can be understood as a more physical realization of these two (idealized) channels. In previous work [10], when applying this parametrization to a discrete model of energy exchange, we only considered these idealized channels (also, we implicitly set \(\text{T}\to 0\)), but here we want to go a step further. The specific behavior of a given horizonless object is model-dependent. In fact, our knowledge of the dynamics of these objects is not detailed enough to determine which of the channels above is dominant for a given model. Hence, from a phenomenological perspective it is reasonable to consider all of them as equally possible, and cast constraints on the different parameters involved. Given the above five parameters \(\kappa\), \(\tilde{\kappa}\), \(\tilde{\Gamma}\), \(\Gamma\) and T, respectively introduced for the five items listed above, we can easily see that they are sufficient for characterizing a broad class of horizonless black hole mimickers. An object with only \(\kappa\neq 0\) will be the closest in behavior to a black hole. On the other hand, a horizonless object with only \(\tilde{\kappa}\neq 0\) will behave like a black hole for a certain timescale \(\tau_{\tilde{\kappa}}\) that can be very long depending on the model. The remaining limiting cases, only \(\tilde{\Gamma}\neq 0\) and only \(\Gamma\neq 0\) respectively, display starker deviations with respect to black holes, and could be potentially constrained in VLBI observations [13; 23]. A similar comment applies to objects with only \(\text{T}\neq 0\)[24; 25]. Now that we have introduced our parametrization, let us discuss in the next section previous works that have explored the role of these parameters in infrared and VLBI observations of supermassive black holes. ## III Relation to previous work The parametrization introduced in the previous section aimed at being complete regarding the possible types of interactions between the incoming energy and the horizonless object. Previous works on the subject consider a subset of these behaviors, which we briefly review in the following, together with the reasons behind such choices. Most of the works below assumed spherical symmetry (except when otherwise noted below). Hence, we can introduce an effective radius of the object \(R\), together with a dimensionless measure of compactness, \(\mu=(R-2M)/2M\). * The original works [14; 15; 16; 17; 18] assumed an instantaneous re-emission by the ultra-compact object of the incident radiation, i.e. \(\kappa=\tilde{\kappa}=\Gamma=\rm T=0\), and only \(\tilde{\Gamma}\neq 0\). In the argument provided by the authors this follows from the consideration that, in thermal equilibrium, Kirchhoff's law implies that all energy received by the horizonless object is instantly re-emitted. On general grounds this would imply that, if energy is initially distributed among the other channels for a given model of horizonless object, it is the dynamical evolution towards equilibrium that progressively re-distributes it until the energy balance can be adequately described by \(\kappa=\tilde{\kappa}=\Gamma=\rm T=0\). The authors then showed that thermal equilibrium is incompatible with infrared observations. * A further step was taken in [20; 26] with the analysis of the timescale required for equilibrium to be reached, still under the assumption \(\kappa=\tilde{\kappa}=\Gamma=\rm T=0\) so that only \(\tilde{\Gamma}\neq 0\).
This analysis showed that gravitational lensing plays an important role in attaining thermal equilibrium, a role previously unaccounted for. Indeed, with increasing compactness there is a closing escaping angle \(\Delta\Omega\) for rays leaving the object surface: for \(\mu\ll 1\) one can show that \(\Delta\Omega/2\pi\approx 27\mu/8+O(\mu^{2})\)[10] (in the following, we will define \(\Delta=\Delta\Omega/2\pi\)). In turn this implies that the timescale in which equilibrium is reached must scale at least as \(1/\mu\). Thus, the equilibrium assumption fails to hold for \(\mu\) small enough. This means that the incompatibility between thermal equilibrium and infrared observations can be translated into a constraint on \(\mu\) (or, equivalently, \(R\)). * The timescale to reach equilibrium was re-analyzed in [10], together with the introduction of non-zero coefficients \(\kappa\neq 0\), \(\Gamma\neq 0\) and \(\tilde{\Gamma}\neq 0\) (the coefficient \(\tilde{\kappa}\) was implicitly considered in the general parametrization introduced, but set to zero for the analysis of equilibrium, together with \(\mathrm{T}=0\)). For this more general situation, the constraints are now formulated as (generically nonlinear) combinations of the available parameters. Of particular importance is the absorption coefficient, as the constraints are very sensitive to non-zero values of the latter. * Once rotation is included [27], re-emission is not uniform throughout the surface. This effect increases with spin, and makes previous calculations of the timescale in which equilibrium is reached inapplicable. In particular, the re-emission pattern of equilibrium in the presence of rotation, and the timescale in which this pattern can arise, are unknown. * In [13], an updated account of the original works [14; 15; 16; 17; 18] is provided, also taking into account the aforementioned effect of gravitational lensing [20; 26]. This updated discussion still has \(\kappa=\tilde{\kappa}=\Gamma=\mathrm{T}=0\) and only \(\tilde{\Gamma}\neq 0\), as it focuses on the equilibrium state. That neglecting absorption is questionable was stressed in the follow-up paper [19], which emphasized the profound impact that taking it into account can have on the obtainable constraints. As is apparent from the brief review above, the arguments [14; 15; 16; 17; 18] have generated widespread interest, and further refinements have been published by different groups. A possible point of contention is whether or not a non-zero value of \(\kappa\) is physically reasonable. Let us discuss this in some detail in the next section. For completeness, before focusing on the role of \(\kappa\) and \(\tilde{\kappa}\), we include a list of works that have used (part of) the parametrization above to model VLBI observations of alternatives to black holes. VLBI observations provide complementary constraints to the infrared constraints that are the subject of this paper. The parametrization introduced in [10], which does not include the transmission coefficient, was used in [23] to determine the image features associated with reflection and re-emission, providing an exhaustive exploration of the parameter space spanned by \(R\), \(\Gamma\) and \(\tilde{\Gamma}\). Complementary models in which only a non-zero transmission coefficient \(\mathrm{T}\) was included were the focus of [24; 25]. On the other hand, [13] also discussed the features associated with reflection for specific values of the parameters \(R\) and \(\Gamma\).
## IV The interplay between absorption and thermal equilibrium It is clear that \(\kappa\neq 0\) prevents equilibrium, in the sense of perfect balance between received and instantaneously re-emitted energy, from being reached. The same comment holds true for any of the other channels (delayed re-emission, reflection and transmission) discussed in Sec. II. Indeed, any energy deposited in any of these channels cannot go into the instantaneous re-emission channel, thus always resulting in a deficit in the re-emission channel with respect to the incident energy. Hence, a possible objection is that assuming \(\kappa\neq 0\) is incompatible with equilibrium. Note, however, that this is actually what happens if the central object is a classical black hole. A classical black hole can never be in equilibrium with its accreting environment, due to its purely absorptive nature (\(\kappa=1\)), and the fact that all incident energy is stored in internal degrees of freedom. The same comment applies to semiclassical black holes (\(|\kappa-1|\ll 1\)), as the features of re-emission of energy in the form of Hawking radiation are constrained as a function of the black hole mass, and cannot be arbitrarily adjusted to achieve equilibrium with the accreting environment. Horizonless objects are expected to mimic closely the behavior of black holes. Even though the mimicked behaviors are model-dependent, it is reasonable to expect that at least part of the incident energy will be transferred to internal degrees of freedom, and that not all this energy can be re-emitted in arbitrary amounts to achieve equilibrium. A simple argument in this sense consists in the fact that ultra-compact objects must be able to convert at least some of the incident energy into expansion, so as to avoid forming a trapping horizon as a consequence of the accreted energy [28]. These aspects are certainly dependent on the dynamics of specific models, which is not well understood. It is therefore completely unknown whether it is reasonable to assume that a horizonless object must reach equilibrium with its accreting environment. It may well be that not being able to reach equilibrium with its accreting environment (or not being able to do so on astrophysically relevant timescales) is a feature of horizonless objects. Even if we assume that \(\kappa=0\), there is still the issue that, while all incident energy in this case will be radiated away, the amount of time it takes for the re-emission to take place, \(\tau_{\tilde{\kappa}}\), is unknown and also depends on model-specific dynamics. Again, for a classical black hole this time is infinite, so one can expect that a good black hole mimicker will have a relatively long timescale for delayed re-emission. This leads to the natural question of how this delay in re-emission impacts the achievement of thermal equilibrium, in particular the associated timescale. As the role of absorption \(\kappa\) has been studied in previous papers, we will focus on the role of temporary absorption for the rest of the paper. For the sake of comprehensiveness, we will discuss the energy exchange between a horizonless object and its accreting environment in full generality, and then focus on the situation in which \(\kappa=\tilde{\Gamma}=\Gamma=\text{T}=0\) but \(\tilde{\kappa}\neq 0\), and analyze the amount of time that it takes for a horizonless object to achieve equilibrium with its environment as a function of the timescale of energy release.
We will discuss how this model reproduces the behavior analyzed previously in suitable limits (very short and very long re-emission times, respectively), and the new insights that it provides into the problem of equilibrium. ## V A discrete model of energy exchange In this section, we introduce a discrete model to describe the energy exchange between a general horizonless object and its environment. Let us consider a discretization of time such that we use the set of integers \(\{1,...,n\}\) to denote different moments in time. All time intervals have the same size \(\Delta t\), which we take to be roughly proportional to the light-crossing time \(\tau_{\text{S}}=r_{\text{S}}/c\). We assume that there is a uniform energy injection \(x\) in each interval. Also, \(\{x_{i}\}_{i=1}^{n}\) will be the incident energy (that is, the energy that reaches the object from its environment) at different moments, and \(\{\epsilon_{i}\}_{i=1}^{n}\) the energy released by the object at the same times. In App. A we discuss the energy balance for different time intervals in order to derive recursion relations, while Fig. 1 provides a schematic summary. It follows that we can write the total incident energy \(X_{n}\) and the total escaping energy \(E_{n}\) in the interval \(n\leq N\) as \[X_{n}=\sum_{k=1}^{n}x_{k},\qquad E_{n}=\sum_{k=1}^{n}\epsilon_{k}, \tag{1}\] where \[\epsilon_{k}=\Delta\tilde{\Gamma}x_{k},\qquad 1\leq k\leq N, \tag{2}\] while \[x_{1}=x,\qquad x_{2}=(1-\Delta)\tilde{\Gamma}x_{1}, \tag{3}\] and \[x_{k+1}=[\Gamma+(1-\Delta)\tilde{\Gamma}]x_{k},\qquad 2\leq k\leq N. \tag{4}\] Due to temporary absorption, Eqs. (1)-(4) must be completed with the following modifications for \(n\geq N+1\): \[\epsilon_{k}=\Delta(\tilde{\Gamma}x_{k}+\tilde{\kappa}x_{k-N}),\qquad k\geq N+1, \tag{5}\] and \[x_{k}=[\Gamma+(1-\Delta)\tilde{\Gamma}]x_{k-1}+(1-\Delta)\tilde{\kappa}x_{k-N-1},\qquad k\geq N+1. \tag{6}\] Let us now take a closer look at the physics in these recursion relations. Figure 1: Schematic proof of Eqs. (7) and (8), with time in the vertical direction. The quantities \(\epsilon_{k}\) and \(x_{k}\) are the released energy and the incident energy in the interval \(k\), respectively. These quantities can be related to each other and also to the corresponding quantities in the previous time interval \(k-1\), as shown in the figure (see also App. A for a complementary discussion). ## VI Role of temporary absorption The recursion relations discussed in the previous section allow us to study general situations for arbitrary values of the five parameters \(\kappa\), \(\tilde{\kappa}\), \(\tilde{\Gamma}\), \(\Gamma\) and \(\mathrm{T}\). However, as we want to understand the role played by temporary absorption in the achievement of thermal equilibrium, we will focus here on the simplified case in which only \(\kappa\) and \(\tilde{\kappa}\) are non-zero. The recursion relations are then reduced to \[\epsilon_{k}=\Delta[(1-\kappa-\tilde{\kappa})x_{k}+\tilde{\kappa}x_{k-N}], \qquad k\geq N+1, \tag{7}\] and \[x_{k}=(1-\Delta)(1-\kappa-\tilde{\kappa})x_{k-1}+(1-\Delta)\tilde{\kappa}x_{k-N-1},\qquad k\geq N+1. \tag{8}\] These expressions cannot be summed analytically. Numerical evaluation is always possible, though we have also been able to find an analytical approximation as discussed in the following.
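As an illustration of the numerical evaluation mentioned above, the following is a minimal sketch of the discrete recursions in Eqs. (1)-(6); all parameter values are illustrative and are not taken from the paper. Permanently absorbed (\(\kappa\)) and transmitted (T) energy never returns to the surface in these recursions, so those two channels enter only through the normalization \(\kappa+\tilde{\kappa}+\tilde{\Gamma}+\Gamma+\text{T}=1\).

```python
# Minimal numerical sketch of the discrete energy-exchange recursions, Eqs. (1)-(6).
# Parameter values are illustrative only.

def outgoing_flux(n_steps, Delta, Gamma_t, Gamma, kappa_t, N, x=1.0):
    """Return E_n / x (escaping over injected energy per interval) for n = 1..n_steps."""
    xs = [0.0] * (n_steps + 1)     # xs[k] = x_k (1-indexed); terms with index <= 0 are treated as zero
    flux, E = [], 0.0
    for k in range(1, n_steps + 1):
        if k == 1:
            xs[k] = x                                             # Eq. (3)
        elif k == 2:
            xs[k] = (1 - Delta) * Gamma_t * xs[1]                 # Eq. (3)
        else:
            xs[k] = (Gamma + (1 - Delta) * Gamma_t) * xs[k - 1]   # Eqs. (4)/(6)
            if k - N - 1 >= 1:                                    # delayed re-emission returns
                xs[k] += (1 - Delta) * kappa_t * xs[k - N - 1]    # Eq. (6)
        eps_k = Delta * Gamma_t * xs[k]                           # Eqs. (2)/(5)
        if k - N >= 1:
            eps_k += Delta * kappa_t * xs[k - N]                  # Eq. (5)
        E += eps_k
        flux.append(E / x)
    return flux

# Mostly temporary absorption with a long storage time: the outgoing flux creeps
# towards its steady-state value (here 1, since nothing is permanently absorbed).
print(outgoing_flux(n_steps=50000, Delta=1e-2, Gamma_t=0.2, Gamma=0.0,
                    kappa_t=0.8, N=100)[-1])
```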
For the purposes of finding a suitable analytical approximation, let us consider for a moment \(\tilde{\kappa}=0\) (no temporary absorption), for which the recursion relation can be summed analytically leading to [10; 19]: \[\frac{\dot{E}\left(\tilde{\kappa}=0\right)}{\dot{M}}=\frac{\Delta(1-\kappa)}{\kappa+\Delta(1-\kappa)}\left\{1-(1-\kappa)^{t/\tau_{S}}\left(1-\Delta\right)^{t/\tau_{S}}\right\}\,. \tag{9}\] From this expression, it follows that for timescales longer than \[t\gtrsim\min\left(\kappa^{-1},\Delta^{-1}\right)\tau_{S}, \tag{10}\] the outgoing flux reaches a steady state: \[\left.\frac{\dot{E}\left(\tilde{\kappa}=0\right)}{\dot{M}}\right|_{\mathrm{steady}}\simeq\frac{\Delta(1-\kappa)}{\kappa+\Delta(1-\kappa)}\,. \tag{11}\] Let us now come back to the case in which \(\tilde{\kappa}\neq 0\). Delayed re-emission introduces a delay between two successive bounces on the surface for a fraction of the energy. It is then reasonable to conjecture that considering the expression without delayed re-emission, and replacing the timescale \(\tau_{\rm S}\) with the average timescale between two consecutive bounces for the fraction of energy that eventually escapes the gravitational field, could provide a good analytical approximation. A fraction \(\tilde{\kappa}\) of the energy takes a time \(\tau_{S}+\tau_{\tilde{\kappa}}=(N+1)\tau_{\rm S}\) between two consecutive bounces, whereas the remaining energy takes a time \(\tau_{\rm S}\). Therefore, the average time is given by \[\bar{\tau}=\tilde{\kappa}(N+1)\tau_{\rm S}+(1-\tilde{\kappa})\tau_{\rm S}=(\tilde{\kappa}N+1)\tau_{\rm S}\,, \tag{12}\] and we can make the following analytical guess: \[\frac{\dot{E}_{\rm guess}}{\dot{M}}=\frac{\Delta(1-\kappa)}{\kappa+\Delta(1-\kappa)}\left\{1-(1-\kappa)^{t/\bar{\tau}}\,(1-\Delta)^{t/\bar{\tau}}\right\}. \tag{13}\] It is straightforward to check numerically whether this provides a good approximation for the flux of energy; Fig. 2 shows that this is indeed the case. We can therefore use Eq. (13) as a very good approximation of the outgoing flux of energy. From this result, we can infer that the presence of delayed re-emission does not alter the asymptotic value of the energy flux once the steady state is achieved; rather, it prolongs the time it takes to reach the steady state. In fact, the steady state is now reached for timescales \[t\gtrsim\min\left(\kappa^{-1},\Delta^{-1}\right)(\tilde{\kappa}N+1)\tau_{\rm S}\,. \tag{14}\] We can see that \(N\) plays an important role in the thermalization timescale. While \(\tilde{\kappa}\) is by construction bounded from above by 1, \(N\) can be unbounded. In fact, taking the limit \(N\to\infty\) recovers the behavior of a black hole, which means that larger values of \(N\) yield better black hole mimickers. Moreover, even in the absence of permanent absorption, the presence of temporary absorption can significantly weaken the constraint. For instance, in the case of Sgr A*, if we assume \(\kappa=\tilde{\kappa}=0\) the observational constraint [13] \[\frac{\dot{E}}{\dot{M}}<10^{-3}\,, \tag{15}\] (note that [13] provides a tighter constraint of \(10^{-3}\) instead of the \(10^{-2}\) in the original paper [14]) implies \[\frac{\dot{E}}{\dot{M}}\simeq\mu\frac{T}{\tau_{S}}<10^{-3}\,, \tag{16}\] where we have used the Eddington timescale \(T\simeq 3.8\times 10^{8}\) yr to provide an estimation of the typical timescale for the variation of its accretion rate. Note that the argument above requires the stationarity of the source in order to be strictly applicable.
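For orientation, the order of magnitude of the compactness bound that follows from Eq. (16) can be checked with a few lines of arithmetic. The mass assumed below for Sgr A\({}^{*}\) (\(M\sim 4\times 10^{6}\,M_{\odot}\)) is an illustrative value not taken from the paper, so the result should only be read as an order-of-magnitude estimate.

```python
# Back-of-the-envelope check of the numbers entering Eqs. (15)-(16),
# assuming M ~ 4e6 solar masses for Sgr A* (illustrative value).
G, c = 6.674e-11, 2.998e8            # SI units
M_sun, yr = 1.989e30, 3.156e7
M = 4.0e6 * M_sun
tau_S = 2 * G * M / c**3             # light-crossing time r_S / c, roughly 40 s
T = 3.8e8 * yr                       # Eddington timescale quoted in the text
mu_max = 1e-3 * tau_S / T            # rearranging  mu * T / tau_S < 1e-3
print(f"tau_S ~ {tau_S:.0f} s, mu < {mu_max:.1e}")   # of order 1e-18
```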
Source variability can disrupt equilibrium or delay its onset in a way that is difficult to estimate using the formalism above. In some cases (e.g. [26]), the Hubble time is used instead of the Eddington timescale, which changes the bounds below but can also lead to an overestimation of these constraints due to the non-equilibrium nature of the source. Another aspect to take into account is that the accretion rate used in the equations above is also changing in time and likely higher in the past. Hence, there is some ambiguity on the precise numerical values of these constraints; a definitive solution for these ambiguities would require a more thorough understanding of the evolution of the coupled system composed by the horizonless central object and its accreting environment. Equation (16) implies \[\mu<10^{-18}\,. \tag{17}\] Figure 2: The left panels show the numerical evaluation of the outgoing flux of energy, while the right panels show the difference \(\Delta\dot{E}/M\) between the numerical evaluation and the analytical guess given in Eq. (13). The top panels are obtained fixing \(\Delta=10^{-6}\), \(\tilde{\kappa}=10^{-2}\), \(N=10^{3}\) and varying \(\kappa\). The bottom panels are obtained fixing \(\Delta=10^{-6}\), \(\kappa=10^{-6}\), \(N=10^{2}\) and varying \(\tilde{\kappa}\). On the other hand, when \(\tilde{\kappa}\neq 0\), we get \[\mu<\frac{T}{(\tilde{\kappa}N+1)\tau_{\rm S}}\,, \tag{18}\] This constraint can be much weaker than the one given in Eq. (16) for \(N\) large enough. A fundamental question to answer is therefore the value that \(N\) typically takes for specific models such as gravastars [4; 5; 6] or semiclassical relativistic stars [7; 8; 9]. Unfortunately, the dynamics of these models is not yet understood well enough to extract the value of \(N\). Nevertheless, it is possible to illustrate that \(N\) can become very large for black hole mimickers, due to gravitational time delay associated with propagation effects. Let us consider a very simple toy model, constructed in spherical symmetry by demanding that the Misner-Sharp-Hernandez mass [29; 30] for each sphere is an \(\epsilon\) away from its critical value which would yield the formation of a horizon [31]. The interior of such a stellar structure [32; 33; 34] is approximately described by the metric \[{\rm d}s^{2}=-\epsilon{\rm d}t^{2}+\frac{1}{\epsilon}{\rm d}r^{2}+r^{2}{\rm d }\Omega^{2}, \tag{19}\] where \({\rm d}\Omega^{2}\) is the usual line element on the unit 2-sphere. The re-emission timescale for incident energy can be split as the sum of the time of propagation inside the structure, plus the interaction with the latter. From Eq. (19), we can see that just propagation effects imply for this model that \[N\gtrsim\frac{1}{\epsilon}. \tag{20}\] For \(\epsilon\ll 1\), we then have \[\bar{\tau}\sim\frac{\tilde{\kappa}}{\epsilon}\tau_{\rm S}\simeq\frac{10^{-22} \tilde{\kappa}}{\epsilon}\left(\frac{M}{M_{\odot}}\right)\tau_{\rm H}, \tag{21}\] where \(\tau_{\rm H}\) is the Hubble time and \(M_{\odot}\) the mass of the Sun. Hence, it is not difficult to have ultracompact objects for which the thermalization timescale becomes comparable or even larger than the Hubble time, which means that thermalization is not possible for these objects in practice if \(\epsilon\lesssim 10^{-22}(M/M_{\odot})\tilde{\kappa}\). Let us stress that using the Hubble time is a conservative estimate, and a more realistic estimate would be provided by the variability timescale for a particular astrophysical system (e.g. 
Sgr A\({}^{*}\)), which should be several orders of magnitude lower than \(\tau_{\rm H}\). ## VII Conclusions Modeling the interactions between horizonless objects and their accreting environments is essential to cast constraints on these alternatives to black holes. In this paper, we have presented a general parametrization of these interactions, and focused on understanding the role that temporary absorption plays in reaching a steady state. Temporary absorption is necessary for the horizonless object to adapt dynamically to its environment and eventually be able to reach a steady state. This is in particular necessary to avoid the formation of horizons. Hence, a non-zero value of \(\tilde{\kappa}\) seems to be unavoidable based on known physics. The second parameter necessary to describe temporary absorption is the re-emission timescale, which we have parametrized in terms of \(N\). We have shown that this parameter has an important impact on the thermalization timescale and that it can become arbitrarily large, preventing thermalization from happening altogether for relatively compact horizonless objects. In summary, the fact that equilibrium is not observed in systems such as Sgr A\({}^{*}\) and M87\({}^{*}\) can be used to place constraints on horizonless objects, ruling out models in which the thermalization timescale is short enough that expecting equilibrium is reasonable. However, we have shown that simple arguments indicate that ultracompact objects would have thermalization timescales that are too long for equilibrium to be feasible in our universe. Hence, it is possible that supermassive horizonless objects, not in equilibrium with their accreting environments, exist in nature. ## Appendix A Discretized energy exchange Let us discuss in detail the different components at play in the energy exchange between a horizonless object and its environment, within the discretized model used in the paper. This provides a derivation of the recursion relations in Sec. V. For the first interval, the energy balance is as follows: * There is an injection of energy \(x\) onto the horizonless object. * The total amount of incident energy is \(x_{1}=x\). * A fraction of energy \(\kappa x_{1}\) is permanently absorbed by the object. * A fraction \(\tilde{\kappa}x_{1}\) is temporarily absorbed by the object, and will be re-emitted after a time \(\tau_{\tilde{\kappa}}=N\tau_{\mathrm{S}}\). * A fraction \(\tilde{\Gamma}x_{1}\) is re-emitted instantaneously. From this fraction, an amount \(\Delta\tilde{\Gamma}x_{1}\), where \(\Delta=\Delta\Omega/2\pi\), escapes the gravitational well of the object, while the remaining amount \((1-\Delta)\tilde{\Gamma}x_{1}\) is gravitationally lensed back to the object. Let us define \(\epsilon_{1}=\Delta\tilde{\Gamma}x_{1}\). * A fraction \(\Gamma x_{1}\) is reflected, escaping the gravitational well of the object. * A fraction \(\mathrm{T}x_{1}\) travels across the object without interaction. For the second interval: * There is an injection of energy \(x\) onto the horizonless object. * A fraction of energy \((1-\Delta)\tilde{\Gamma}x_{1}\) returns to the surface after being re-emitted in the first interval. * The total amount of incident energy is \(x_{1}+x_{2}\), where \(x_{2}=(1-\Delta)\tilde{\Gamma}x_{1}\). * A fraction of energy \(\kappa(x_{1}+x_{2})\) is permanently absorbed by the object. * A fraction \(\tilde{\kappa}(x_{1}+x_{2})\) is temporarily absorbed by the object, and will be re-emitted after a time \(N\tau_{\mathrm{S}}\).
* A fraction \(\tilde{\Gamma}(x_{1}+x_{2})\) is re-emitted instantaneously. From this fraction, an amount \(\Delta\tilde{\Gamma}(x_{1}+x_{2})\) escapes the gravitational well of the object, while the remaining amount \((1-\Delta)\tilde{\Gamma}(x_{1}+x_{2})\) is gravitationally lensed back to the object. Let us define \(\epsilon_{2}=\Delta\tilde{\Gamma}x_{2}=(1-\Delta)\tilde{\Gamma}\epsilon_{1}\), so that the total energy that escapes is \(\epsilon_{1}+\epsilon_{2}\). * A fraction \(\Gamma(x_{1}+x_{2})\) is reflected. From this fraction, an amount \(\Gamma x_{1}\) escapes the gravitational well of the object, while the remaining amount \(\Gamma x_{2}\) is gravitationally lensed back to the object. * A fraction \(\mathrm{T}(x_{1}+x_{2})\) travels across the object without interaction. For the third interval: * There is an injection of energy \(x\) onto the horizonless object. * A fraction of energy \((1-\Delta)\tilde{\Gamma}(x_{1}+x_{2})=x_{2}+(1-\Delta)\tilde{\Gamma}x_{2}\) returns to the surface after being re-emitted in the previous interval. * A fraction of energy \(\Gamma x_{2}\) returns to the surface after being reflected in the previous interval. * The total amount of incident energy is \(x_{1}+x_{2}+x_{3}\), where \(x_{3}=[\Gamma+(1-\Delta)\tilde{\Gamma}]x_{2}\). * A fraction of energy \(\kappa(x_{1}+x_{2}+x_{3})\) is permanently absorbed by the object. * A fraction \(\tilde{\kappa}(x_{1}+x_{2}+x_{3})\) is temporarily absorbed by the object, and will be re-emitted after a time \(N\tau_{\rm S}\). * A fraction \(\tilde{\Gamma}(x_{1}+x_{2}+x_{3})\) is re-emitted instantaneously. From this fraction, an amount \(\Delta\tilde{\Gamma}(x_{1}+x_{2}+x_{3})\) escapes the gravitational well of the object, while the remaining amount \((1-\Delta)\tilde{\Gamma}(x_{1}+x_{2}+x_{3})\) is gravitationally lensed back to the object. Let us define \(\epsilon_{3}=\Delta\tilde{\Gamma}x_{3}=(1-\Delta)\tilde{\Gamma}\epsilon_{2}\), so that the total energy that escapes is \(\epsilon_{1}+\epsilon_{2}+\epsilon_{3}\). * A fraction \(\Gamma(x_{1}+x_{2}+x_{3})\) is reflected. From this fraction, an amount \(\Gamma x_{1}\) escapes the gravitational well of the object, while the remaining amount \(\Gamma(x_{2}+x_{3})\) is gravitationally lensed back to the object. * A fraction \({\rm T}(x_{1}+x_{2}+x_{3})\) travels across the object without interaction. For the \((N+1)-\)interval: * There is an injection of energy \(x\) onto the horizonless object. * A fraction of energy \((1-\Delta)\tilde{\Gamma}\sum_{k=1}^{N}x_{n}\) returns to the surface after being re-emitted in the previous interval. * A fraction of energy \(\Gamma\sum_{k=2}^{N}x_{n}\) returns to the surface after being reflected in the previous interval. * The total amount of incident energy is \(X_{N+1}=\sum_{k=1}^{N+1}x_{k}\), where \(x_{k}=[\Gamma+(1-\Delta)\tilde{\Gamma}]x_{k-1}\). * A fraction of energy \(\kappa X_{N+1}\) is permanently absorbed by the object. * A fraction \(\tilde{\kappa}X_{N+1}\) is temporarily absorbed by the object, and will be re-emitted after a time \(N\tau_{\rm S}\). * A fraction \(\tilde{\kappa}x_{1}\) is re-emitted after being temporarily absorbed by the object, while a fraction \(\tilde{\Gamma}X_{N+1}\) is re-emitted instantaneously. From these fractions, an amount \(\Delta(\tilde{\Gamma}X_{N+1}+\tilde{\kappa}x_{1})\) escapes the gravitational well of the object, while the remaining amount \((1-\Delta)(\tilde{\Gamma}X_{N+1}+\tilde{\kappa}x_{1})\) is gravitationally lensed back to the object. 
Let us define \(\epsilon_{N+1}=\Delta(\tilde{\Gamma}x_{N+1}+\tilde{\kappa}x_{1})\), so that the total energy that escapes is \(E_{N+1}=\sum_{k=1}^{N+1}\epsilon_{k}\). * A fraction \(\Gamma X_{N+1}\) is reflected. From this fraction, an amount \(\Gamma x_{1}\) escapes the gravitational well of the object, while the remaining amount \(\Gamma\sum_{k=2}^{N+1}x_{n}\) is gravitationally lensed back to the object. * A fraction T\(X_{N+1}\) travels across the object without interaction. For the \((N+2)-\)interval: * There is an injection of energy \(x\) onto the horizonless object. * A fraction of energy \((1-\Delta)\tilde{\Gamma}\sum_{k=1}^{N+1}x_{k}\) returns to the surface after being re-emitted in the previous interval. * A fraction of energy \(\Gamma\sum_{k=2}^{N+1}x_{k}\) returns to the surface after being reflected in the previous interval. * The total amount of incident energy is \(X_{N+2}=\sum_{k=1}^{N+2}x_{k}\), where \(x_{k}=[\Gamma+(1-\Delta)\tilde{\Gamma}]x_{k-1}\) for \(k\leq N+1\) and \(x_{N+2}=[\Gamma+(1-\Delta)\tilde{\Gamma}]x_{N+1}+(1-\Delta)\tilde{\kappa}x_{1}\). * A fraction of energy \(\kappa X_{N+2}\) is permanently absorbed by the object. * A fraction \(\tilde{\kappa}X_{N+2}\) is temporarily absorbed by the object, and will be re-emitted after a time \(N\tau_{\rm S}\). * A fraction \(\tilde{\kappa}x_{2}\) is re-emitted after being temporarily absorbed by the object, while a fraction \(\tilde{\Gamma}X_{N+2}\) is re-emitted instantaneously. From these fractions, an amount \(\Delta(\tilde{\Gamma}X_{N+2}+\tilde{\kappa}x_{2})\) escapes the gravitational well of the object, while the remaining amount \((1-\Delta)(\tilde{\Gamma}X_{N+2}+\tilde{\kappa}x_{2})\) is gravitationally lensed back to the object. Let us define \(\epsilon_{N+2}=\Delta(\tilde{\Gamma}x_{N+2}+\tilde{\kappa}x_{2})\), so that the total energy that escapes is \(E_{N+2}=\sum_{k=1}^{N+2}\epsilon_{k}\). ###### Acknowledgements. The authors are grateful to Ramesh Narayan for valuable comments on improving this paper and for previous communications on the subject. RCR acknowledges financial support through a research grant (29405) from VILLUM fonden. FDF acknowledges financial support by Japan Society for the Promotion of Science Grants-in-Aid for international research fellow No. 21P21318. SL acknowledges funding from the Italian Ministry of Education and Scientific Research (MIUR) under the grant PRIN MIUR 2017-MB8AEZ. MV was supported by the Marsden Fund, via a grant administered by the Royal Society of New Zealand.
2309.08325
Distributional Inclusion Hypothesis and Quantifications: Probing for Hypernymy in Functional Distributional Semantics
Functional Distributional Semantics (FDS) models the meaning of words by truth-conditional functions. This provides a natural representation for hypernymy but no guarantee that it can be learnt when FDS models are trained on a corpus. In this paper, we probe into FDS models and study the representations learnt, drawing connections between quantifications, the Distributional Inclusion Hypothesis (DIH), and the variational-autoencoding objective of FDS model training. Using synthetic data sets, we reveal that FDS models learn hypernymy on a restricted class of corpus that strictly follows the DIH. We further introduce a training objective that both enables hypernymy learning under the reverse of the DIH and improves hypernymy detection from real corpora.
Chun Hei Lo, Wai Lam, Hong Cheng, Guy Emerson
2023-09-15T11:28:52Z
http://arxiv.org/abs/2309.08325v2
Distributional Inclusion Hypothesis and Quantifications: Probing Hypernymy in Functional Distributional Semantics ###### Abstract Functional Distributional Semantics (FDS) models the meaning of words by truth-conditional functions. This provides a natural representation for hypernymy, but no guarantee that it is learnt when FDS models are trained on a corpus. We demonstrate that FDS models learn hypernymy when a corpus strictly follows the Distributional Inclusion Hypothesis. We further introduce a training objective that allows FDS to handle simple universal quantifications, thus enabling hypernymy learning under the reverse of DIH. Experimental results on both synthetic and real data sets confirm our hypotheses and the effectiveness of our proposed objective. ## 1 Introduction Functional Distributional Semantics (FDS; Emerson and Copestake, 2016; Emerson, 2018) suggests that the meaning of a word can be modelled as a truth-conditional function, whose parameters can be learnt using the distributional information in a corpus (Emerson, 2020; Lo et al., 2023). Aligning with truth-conditional semantics, functional representations of words are linguistically and logically more rigorous than vectors (e.g., Mikolov et al., 2013; Pennington et al., 2014; Levy and Goldberg, 2014; Czarnowska et al., 2019) and distributions (e.g., Vilnis and McCallum, 2015; Brazinskas et al., 2018) as concepts are separated from their referents (for a discussion, see: Emerson, 2020, 2023). On top of its theoretical favour, Lo et al. (2023) also demonstrated FDS models in action and showed that they are very competitive in the semantic tasks of semantic composition and verb disambiguation. Hypernymy, formally defined as the subsumption of extensions between two word senses, can be modelled with truth-conditional functions. Although FDS provides the tools for hypernymy, it is not obvious whether hypernymy can be learnt by merely training an FDS model on a corpus, and if yes, on what corpus and how it is learnt. To acquire hypernymy automatically from a corpus, one way is through the use of distributional information in a corpus. In this class of methods, hypernymy is learnt in an unsupervised manner given certain hypotheses about the distributional properties of the corpus. One such hypothesis is the Distributional Inclusion Hypothesis (DIH) (Weeds et al., 2004; Geffet and Dagan, 2005), which states that the meaning of a word \(r_{1}\) entails another word \(r_{2}\) if and only if all the typical contexts (features) of \(r_{1}\) occur also with \(r_{2}\). In this work, we show that FDS models learn hypernymy when trained on a restricted class of corpus that follows DIH. In this paper, we first give a brief introduction to FDS in SS2 and its connection to truth-conditional semantics. Then, we describe how hypernymy can be represented in FDS in SS3. In SS4, we discuss how existential and universal quantifications support or undermine the Distributional Inclusion Hypothesis (DIH), how FDS can handle both quantifications, and how FDS models can learn hypernymy under the DIH, and the reverse of it when equipped with a new training objective. Finally, we present experimental results of applying FDS models on both synthetic and real data sets in SS5 and SS6 respectively. ## 2 Functional Distributional Semantics Model-theoretic semantics sees meaning in terms of an extensional model structure, which consists of a set of atomic _entities_, and a set of _predicates_, each of which is true or false of the entities. 
In parallel, Functional Distributional Semantics (FDS) represents an entity by a _pixie_, and predicate by a truth-conditional _semantic function_ that takes pixie(s) as input and returns the probability of truth. In this paper, we follow the implementation of FDS by Lo et al. (2023), which was more scalable and performant over previous models. We briefly describe it here. ### Probabilistic Graphical Models The framework is formalized in terms of a family of probabilistic graphical models. Each of them describes the generative process of predicates in the semantic graph of a sentence. Fig. 1 illustrates the process of generating the words given the argument structure \(R_{1}\xleftarrow{\text{ARG1}}R_{2}\xlongrightarrow{\text{ARG2}}R_{3}\). First, a pixie \(Z_{j}\in\mathbb{R}^{d}\) is generated for each node in the graph, together representing the entities described by the sentence. Then, for each pixie \(Z_{j}\), a truth value \(T_{Z_{j}}^{(r_{i},0)}\) is generated for each predicate \(r_{i}\) in the vocabulary \(\mathcal{V}\); and for each pair of nodes connected as \(R_{j}\xlongrightarrow{\text{ARG}}R_{k}\) whose corresponding pixies are \(Z_{j}\) and \(Z_{k}\), a truth value \(T_{Z_{j},Z_{k}}^{(r_{i},a)}\) is generated for each predicate \(r_{i}\) in the vocabulary. Finally, a single predicate \(R_{j}\) is generated for each pixie \(Z_{j}\) conditioned on the truth values. ### Semantic Functions Instead of treating a predicate as an indicator function, FDS models the probability that it is true of the pixie(s) with unary and binary _semantic functions_: \[P\left(T_{Z_{j}}^{(r_{i},0)}{=}\top\Big{|}\,z_{j}\right) =t^{(r_{i},0)}(z_{j}) \tag{1}\] \[P\left(T_{Z_{j},Z_{k}}^{(r_{i},a)}{=}\top\Big{|}\,z_{j},z_{k} \right) =t^{(r_{i},a)}(z_{j},z_{k}) \tag{2}\] The functions are implemented as linear classifiers: \[t^{(r_{i},0)}(z_{j})=S\left({v^{(r_{i},0)}}^{\top}z_{j}+b^{(r_{i},0)}\right) \tag{3}\] \[t^{(r_{i},a)}(z_{j},z_{k})=\] (4) \[S\left({v_{1}^{(r_{i},a)}}^{\top}z_{j}+{v_{2}^{(r_{i},a)}}^{\top }z_{k}+b^{(r_{i},a)}\right)\] where \(S\) denotes the sigmoid function. ### Model Training FDS models are trained on graphs of Dependency Minimal Recursion Semantics (DMRS; Copestake et al., 2005; Copestake, 2009), which are derived using the English Resource Grammar (ERG; Flickinger, 2000, 2011). Quantifiers and scopal information are removed from the graphs before training, leaving us with just the predicate-argument structure expressed by a sentence. 
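To make the form of the semantic functions in Eqs. (3) and (4) concrete, the following is a minimal sketch of the linear classifiers they describe. The parameter names, the pixie dimensionality and the random values are illustrative assumptions; the actual implementation of Lo et al. (2023) stores and trains these parameters differently.

```python
# A minimal sketch of the unary and binary semantic functions in Eqs. (3)-(4).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def t_unary(z, v, b):
    """Probability that unary predicate r is true of pixie z, Eq. (3)."""
    return sigmoid(v @ z + b)

def t_binary(z1, z2, v1, v2, b):
    """Probability that predicate r with argument role a is true of (z1, z2), Eq. (4)."""
    return sigmoid(v1 @ z1 + v2 @ z2 + b)

d = 40                                   # pixie dimensionality (illustrative)
rng = np.random.default_rng(0)
z = rng.normal(size=d)
print(t_unary(z, rng.normal(size=d), 0.0))
```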
Given an observed DMRS graph \(G\) with \(n\) pixies \(Z_{1}\ldots Z_{n}\), model parameters are optimized in an unsupervised manner to maximize (5), which is reformulated from the \(\beta\)-VAE (Higgins et al., 2017): \[\begin{split}\mathcal{L}=&\sum_{i=1}^{n}\mathcal{C} _{i}+\sum_{r_{i}\xlongrightarrow{\text{ARG}[a]}r_{j}\text{in}\,G}\mathcal{C} _{i,j,a}\\ &-\frac{d}{2}\sum_{i=1}^{n}\beta_{1}\mu_{Z_{i}}^{2}+\beta_{2} \left(\sigma_{Z_{i}}^{2}-\ln\sigma_{Z_{i}}^{2}\right)\end{split} \tag{5}\] where the last term is the regularization term on the means and variances of the inferred pixies distributions, and the first two terms aim to maximize the truthness of observed predicates and the falsehood of the negatively sampled ones \(r^{\prime}\) over the inferred pixie distribution \(q_{\phi}\), by \[\begin{split}\mathcal{C}_{i}&=\ln\mathbb{E}_{q_{ \phi}}\left[t^{(r_{i},0)}(z_{i})\right]\\ &+\sum_{r^{\prime}\in N(i)}\ln\mathbb{E}_{q_{\phi}}\left[1-t^{(r ^{\prime},0)}(z_{i})\right]\end{split} \tag{6}\] \[\begin{split}\mathcal{C}_{i,j,a}&=\ln\mathbb{E}_{q _{\phi}}\left[t^{(r_{i},a)}(z_{i},z_{j})\right]\\ &+\sum_{r^{\prime}\in N(i)}\ln\mathbb{E}_{q_{\phi}}\left[1-t^{(r^ {\prime},a)}(z_{i},z_{j})\right]\end{split} \tag{7}\] The approximate posterior distribution \(q_{\phi}\) is taken to be \(n\) spherical Gaussian distributions, each with mean \(\mu_{Z_{i}}\) and covariance \(\sigma_{Z_{i}}^{2}I\). Such distributions are inferred from both the local predicate-argument structure of each predicate and global topical information in the graph. For instance, the approximate posterior distribution of the pixie \(Z_{1}\) of _postman_ in Fig. 1 is inferred from the direct argument information, \(\xlongrightarrow{\text{ARG1}}\)_deliver_, and the indirect topical predicate, _mail_. This inference method plays an important role in hypernymy learning as we will discuss in SS4.2. Figure 1: Probabilistic graphical model of FDS for generating the words in ‘_postman deliver mail_’. Only \(R_{1}=\textit{postman}\), \(R_{2}=\textit{deliver}\), \(R_{3}=\textit{mail}\) are observed. ## 3 Representing Hypernymy in FDS In truth-conditional semantics, for a set of entities \(D\), \(r_{H}\) is a hypernym of \(r_{h}\) if and only if \[\forall x\in D\colon r_{h}(x)\Longrightarrow r_{H}(x) \tag{8}\] Although FDS provides truth-conditional interpretations of words, it is not straightforward to define hypernymy in FDS where predicates are probabilistic and work over high-dimensional pixies. One way is to translate (8) to probabilistic counterpart for a score on hypernymy \(P\left(T_{Z}^{(r_{H},0)}=\top\left|\,T_{Z}^{(r_{h},0)}=\top\right.\right)\). However, this conditional probability is unavailable since only \(P\left(T_{Z}^{(r_{H},0)}=\top\left|\,z\right.\right)\) and \(P\left(T_{Z}^{(r_{h},0)}=\top\left|\,z\right.\right)\) are modelled by FDS. Another way is to interpret the probability model from a fuzzy set perspective and use fuzzy set containment [22] for representing hypernymy: \[\forall z\colon t^{(r_{H},0)}(z)>t^{(r_{h},0)}(z) \tag{9}\] Note that if we consider all \(z\in\mathbb{R}^{d}\), (9) can only be true when \(v^{(r_{h},0)}=kv^{(r_{H},0)}\) where \(k\neq 0\), which is impossible to be obtained in practice from model training. Therefore, we restrict the pixie space and only consider pixies in a unit hypersphere or hypercube to be meaningful. With (3) and (4), \(r_{H}\) is considered the hypernym of \(r_{h}\) if and only if \(s(r_{h},r_{H})>0\) in (10), where \(p\in\{1,2\}\) (derivations in Appendix A). 
\[\begin{split} s(r_{h},r_{H})&=b^{(r_{H},0)}-b^{(r_{h},0)}\\ &\quad-\left\|v^{(r_{H},0)}-v^{(r_{h},0)}\right\|_{p}\end{split} \tag{10}\] Note that the transitivity of (8) is paralleled: \[\begin{split} s(r_{1},r_{2})&>0\wedge s(r_{2},r_{3})>0\\ &\quad\quad\quad\quad\quad\Longrightarrow\,s(r_{1},r_{3})>0\end{split} \tag{11}\] ## 4 Learning under (Reverse of) Distributional Inclusion Hypothesis Given the power of representing hypernymy in FDS, we explore in this section whether hypernymy can be learnt by FDS models from just text, and if yes, how. ### Quantifications and Distributional Inclusion Hypothesis In this section, we revisit the Distributional Inclusion Hypothesis (DIH) and explain how quantifications support or undermine the hypothesis. DIH asserts that the typical characteristic features of \(r_{h}\) are expected to appear with \(r_{H}\) if and only if \(r_{H}\) is a hypernym of \(r_{h}\). Geffet and Dagan (2005) consider syntax-based contexts. We suggest that semantics-based ones are more suitable, since syntactic differences do not necessarily correspond to semantic ones, e.g., passivization. Consider the simple hierarchy in Fig. 2. Table 1 shows the sentences that are true with respect to the hierarchy. It can be seen that DIH applies to Corpus 1. For example, the set of contexts of dog (\(\{\xleftarrow{\texttt{ARG1}}\;\;\textit{bark}\}\)) is a subset of those of mammal (\(\{\xleftarrow{\texttt{ARG1}}\;\;\textit{bark},\xleftarrow{\texttt{ARG1}}\;\;\textit{fly},\xleftarrow{\texttt{ARG1}}\;\;\textit{furry}\}\)). However, substituting existential with universal quantification results in the reverse of DIH (rDIH) in Corpus 2, where the set of contexts of _mammal_ then becomes a subset of that of _dog_. With this simple example, we explain how methods that rely on DIH as a cue for hypernymy can be undermined. We do not further discuss more complex sentence structures, because their entailment conditions are no longer trivial and such a corpus would not strictly align with DIH or rDIH. For instance, with a restricted relative clause, _every dog that is trained is gentle_ does not entail _every Chihuahua is gentle_. Therefore, _Chihuahua_ may not appear in the context of \(\{\xleftarrow{\texttt{ARG1}}\;\;\textit{gentle}\}\) even if _Chihuahua_ is a hyponym of _dog_. \begin{table} \begin{tabular}{l l} \hline \hline **Corpus 1 (DIH)** & \multicolumn{1}{c}{**Corpus 2 (rDIH)**} \\ \hline _a dog barks_ & _every dog barks_ \\ _a mammal barks_ & _every dog is furry_ \\ _an animal barks_ & _every dog grows_ \\ _a bat flies_ & _every bat flies_ \\ _a mammal flies_ & _every bat is furry_ \\ _an animal flies_ & _every bat grows_ \\ _a mammal is furry_ & _every mammal is furry_ \\ _an animal is furry_ & _every mammal grows_ \\ _an animal grows_ & _every animal grows_ \\ \hline \hline \end{tabular} \end{table} Table 1: Corpora generated from the hierarchy in Fig. 2. Existential and universal quantifications result in two corpora that follow DIH and rDIH respectively. Figure 2: A taxonomic hierarchy of nouns. Next to each noun is the set of contexts that are applicable to the extension of it and those of its descendants (e.g., all dogs are furry, but not all animals are). ### Our Hypothesis: FDS Learns Hypernymy under DIH We hypothesize that the way that FDS models are trained allows hypernymy to be learnt from a corpus that follows DIH. Below is the intuition behind our hypothesis. FDS models are trained following the variational autoencoding method described in §2.3.
Essentially, the approximate posterior distributions of pixies are first inferred from the observed graph. Then, the semantic functions of the observed predicates are optimized to be true of the inferred pixie distributions. This is analogous to the following process under a model-theoretic approach: the entities described by a sentence are first identified, and then the truth conditions of predicates over the entities are updated as asserted by the sentence. Second, the contexts of nouns are also contexts of their hypernyms in DIH. The local predicate-argument information of nouns, i.e. contexts, is thus repeated for their hypernyms for inference during training. Consequently, the semantic functions of hypernyms are trained to return values higher than those of their hyponyms over the pixie distributions inferred from the same contexts, aligning with (9). ### Universal Quantification in FDS for rDIH FDS assumes that each observed predicate refers to a single point rather than a region in the pixie space. This corresponds to the interpretation that all nouns are uniquely existentially quantified (\(\exists\)!), so only a corpus that follows DIH can be handled by FDS. To address this limitation, we propose a method to enable FDS to be trained on simple sentences with universal quantification. Concretely, we want to optimize semantic functions with respect to not a point but a region in the pixie space. We add the following **universal quantification objective** to the original objective proposed by Lo et al. (2023): \[\mathcal{L}_{\forall}=\sum_{r_{j}\in\overset{\text{ARG}[a]}{\longleftarrow}r_{i}\text{ in }G}s_{a}(r_{i},r_{j})+\mathcal{U}_{i,j,a} \tag{12}\] where \(r_{j}\) is a predicate whose referent is universally quantified, and \[s_{a}(r_{i},r_{j})=b^{(r_{i},a)}-b^{(r_{j},0)}-\left\|v^{(r_{i},a)}-v^{(r_{j},0)}\right\|_{p} \tag{13}\] \[\mathcal{U}_{i,j,a}=\sum_{r^{\prime}}\min\left(0,-s_{0}(r_{i},r^{\prime})\right)+\sum_{(r^{\prime\prime},a^{\prime\prime})}\min\left(0,-s_{a^{\prime\prime}}(r^{\prime\prime},r_{j})\right) \tag{14}\] Note that (13) is modified based on (10), previously defined for classifying hypernymy. To explain (12), let's take the sentence _every dog barks_ as an example. The first term inside the summation in (12) enforces that the extension of \(r_{j}\) is a subset of that of the prototypical argument \(a\) of \(r_{i}\), i.e., the set of dogs should be contained in the set of agents that bark. The second term, described in (14), incorporates negative samples. The negative samples \(r^{\prime}\) for \(r_{j}\) are generated by randomly sampling \(K\) nouns, and \((r^{\prime\prime},a^{\prime\prime})\) for \(r_{i}\) by randomly sampling \(K\) verbs or adjectives, each with an argument role. Then, (14) requires that it is false to universally quantify the referents of the noun \(r^{\prime}\) in \(r^{\prime}\overset{\text{ARG}[a]}{\longleftarrow}r_{i}\) and \(r_{j}\) in \(r_{j}\overset{\text{ARG}[a^{\prime\prime}]}{\longleftarrow}r^{\prime\prime}\). For this example, it means that the following two sentences are both considered false: _every dog is owned_ and _every cat barks_, where \(r^{\prime}=\text{cat}\), \(r^{\prime\prime}=\text{own}\) and \(a^{\prime\prime}=2\). ## 5 Experiments on Synthetic Data Sets Testing our hypothesis and the effectiveness of the new objective for universal quantifications requires corpora that strictly follow DIH or rDIH, which is impractical for real corpora.
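To make concrete what a strictly DIH- or rDIH-compliant corpus looks like, the following sketch generates existentially and universally quantified sentences from a toy hierarchy in the spirit of Fig. 2; the nouns, contexts and sentence templates are illustrative assumptions rather than the exact data used in the experiments described next.

```python
# A sketch of turning a toy hierarchy into DIH- or rDIH-style sentences.
hierarchy = {"dog": "mammal", "bat": "mammal", "mammal": "animal"}   # child -> hypernym
contexts = {"dog": ["barks"], "bat": ["flies"],
            "mammal": ["is furry"], "animal": ["grows"]}             # noun -> contexts

def ancestors(noun):
    out = []
    while noun in hierarchy:
        noun = hierarchy[noun]
        out.append(noun)
    return out

def descendants(noun):
    return [n for n in contexts if noun in ancestors(n)]

def article(noun):
    return "an" if noun[0] in "aeiou" else "a"

def corpus(hypothesis):
    sentences = []
    for noun, ctxs in contexts.items():
        for ctx in ctxs:
            if hypothesis == "DIH":   # a context also appears with every hypernym
                sentences += [f"{article(n)} {n} {ctx}" for n in [noun] + ancestors(noun)]
            else:                     # rDIH: a universal statement also holds for every hyponym
                sentences += [f"every {n} {ctx}" for n in [noun] + descendants(noun)]
    return sentences

print(corpus("DIH"))
print(corpus("rDIH"))
```

Under the DIH setting a context propagates upwards to the noun's hypernyms, while under rDIH a universally quantified statement propagates downwards to its hyponyms.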
Therefore, we create a collection of synthetic data sets and perform experiments on them. ### Synthetic Data Sets under (r)DIH Each of the synthetic data sets consists of a taxonomic hierarchy of nouns and a corpus, created using the following procedure: 1. **Create a taxonomic hierarchy.** Define a set of nouns, the hypernymy relations between them, and, for each noun, the contexts applicable to its extension and those of its hyponyms (as in Fig. 2). 2. **Choose a hypothesis.** DIH or rDIH. 3. **Create a corpus.** Create sentences in the form '_[quantifier] [noun] [context]_' following the chosen hypothesis and the defined hierarchy (as in Table 1). #### 5.1.1 Topology of Hierarchy Different topologies of hierarchy lead to different distributional usage of words, and thus possibly to different representations learnt for hypernymy. For example, a noun can have multiple hypernyms (e.g., _dog_ is the hyponym of both _pet_ and _mammal_), or share overlapping contexts with another noun far away in the hierarchy (e.g., both _bat_ and _airplane_\(\xleftarrow{\text{ARG1}}\)_fly_). To test the robustness of FDS models for learning hypernymy, we experiment with a range of topologies. Fig. 3 exemplifies the five classes of topologies used. We expect directed acyclic graphs (\(H_{\text{DAG}}\) and \(H^{\prime}_{\text{DAG}}\)) to be harder topologies than trees (\(H_{\text{tree}}\) and \(H^{\prime}_{\text{tree}}\)), and topologies with overlapping contexts (\(H^{\prime}_{\text{tree}}\) and \(H^{\prime}_{\text{DAG}}\)) to be harder than those without (\(H_{\text{tree}}\) and \(H_{\text{DAG}}\)). In addition, we test \(H_{\text{chains}}\) with pixie dimensionality \(d=2\). A 2-D pixie space provides adequate expressive power for embedding 4 chains of hypernymy. It also allows lossless visualization of the semantic functions. To test hypernymy learning at scale on an actual hierarchy, we also test our models on a subgraph of WordNet's hierarchy (\(H_{\text{WN}}\)). Every node in the hierarchy consists of a noun and a semantic context. The topology of the \(H_{\text{chains}}\) used in the experiment is exactly as depicted in Fig. 4. \(H_{\text{WN}}\) is created out of the synset _animal.n.01_ in WordNet, which is the root, and its hyponymic synsets. To keep the size of the hierarchy reasonable for pair-wise hypernymy scoring, we keep only the synsets whose shortest distance to the root is less than 6. This results in 982 nodes. For the remaining hierarchies, each of them consists of 153 nodes with a height of 5. For \(H_{\text{tree}}\), the first level is a root node, and a node at the \(h^{\text{th}}\) level has \((h+1)\) direct children. \(H^{\prime}_{\text{tree}}\) is created from \(H_{\text{tree}}\) by choosing 5 pairs of nodes and making each of them share a context set. \(H_{\text{DAG}}\) and \(H^{\prime}_{\text{DAG}}\) are created from \(H_{\text{tree}}\) and \(H^{\prime}_{\text{tree}}\) respectively by choosing 5 pairs of nodes, where the nodes of each pair are at different levels, and making the higher-level node the direct parent of the lower-level one. ### FDS Models Training We experiment with two variations of FDS training: Fds is trained using the original objective in (5) whereas Fds\({}_{\forall}\) incorporates the universal quantification objective following §4.3. Each model is trained on every synthetic corpus. We empirically find that setting \(p=1\) in (13) and \(p=2\) in (10) almost always gives the best performance, and we only report the results in this setup.
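For concreteness, the hypernymy score of Eq. (10) used for evaluation can be computed directly from the learnt unary parameters, with the \(p\)-norm exposed as an argument as discussed above. The parameter values in the sketch below are illustrative only.

```python
# A minimal sketch of the hypernymy score in Eq. (10).
import numpy as np

def hypernymy_score(v_hypo, b_hypo, v_hyper, b_hyper, p=2):
    """s(r_h, r_H) > 0 is taken to mean that r_H is a hypernym of r_h."""
    return (b_hyper - b_hypo) - np.linalg.norm(v_hyper - v_hypo, ord=p)

# Illustrative parameters: a broader predicate with a similar direction but a
# larger bias scores positively against a narrower one.
v_dog, b_dog = np.array([1.0, 0.2]), -1.0
v_mammal, b_mammal = np.array([0.9, 0.3]), -0.3
print(hypernymy_score(v_dog, b_dog, v_mammal, b_mammal, p=2))   # > 0
```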
Other than the newly introduced training objective, model training largely follows that of Lo et al. (2023) (details described in Appendix B). ### Evaluation on Hypernymy Detection We test whether a model trained on a corpus learns to identify the hypernymy relations defined in the hierarchy that generated the corpus. Concretely, a model is asked to give a score of hypernymy between every pair of nouns using (10). Performance is then measured by the area under the receiver operating characteristic curve (AUC). Unlike average precision, AUC values do not reflect changes in the distribution of classes, which is favourable since we are comparing models' performances across varying class distributions generated from different topologies. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Model** & \(H_{\text{chains}}\) & \(H_{\text{tree}}\) & \(H^{\prime}_{\text{tree}}\) & \(H_{\text{DAG}}\) & \(H^{\prime}_{\text{DAG}}\) & \(H_{\text{WN}}\) \\ \hline Fds &.992 &.982 &.984 &.986 &.986 &.963 \\ Fds\({}_{\forall}\) &.800 &.211 &.219 &.219 &.228 &.100 \\ \hline \hline \end{tabular} \end{table} Table 2: AUC of models trained on synthetic DIH corpora generated from different topologies of hierarchy. Figure 3: Examples of the topologies of the synthetic taxonomic hierarchies. Table 2 and Table 3 show the results of FDS models on different topologies when trained on a DIH and rDIH corpus respectively. Fds is shown to work on the DIH corpus, and Fds\({}_{\forall}\) on the rDIH corpus. Applying each model to the other type of corpus yields substantially worse performance. In particular, the fact that Fds\({}_{\forall}\) attains AUCs of about 0.2 on the DIH corpora means that its hypernymy predictions are largely reversed, which in turn reflects the effectiveness of the universal quantification objective: Fds\({}_{\forall}\) interprets the implication of context-set subsumption on hypernymy based on rDIH. Hierarchies with overlapping contexts and multiple direct hypernyms are not harder cases than those without. Both models are also shown to be robust when scaling up to the larger WordNet hierarchy, with only a slight drop in performance. Fig. 4 visualizes the semantic functions trained on the corpora of \(H_{\text{chains}}\). Training Fds on the DIH corpus and Fds\({}_{\forall}\) on the rDIH corpus results in four nicely divided pixie subspaces, each for one of the four hypernymy chains, as shown in the plots in the left column. In contrast, applying the other models sometimes gives badly learnt semantic functions, e.g., \(t^{(r_{12},0)}\) points in the opposite direction to \(t^{(r_{10},0)}\) and \(t^{(r_{11},0)}\) for Fds\({}_{\forall}\) on the DIH corpus. ### Evaluation on Distributional Generalization Apart from testing models directly on the mentioned topologies, we also test whether distributional generalization power exists in FDS. Intuitively, even when the information about _mammal_ being a hypernym of _dog_ is missing from the corpus (and the (r)DIH consequently breaks down), knowing that both _dogs_ and _foxes_ share the same contexts in the corpus (e.g., \(\{\xleftarrow{\texttt{ARG1}}\;\;\textit{bark}\}\)) may still allow the missing hypernymy relation to be generalized. Although the class distributions vary for each \(\tilde{r}\) and under each hypothesis, we can still directly compare AUCs across settings due to the insensitivity of AUC to class distribution.
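A minimal sketch of the evaluation protocol used here and in the previous subsection is given below: every ordered noun pair is scored with Eq. (10) and the scores are compared against the gold hypernymy pairs with AUC. The inputs `params` (noun to learnt parameters) and `gold` (set of hyponym–hypernym pairs) are assumed to come from a trained model and from the generating hierarchy; the toy values are illustrative.

```python
# A sketch of pairwise hypernymy scoring and AUC evaluation.
from itertools import permutations
import numpy as np
from sklearn.metrics import roc_auc_score

def score(v_h, b_h, v_H, b_H, p=2):                  # Eq. (10), as in the previous sketch
    return (b_H - b_h) - np.linalg.norm(v_H - v_h, ord=p)

def evaluate(params, gold, p=2):
    labels, scores = [], []
    for r_h, r_H in permutations(params, 2):         # every ordered pair of nouns
        labels.append(int((r_h, r_H) in gold))
        scores.append(score(*params[r_h], *params[r_H], p=p))
    return roc_auc_score(labels, scores)

params = {"dog": (np.array([1.0, 0.2]), -1.0), "mammal": (np.array([0.9, 0.3]), -0.3)}
print(evaluate(params, gold={("dog", "mammal")}))
```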
Table 5 shows that both upward and downward distributional generalizations exist when the corpus follows either DIH or rDIH, and to a larger extent on the rDIH corpus. ### Summary The experiments confirm that: (1) hypernymy is learnt by FDS models under DIH, (2) the new training objective for universal quantifications enables FDS models to also learn hypernymy under rDIH, and (3) our approach of hypernymy modelling allows generalization over missing information in a corpus. ## 6 Experiments on Real Data Sets Seeing how FDS performs on synthetic data sets does not immediately tell us more about hypernymy learning on real data sets. Therefore, we perform further experiments to test if FDS models learn hypernymy on open classes of sentences. ### FDS Models Training Training Data.FDS models are trained on Wikiwoods1Flickinger et al. (2010); Solberg (2012), which provide linguistic analyses of 55m sentences (900m tokens) in English Wikipedia. Each of the sentences was parsed by the PET parser Callmeier (2001); Toutanova et al. (2005) using the 1212 version of the ERG, and the parses are ranked by a ranking model trained on WeScience Ytrestol et al. (2009). We extract the DMRS graphs from Wikiwoods using Pydelphin2Copestake et al. (2016). After preprocessing, there are 36m sentences with 254m tokens. Footnote 1: [http://ltr.uio.no/wikiwoods/1212/](http://ltr.uio.no/wikiwoods/1212/) Footnote 2: [https://github.com/delph-in/pydelphin](https://github.com/delph-in/pydelphin) Model Configurations.Although quantifiers are annotated in Wikiwoods, it is not feasible to determine which of the two training objectives to use specifically for each instance. This is because quantifications interact heavily with other semantic components in a complex sentence. For example, processing '_every dog that is excited barks_' requires universal quantification over the intersection of the set of dogs and the set of entities that are excited, but set intersection is not modelled by FDS. In our experiments, we choose one of the models from Fds or Fds\({}_{\forall}\) described in SS5.2, and apply the same objective to every training instance. We also test an additional model Fds\({}_{\forall 2}\) where the universal quantification objective is scaled by 0.5. We only train each of the models for 1 epoch. ### Evaluation Method We only consider hypernymy over nouns but not verbs or adjectives since FDS is trained on DMRS graphs, where only nominals are quantified and accepted by verbs and adjectives as arguments. We test the trained models on four hypernymy data sets for nouns, namely Kotlerman2010 Kotlerman et al. (2010), LEDS Baroni et al. (2012), WBLESS Weeds et al. (2014), and EVALution Santus et al. (2015). Each of them consists of a set of word pairs, each with a label indicating whether the second word is a hypernym of the first word. We removed the out-of-vocabulary instances from all data sets, and non-nouns from EVALution during the evaluation. Table 6 reports the statistics of the test sets data. We report the AUC as in SS5. In addition, we use WBLESS for further performance analysis, which provides categorizations of the negative instances. Each of the negative instances is either a hyponymy pair, co-hyponymy pair, meronymy pair, or random pair. ### Baselines Following Roller et al. (2018), we implemented four distributional hypernymy detection models and trained them on Wikiwoods as baselines. First, we have WeedsPrec Weeds et al. 
(2004) and invCL Lenci and Benotto (2012), which measure context inclusion of word pairs. They both use \begin{table} \begin{tabular}{l c c} \hline \hline **Test Set** & **\# Positive** & **\# Negative** \\ \hline Kotlerman2010 & 880 [831] & 2058 [1919] \\ LEDS & 1385 [1344] & 1385 [1342] \\ WBLESS & 834 [830] & 834 [813] \\ EVALution & 1592 [1352] & 4561 [3241] \\ \hline \hline \end{tabular} \end{table} Table 6: Class distributions of test sets. In brackets are the numbers after the removal of OOV instances and non-nouns. \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **Hypothesis** & **Upward** & **Downward** \\ \hline Fds & DIH &.906 &.690 \\ Fds\({}_{\forall}\) & rDIH &.971 &.993 \\ \hline \hline \end{tabular} \end{table} Table 5: Mean AUC for distributional generalizations. a distributional space that is constructed by first counting co-occurrences of adjacent predicates in the preprocessed DMRS graphs; the resulting matrix is then transformed using positive pointwise mutual information. Each row vector \(v^{(r_{i})}\) represents a predicate \(r_{i}\). WeedsPrec and invCL are computed as: \[\text{WeedsPrec}(r_{1},r_{2})=\frac{\sum_{i}v_{i}^{(r_{1})}\mathds{1}_{v_{i}^{(r_{2})}>0}}{\sum_{i}v_{i}^{(r_{1})}}\] \[\text{invCL}(r_{1},r_{2})=\sqrt{\text{CL}(r_{1},r_{2})(1-\text{CL}(r_{2},r_{1}))}\] \[\text{where CL}(r_{1},r_{2})=\frac{\sum_{i}\min\left(v_{i}^{(r_{1})},v_{i}^{(r_{2})}\right)}{\sum_{i}v_{i}^{(r_{1})}}\] Apart from the two DIH measures, we also consider SLQS (Santus et al., 2014), a word generality measure that rests on another hypothesis, namely that general words mostly appear in uninformative contexts: \[\text{SLQS}(r_{1},r_{2})=1-\frac{E_{r_{1}}}{E_{r_{2}}}\] \[\text{where }E_{r_{i}}=\text{median}_{j=1}^{N}[H(c_{j})]\] For each word \(r_{i}\), the median of the entropies of its \(N\) most associated contexts (as measured by local mutual information) is computed, where \(H(c_{j})\) is the Shannon entropy of the associated context \(c_{j}\). Then, SLQS compares the generality of two words by the ratio of their respective medians. We also report SLQS-cos, which multiplies the SLQS measure by the cosine similarity of \(v^{(r_{1})}\) and \(v^{(r_{2})}\), since the SLQS measure only considers generality but not similarity. \(N\) is chosen to be 50 following Santus et al. (2014). We also include the cosine similarity (Cosine) of the row vectors as an extra baseline. ### Results Table 7 shows the results on the four test data sets. The DIH baselines are competitive and outperform all other models on nearly all of the test sets. Fds\({}_{\forall}\) and Fds\({}_{\forall/2}\) both outperform Fds considerably across the test sets. This shows that including the proposed universal quantification objective in training is useful for extracting hypernymy information from a corpus. Compared to the 2.7-billion-token corpus used by Santus et al. (2014) to train SLQS, we suggest that the Wikiwoods corpus is too small for SLQS to obtain meaningful contexts for the median entropy: setting \(N\) to be small results in frequent contexts that are not representative of the nouns, whilst setting it large would require a disproportionately large number of contexts for the infrequent words. Table 8 shows the results on the WBLESS sub-categories. It is shown that Fds\({}_{\forall}\) is stronger than the DIH baselines in distinguishing between hyponymy and hypernymy pairs, and between co-hyponymy and hypernymy pairs, while weaker for meronymy or random pairs.
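For concreteness, the DIH baseline measures defined above can be computed directly from the PPMI row vectors; the following sketch is an illustration rather than the authors' implementation, assuming dense numpy vectors for the two predicates.

```python
# Illustrative implementations of the DIH baseline measures, operating on
# non-negative PPMI row vectors v1 and v2 for two predicates r1 and r2.
import numpy as np

def weeds_prec(v1, v2):
    # Weighted proportion of r1's contexts that also occur with r2.
    return np.sum(v1 * (v2 > 0)) / np.sum(v1)

def cl(v1, v2):
    # Degree to which r1's contexts are included in r2's contexts.
    return np.sum(np.minimum(v1, v2)) / np.sum(v1)

def inv_cl(v1, v2):
    # Inclusion of r1 in r2 combined with non-inclusion of r2 in r1.
    return np.sqrt(cl(v1, v2) * (1.0 - cl(v2, v1)))
```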
Fds\({}_{\forall}\) and Fds\({}_{\forall/2}\) outperform Fds across nearly all sub-categories, with much higher distinguishing power for co-hyponymy and hypernymy. This implies that the universal quantification objective makes FDS more sensitive to the relative generality than to the similarity of word pairs. ## 7 Conclusion We discuss how Functional Distributional Semantics (FDS) can provide a truth-conditional representation for hypernymy. We propose a new FDS training objective for handling simple universal quantifications. On synthetic corpora, we confirm that FDS learns hypernymy under the Distributional Inclusion Hypothesis (DIH), and even under the reverse of the DIH if the new objective is applied. On a real corpus, the new objective is shown to improve FDS performance on hypernymy detection. We hope that this work provides insights into hypernymy learning from corpora by FDS models. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & Kotlerman2010 & LEDS & WBLESS & EVALution \\ \hline Cosine &.70 &.78 &.62 &.53 \\ WeedsPrec &.67 &.90 &.71 &.65 \\ invCL &.68 &.90 &.71 &.62 \\ SLQS &.49 &.48 &.57 &.53 \\ SLQS-cos &.49 &.48 &.56 &.53 \\ Fds &.47 &.65 &.51 &.46 \\ Fds\({}_{\forall/2}\) &.56 &.76 &.66 &.60 \\ Fds\({}_{\forall}\) &.56 &.73 &.66 &.55 \\ \hline \hline \end{tabular} \end{table} Table 7: AUC on the test sets. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & Hyponymy & Co-hyponymy & Meronymy & Random \\ \hline Cosine &.51 &.37 &.68 &.92 \\ WeedsPrec &.75 &.62 &.63 &.84 \\ invCL &.75 &.57 &.65 &.87 \\ SLQS &.61 &.55 &.59 &.52 \\ SLQS-cos &.58 &.53 &.57 &.55 \\ Fds &.60 &.29 &.56 &.58 \\ Fds\({}_{\forall/2}\) &.78 &.59 &.56 &.69 \\ Fds\({}_{\forall}\) &.79 &.63 &.52 &.70 \\ \hline \hline \end{tabular} \end{table} Table 8: AUC on the sub-categories of WBLESS. ### Limitations Hypernymy is established between word senses. However, the proposed representation of hypernymy in FDS compares the semantic functions of DMRS predicate pairs and considers each DMRS predicate to have a single sense. Therefore, such a representation can fall short for polysemous words. ## Ethics Statement We anticipate no ethical issues directly stemming from our experiments.
2309.04537
The complexity of the greedoid Tutte polynomial
We consider the Tutte polynomial of three classes of greedoids: those arising from rooted graphs, rooted digraphs and binary matrices. We establish the computational complexity of evaluating each of these polynomials at each fixed rational point (x,y). In each case we show that evaluation is #P-hard except for a small number of exceptional cases for which there is a polynomial time algorithm. In the binary case, establishing #P-hardness along one line relies on Vertigan's unpublished result on the complexity of counting bases of a matroid. For completeness, we include an appendix providing a proof of this result.
Christopher Knapp, Steven Noble
2023-09-08T18:01:03Z
http://arxiv.org/abs/2309.04537v1
# The Complexity of the Greedoid Tutte Polynomial ###### Abstract We consider the Tutte polynomial of three classes of greedoids: those arising from rooted graphs, rooted digraphs and binary matrices. We establish the computational complexity of evaluating each of these polynomials at each fixed rational point \((x,y)\). In each case we show that evaluation is #P-hard except for a small number of exceptional cases when there is a polynomial time algorithm. In the binary case, establishing #P-hardness along one line relies on Vertigan's unpublished result on the complexity of counting bases of a matroid. For completeness, we include an appendix providing a proof of this result. **Mathematics Subject Classifications:** 05C31, 68Q17, 05B35 ## 1 Introduction Tutte's eponymous polynomial is perhaps the most widely studied two-variable graph and matroid polynomial due to its many specializations, their vast breadth and the richness of the underlying theory. Discussion of the Tutte polynomial and closely related polynomials fills an entire handbook [13]. Tutte first introduced the Tutte polynomial of a graph, as the _dichromate_ in [37]. It is closely related to Whitney's rank generating function [43] which Tutte extended from graphs to matroids in his PhD thesis [38]. Crapo [10] later extended the definition of the Tutte polynomial to matroids. See Farr [14] for more on the early history of the Tutte polynomial. The simplest definition of the Tutte polynomial \(T(G;x,y)\) of a graph \(G\) is probably in terms of the rank function \(r\). Given a graph \(G\) and set \(A\) of its edges, we have \(r(A)=|V(G)|-k(G|A)\), where \(k(G|A)\) is the number of connected components of the graph obtained from \(A\) by deleting the edges in \(E(G)-A\) (and keeping all the vertices). **Definition 1**.: For a graph \(G\) with edge set \(E\), we have \[T(G;x,y)=\sum_{A\subseteq E}(x-1)^{r(E)-r(A)}(y-1)^{|A|-r(A)}.\] By making appropriate substitutions for \(x\) and \(y\), a huge number of graph invariants with connections to diverse areas of mathematics may be obtained. We summarise just a few of these evaluations that are particularly relevant later in this paper. A _spanning subgraph_ of a graph \(G\) is a subgraph including all the vertices of \(G\). * \(T(G;1,1)\) is the number of maximal spanning forests of \(G\). (If \(G\) is connected, then this is the number of spanning trees.) * \(T(G;2,1)\) is the number of spanning forests of \(G\). * \(T(G;1,2)\) is the number of spanning subgraphs of \(G\) having the same number of components as \(G\). * \(T(G;1,0)\) is the number of acyclic orientations of \(G\) with one predefined source vertex per component of \(G\)[21]. Other evaluations (up to a simple pre-factor) include the reliability polynomial, chromatic polynomial and partition function of the \(q\)-state Potts model. For a full list of evaluations see [8, 12, 13]. Given a graph polynomial of this type, a natural question is to determine its complexity, that is to classify the points \((a,b)\) according to whether there is a polynomial time algorithm to evaluate the polynomial at \((a,b)\) or whether the evaluation is computationally intractable. Because of the inherent difficulties of measuring the complexity of algorithms involving arbitrary real numbers, we restrict \(a\) and \(b\) to being rational. This question was completely resolved in a groundbreaking paper by Jaeger, Vertigan and Welsh [23]. A stronger result was obtained by Vertigan and Welsh [41], who proved the theorem below. 
For \(\alpha\) in \(\mathbb{Q}-\{0\}\), let \(H_{\alpha}=\{(x,y)\in\mathbb{Q}^{2}:(x-1)(y-1)=\alpha\}\), and let \(H_{0}^{x}=\{(1,y):y\in\mathbb{Q}\}\) and \(H_{0}^{y}=\{(x,1):x\in\mathbb{Q}\}\). This family of hyperbolae seems to play a special role in the theory of the Tutte polynomial, both in terms of its evaluations and its complexity. **Theorem 2** (Vertigan, Welsh).: _Evaluating the Tutte polynomial of a bipartite planar graph at any fixed point \((a,b)\) in the rational plane is \(\#\)P-hard apart from when \((a,b)\) lies on \(H_{1}\) or \(H_{2}\), or when \((a,b)\) equals \((-1,-1)\) or \((1,1)\), when there exists a polynomial-time algorithm._ Roughly speaking, the proof of the hardness part of this result (at least without the planar bipartite restriction) proceeds as follows. By exploiting a result of Brylawski [7], one first shows that for most points \((a,b)\), the existence of a polynomial time algorithm to evaluate \(T(G;a,b)\) for every graph \(G\) would imply the existence of a polynomial time algorithm to evaluate \(T(G;x,y)\) at every point \((x,y)\) in \(H_{\alpha}\), where \(\alpha=(a-1)(b-1)\). Given a graph \(G\), let \(G^{k}\) and \(G_{k}\) denote, respectively, the graph obtained by replacing every edge of \(G\) by \(k\) parallel edges and the graph obtained by replacing every non-loop of \(G\) by a path comprising \(k\) edges and every loop by a circuit comprising \(k\) edges. The former is known as the _\(k\)-thickening_ of \(G\) and the latter as the _\(k\)-stretch_ of \(G\). Brylawski gave expressions for the Tutte polynomials of \(G^{k}\) and \(G_{k}\) in terms of the Tutte polynomial of \(G\). By varying \(k\), one may obtain expressions for \(T(G;a_{k},b_{k})\) at a sequence \(\{(a_{k},b_{k})\}\) of points on \(H_{\alpha}\), and then solve for the coefficients of the one-variable polynomial obtained by restricting the domain of \(T\) to \(H_{\alpha}\). There remain several special cases because the sequence \(\{(a_{k},b_{k})\}\) sometimes contains only a small number of distinct points. The second step proceeds by determining a #P-hard point on each curve \(H_{\alpha}\). Many of these come from evaluations of the chromatic polynomial. The Tutte polynomial is essentially a generating function for the number of subsets of the edges of a graph according to their rank and size. Following the work of Jaeger, Vertigan and Welsh, many authors have established corresponding results for a variety of graph polynomials defined in a similar way but using different notions of rank. These include the cover polynomial [3], the Bollobas-Riordan polynomial [4], the interlace polynomial [5], the rank generating function of a graphic 2-polymatroid [32] and the Tutte polynomial of a bicircular matroid [16]. In each case, the proof techniques have some similarities: the bulk of the work is done using a graph operation analogous to the thickening, but there are considerable technical difficulties required to deal with the special cases and to complete the proof. These results provide evidence for Makowsky's Difficult Point Conjecture which states that for an \(n\)-variable graph polynomial \(P\) that may be defined in monadic second order logic, there is a set \(S\) of points with the following properties: 1. For every \({\bf x}\in S\), there is a polynomial time algorithm to evaluate \(P({\bf x})\); 2. For every \({\bf x}\notin S\), it is #P-hard to evaluate \(P({\bf x})\); 3. 
The set \(S\) is the finite union of algebraic sets in \(\mathbb{C}^{n}\) each having dimension strictly less than \(n\). For full details see [30]. In this paper we prove results analogous to Theorem 2 for two graph polynomials, the Tutte polynomials of a rooted graph and a rooted digraph, and a polynomial of binary matrices, the Tutte polynomial of a binary greedoid. Each of these polynomials is a special case of the Tutte polynomial of a greedoid introduced by Gordon and McMahon [18] and the proofs have considerable commonality. (All the necessary definitions are provided in the next sections.) The graph polynomials are the analogue of the Tutte polynomial for rooted graphs and rooted digraphs, and our results provide further evidence for Makowsky's Difficult Point Conjecture. An overview of the paper is as follows. In Section 2 we provide necessary background on rooted graphs, rooted digraphs, greedoids and computational complexity. In the following section we describe the Tutte polynomial of a greedoid and list some of its evaluations for each of the three classes of greedoid that we work with. Within our hardness proofs we require an analogue of the thickening operation and various other constructions which can be defined for arbitrary greedoids, and may be of independent interest. We describe these in Section 4 and provide analogues of Brylawski's results [7] expressing the Tutte polynomial for these constructions in terms of the Tutte polynomials of their constituent greedoids. In Section 5, we prove the following result completely determining the complexity of evaluating the Tutte polynomial of a rooted graph at a rational point. **Theorem 3**.: _Evaluating the Tutte polynomial of a connected, rooted, planar, bipartite graph at any fixed point \((a,b)\) in the rational \(xy\)-plane is #P-hard apart from when \((a,b)\) equals \((1,1)\) or when \((a,b)\) lies on \(H_{1}\)._ _There are polynomial time algorithms to evaluate the Tutte polynomial of a rooted graph at \((1,1)\) and at any point lying on \(H_{1}\)._ In Section 6, we prove the equivalent result for the Tutte polynomial of a rooted digraph. **Theorem 4**.: _Evaluating the Tutte polynomial of a root-connected, rooted digraph at any fixed point \((a,b)\) in the rational \(xy\)-plane is #P-hard apart from when \((a,b)\) equals \((1,1)\), when \((a,b)\) lies on \(H_{1}\), or when \(b=0\)._ _There are polynomial time algorithms to evaluate the Tutte polynomial of a rooted digraph at \((1,1)\), at any point lying on \(H_{1}\) and at any point \((a,0)\)._ We then determine the complexity of evaluating the Tutte polynomial of a binary greedoid. **Theorem 5**.: _Evaluating the Tutte polynomial of a binary greedoid at any fixed point \((a,b)\) in the rational \(xy\)-plane is #P-hard apart from when \((a,b)\) lies on \(H_{1}\)._ _There is a polynomial time algorithm to evaluate the Tutte polynomial of a binary greedoid at any point lying on \(H_{1}\)._ One special case of this theorem depends on a special case of an unpublished result of Vertigan, who proved that the problem of counting bases of a binary matroid is #P-complete. For completeness, in Appendix A, we provide a proof of this result for all fields. ## 2 Preliminaries ### Rooted graphs and digraphs All our graphs are allowed to have loops and multiple edges. A _rooted graph_ is a graph with a distinguished vertex called the _root_. Most of the graphs we work with will be rooted but occasionally we will work with a graph without a root. 
For complete clarity, we will sometimes refer to such graphs as _unrooted graphs_. We denote a rooted graph \(G\) with vertex set \(V(G)\), edge set \(E(G)\) and root \(r(G)\) by a triple \((V(G),E(G),r(G))\). We omit the arguments when there is no fear of ambiguity. Many of the standard definitions for graphs can be applied to rooted graphs in the natural way. Two rooted graphs \((V,E,r)\) and \((V^{\prime},E^{\prime},r^{\prime})\) are _isomorphic_ if the unrooted graphs \((V,E)\) and \((V^{\prime},E^{\prime})\) are isomorphic via an isomorphism mapping \(r\) to \(r^{\prime}\). For a subset \(A\) of \(E\), the _rooted spanning subgraph_\(G|A\) is formed from \(G\) by deleting all the edges in \(E-A\) (and keeping all the vertices). The _root component_ of \(G\) is the connected component of \(G\) containing the root. A set \(A\) of edges of \(G\) is _feasible_ if the root component of \(G|A\) is a tree and contains every edge of \(A\). We define the _rank_\(\rho_{G}(A)\) of \(A\) to be \[\rho_{G}(A)=\max\{|A^{\prime}|:A^{\prime}\subseteq A,A^{\prime}\text{ is feasible}\}.\] We omit the subscript when the context is clear. We let \(\rho(G)=\rho(E)\). Observe that a set \(A\) of edges is feasible if and only if \(\rho(A)=|A|\). A feasible set is a _basis_ if \(\rho(A)=\rho(G)\). So \(A\) is a basis of \(G\) if and only if it is the edge set of a spanning tree of the root component of \(G\). A _rooted digraph_ is a digraph with a distinguished vertex called the _root_. We denote a rooted digraph \(D\) with vertex set \(V(D)\), edge set \(E(D)\) and root \(r(D)\) by a triple \((V(D),E(D),r(D))\). Once again we omit the arguments when there is no chance of ambiguity. Two rooted digraphs \((V,E,r)\) and \((V^{\prime},E^{\prime},r^{\prime})\) are _isomorphic_ if the unrooted digraphs \((V,E)\) and \((V^{\prime},E^{\prime})\) are isomorphic via an isomorphism mapping \(r\) to \(r^{\prime}\). The _underlying rooted graph_ of a rooted digraph is the rooted graph obtained by removing the directions of all the edges. For a subset \(A\) of \(E\), the _rooted spanning subdigraph_\(D|A\) is formed from \(D\) by deleting all the edges in \(E-A\). The _root component_ of \(D\) is formed by deleting every vertex \(v\) to which there is no directed path from \(r\) in \(D\), together with its incident edges. The rooted digraph is _root-connected_ if there is a directed path from the root to every other vertex. The rooted digraph \(D\) is an _arborescence rooted at \(r\)_ if \(D\) is root-connected and its underlying rooted graph is a tree. A set \(A\) of edges of \(D\) is _feasible_ if the root component of \(D|A\) is an arborescence rooted at \(r\) and contains every edge of \(A\). The _rank_\(\rho_{D}(A)\) of \(A\) is defined by \[\rho_{D}(A)=\max\{|A^{\prime}|:A^{\prime}\subseteq A,A^{\prime}\text{ is feasible}\}.\] We can omit the subscript when the context is clear. We let \(\rho(D)=\rho(E)\). A set \(A\) of edges is feasible if and only if \(\rho(A)=|A|\). A feasible set is a _basis_ if \(\rho(A)=\rho(D)\). So \(A\) is a basis of \(D\) if and only if it is the edge set of an arborescence rooted at \(r\) that includes every vertex of the root component of \(D\). ### Greedoids Greedoids are generalizations of matroids, first introduced by Korte and Lovász in 1981 in [26].
The aim was to generalize the characterization of matroids as hereditary set systems on which the greedy algorithm is guaranteed to determine the optimal member of the set system, according to an arbitrary weighting. Most of the information about greedoids which we summarise below can be found in [2] or [29]. **Definition 6**.: A _greedoid_\(\Gamma\) is an ordered pair \((E,\mathcal{F})\) consisting of a finite set \(E\) and a non-empty collection \(\mathcal{F}\) of subsets of \(E\) satisfying the following axioms: (G1) \(\emptyset\in\mathcal{F}\); (G2) for all \(F\) and \(F^{\prime}\) in \(\mathcal{F}\) with \(|F^{\prime}|<|F|\) there exists some \(x\in F-F^{\prime}\) such that \(F^{\prime}\cup x\in\mathcal{F}\). The set \(E\) is the _ground set_ of \(\Gamma\) and the members of \(\mathcal{F}\) are the _feasible sets_ of \(\Gamma\). The axioms are the first and third of the usual axioms specifying a matroid in terms of its independent sets, so clearly every matroid is a greedoid, but a greedoid does not necessarily satisfy the hereditary property of the independent sets of a matroid, which requires that the collection of independent sets is closed under taking subsets. The _rank_\(\rho_{\Gamma}(A)\) of a subset \(A\) of \(E\) is given by \[\rho_{\Gamma}(A)=\max\{|A^{\prime}|:A^{\prime}\subseteq A,A^{\prime}\in \mathcal{F}\}\] and we let \(\rho(\Gamma)=\rho_{\Gamma}(E)\). We omit the subscript when the context is clear. Notice that a set \(A\) is feasible if and only if \(\rho(A)=|A|\). A feasible set is a _basis_ if \(\rho(A)=\rho(\Gamma)\). We denote the collection of bases of \(\Gamma\) by \(\mathcal{B}(\Gamma)\). Axiom (G2) implies that every basis has the same cardinality. Note that the rank function determines \(\Gamma\) but the collection of bases does not. For example, suppose that a greedoid has ground set \(\{1,2\}\) and unique basis \(\{1,2\}\). Then its collection of feasible sets could be either \(\{\emptyset,\{1\},\{1,2\}\}\), \(\{\emptyset,\{2\},\{1,2\}\}\) or \(\{\emptyset,\{1\},\{2\},\{1,2\}\}\). The rank function of a greedoid can be characterized in a similar way to the rank function of a matroid [27]. **Proposition 7**.: _The rank function \(\rho\) of a greedoid with ground set \(E\) takes integer values and satisfies each of the following._ (GR1) _For every subset_ \(A\) _of_ \(E\)_,_ \(0\leq\rho(A)\leq|A|\)_;_ (GR2) _for all subsets_ \(A\) _and_ \(B\) _of_ \(E\) _with_ \(A\subseteq B\)_,_ \(\rho(A)\leq\rho(B)\)_;_ (GR3) _for every subset_ \(A\) _of_ \(E\)_, and elements_ \(e\) _and_ \(f\)_, if_ \(\rho(A)=\rho(A\cup e)=\rho(A\cup f)\)_, then_ \(\rho(A)=\rho(A\cup e\cup f)\)_._ _Moreover if \(E\) is a finite set and \(\rho\) is a function from the subsets of \(E\) to the integers, then \(\rho\) is the rank function of a greedoid with ground set \(E\) if and only if \(\rho\) satisfies conditions (GR1)-(GR3) above._ The following lemma is easily proved using induction on \(|B|\) and will be useful later. **Lemma 8**.: _Let \((E,\rho)\) be a greedoid specified by its rank function and let \(A\) and \(B\) be subsets of \(E\) such that for all \(b\in B\), \(\rho(A\cup b)=\rho(A)\). Then \(\rho(A\cup B)=\rho(A)\)._ Two greedoids \(\Gamma_{1}=(E_{1},\mathcal{F}_{1})\) and \(\Gamma_{2}=(E_{2},\mathcal{F}_{2})\) are _isomorphic_, denoted by \(\Gamma_{1}\cong\Gamma_{2}\), if there exists a bijection \(f:E_{1}\to E_{2}\) that preserves the feasible sets. The following two examples of greedoids were introduced in [28]. Let \(G\) be a rooted graph.
Take \(\Gamma=(E,\mathcal{F})\) so that \(E=E(G)\) and a subset \(A\) of \(E\) is in \(\mathcal{F}\) if and only if the root component of \(G|A\) is a tree containing every edge of \(A\). Then \(\Gamma\) is a greedoid. Any greedoid which is isomorphic to a greedoid arising from a rooted graph in this way is called a _branching greedoid_. The branching greedoid of a rooted graph \(G\) is denoted by \(\Gamma(G)\). Similarly suppose we have a rooted digraph \(D\) and take \(\Gamma=(E,\mathcal{F})\) so that \(E=E(D)\) and a subset \(A\) of \(E\) is in \(\mathcal{F}\) if and only if the root component of \(D|A\) is an arborescence rooted at \(r\) and contains every edge of \(A\). Then \(\Gamma\) is a greedoid. Any greedoid which is isomorphic to a greedoid arising from a rooted digraph in this way is called a _directed branching greedoid_. The directed branching greedoid of a rooted digraph \(D\) is denoted by \(\Gamma(D)\). (There should be no ambiguity with the overload of notation for a branching greedoid and a directed branching greedoid.) Notice that for both rooted graphs and digraphs, the concepts of feasible set, basis and rank are compatible with their definitions for the associated branching greedoid or directed branching greedoid, in the sense that a set \(A\) of edges is feasible in a rooted graph \(G\) if and only if it is feasible in \(\Gamma(G)\), and similarly for the other concepts. We now define the class of _binary greedoids_. These are a special case of a much broader class, the _Gaussian elimination greedoids_, introduced by Goecke in [17], motivated by the Gaussian elimination algorithm. Let \(M\) be an \(m\times n\) binary matrix. It is useful to think of the rows and columns of \(M\) as being labelled by the elements of \([m]\) and \([n]\) respectively, where \([n]=\{1,\ldots,n\}\). If \(X\) is a subset of \([m]\) and \(Y\) is a subset of \([n]\), then \(M_{X,Y}\) denotes the matrix obtained from \(M\) by deleting all the rows except those with labels in \(X\) and all the columns except those with labels in \(Y\). Take \(\Gamma=([n],\mathcal{F})\), so that \[\mathcal{F}=\{A\subseteq[n]:\text{ the submatrix }M_{[|A|],A}\text{ is non-singular}\}.\] By convention, the empty matrix is considered to be non-singular. Then \(\Gamma\) is a greedoid. Any greedoid which is isomorphic to a greedoid arising from a binary matrix in this way is called a _binary greedoid_. The binary greedoid of a binary matrix \(M\) is denoted by \(\Gamma(M)\). **Example 9**.: Let \[M=\begin{pmatrix}1&0&0&1\\ 0&0&1&1\\ 0&1&1&1\end{pmatrix},\] with columns labelled \(1\), \(2\), \(3\), \(4\) from left to right. The binary greedoid \(\Gamma(M)\) has ground set \(\{1,2,3,4\}\) and feasible sets \[\{\emptyset,\{1\},\{4\},\{1,3\},\{1,4\},\{3,4\},\{1,2,3\},\{1,2,4\},\{2,3,4\}\}.\] The following lemma is clear. **Lemma 10**.: _Let \(E=[n]\), let \(M\) be an \(m\times n\) binary matrix with columns labelled by \(E\) and let \(M^{\prime}\) be obtained from \(M\) by adding row \(i\) to row \(j\), where \(i<j\). Then \(\Gamma(M^{\prime})\cong\Gamma(M)\)._ A consequence of this lemma is that if \(\Gamma\) is a binary greedoid, then there is a binary matrix \(M\) with linearly independent rows so that \(\Gamma=\Gamma(M)\). With this in mind we easily obtain the following result which will be useful later. **Lemma 11**.: _Let \(\Gamma\) be a binary greedoid.
Then there is a binary matroid \(M\) so that \(\mathcal{B}(M)=\mathcal{B}(\Gamma)\)._ In contrast with the situation in matroids, where every graphic matroid is binary, it is not the case that every branching greedoid is binary. For example, take \(G\) to be the star with four vertices in which the central vertex is the root. Then \(\Gamma(G)\) is not binary. The same example but with the edges directed away from the root demonstrates that not every directed branching greedoid is binary. An element of a greedoid is a _loop_ if it does not belong to any feasible set. So if \(G\) is a rooted graph then an edge \(e\) is a loop of \(\Gamma(G)\) if it does not lie on any path from the root and if \(G\) is connected then it is just a loop in the normal graph-theoretic sense. Similarly if \(D\) is a directed rooted graph then an edge \(e\) is a loop of \(\Gamma(D)\) if it does not lie on any directed path from the root. As the concepts of loops in greedoids and in rooted graphs and digraphs do not completely coincide, we use the term _greedoid loop_ whenever there is potential for confusion. Let \(\Gamma\) be a greedoid with ground set \(E\) and rank function \(\rho\). Elements \(e\) and \(f\) of \(E\) are said to be _parallel_ in \(\Gamma\) if for all subsets \(A\) of \(E\), \[\rho(A\cup e)=\rho(A\cup f)=\rho(A\cup e\cup f).\] As far as we are aware, the following elementary lemma does not seem to have been stated before. **Lemma 12**.: _Let \(\Gamma\) be a greedoid. Define a relation \(\bowtie\) on the ground set of \(\Gamma\) by \(e\bowtie f\) if \(e\) and \(f\) are parallel in \(\Gamma\). Then \(\bowtie\) is an equivalence relation and if \(\Gamma\) has at least one loop, then one of the equivalence classes of \(\bowtie\) comprises the set of loops._ Proof.: The only part of the lemma that is not immediately obvious is that \(\bowtie\) is transitive. Let \(\rho\) be the rank function of \(\Gamma\) and \(e\), \(f\) and \(g\) be elements of \(\Gamma\), so that \(e\bowtie f\) and \(f\bowtie g\). Then for any subset \(A\) of elements of \(\Gamma\), we have \(\rho(A\cup e)=\rho(A\cup f)=\rho(A\cup e\cup f)\) and \(\rho(A\cup f)=\rho(A\cup g)=\rho(A\cup f\cup g)\). Thus \(\rho(A\cup e)=\rho(A\cup g)\). By applying Lemma 8 to \(A\cup f\) and elements \(e\) and \(g\), we see that \(\rho(A\cup e\cup f\cup g)=\rho(A\cup f)\). Thus, by (GR2), \(\rho(A\cup f)=\rho(A\cup e\cup f\cup g)\geq\rho(A\cup e\cup g)\geq\rho(A\cup e)\). But as \(\rho(A\cup e)=\rho(A\cup f)\), equality must hold throughout, so \(\rho(A\cup e\cup g)=\rho(A\cup e)=\rho(A\cup g)\), as required. ### Complexity We assume some familiarity with computational complexity and refer the reader to one of the standard texts such as [15] or [34] for more background. Given two computational problems \(\pi_{1}\) and \(\pi_{2}\), we say that \(\pi_{2}\) is _Turing reducible_ to \(\pi_{1}\) if there exists a deterministic Turing machine solving \(\pi_{2}\) in polynomial time using an oracle for \(\pi_{1}\), that is a subroutine returning an answer to an instance of \(\pi_{1}\) in constant-time. When \(\pi_{2}\) is Turing reducible to \(\pi_{1}\) we write \(\pi_{2}\propto_{T}\pi_{1}\) and we say that solving problem \(\pi_{1}\) is at least as hard as solving problem \(\pi_{2}\). The relation \(\propto_{T}\) is transitive. Informally, the class #P is the counting analogue of NP, that is, the class of all counting problems corresponding to decision problems in NP. 
Slightly more precisely, a problem is in #P if it counts the number of accepting computations or "witnesses" of a problem in NP. Consider the decision problem of determining whether a graph has a proper vertex 3-colouring. The obvious non-deterministic algorithm for this problem interprets a "witness" as a colouring of the vertices with 3 colours and verifies that it is a proper colouring. So the corresponding problem in #P would be to determine the number of proper vertex 3-colourings. A computational problem \(\pi\) is said to be #_P-hard_ if \(\pi^{\prime}\propto_{T}\pi\) for all \(\pi^{\prime}\in\)#P, and #_P-complete_ if, in addition, \(\pi\in\)#P. Counting the number of vertex 3-colourings of a graph is an example of a #P-complete problem. The following lemma is crucial in many of our proofs. **Lemma 13**.: _There is an algorithm which, when given a non-singular integer \(n\times n\) matrix \(A\) and an integer \(n\)-vector \(b\) such that the absolute value of every entry of \(A\) and \(b\) is at most \(2^{l}\), outputs the vector \(x\) so that \(Ax=b\), running in time bounded by a polynomial in \(n\) and \(l\)._ One algorithm to do this is a variant of Gaussian elimination known as the Bareiss algorithm [1]. Similar ideas were presented by Edmonds [11]. See also [22]. ## 3 The Tutte Polynomial of a Greedoid Extending the definition of the Tutte polynomial of a matroid, Gordon and McMahon defined the Tutte polynomial of a greedoid in [18]. The _Tutte polynomial_ of a greedoid \(\Gamma\) with ground set \(E\) and rank function \(\rho\) is given by \[T(\Gamma;x,y)=\sum_{A\subseteq E}(x-1)^{\rho(\Gamma)-\rho(A)}(y-1)^{|A|-\rho(A)}.\] When \(\Gamma\) is a matroid, this reduces to the usual definition of the Tutte polynomial of a matroid. For a rooted graph \(G\) we let \(T(G;x,y)=T(\Gamma(G);x,y)\), for a rooted digraph \(D\) we let \(T(D;x,y)=T(\Gamma(D);x,y)\) and for a binary matrix \(M\) we let \(T(M;x,y)=T(\Gamma(M);x,y)\). **Example 14**.: 1. Let \(P_{k}\) be the rooted (undirected) path with \(k\) edges in which the root is one of the leaves. Then \[T(P_{k};x,y)=1+\sum_{i=1}^{k}(x-1)^{i}y^{i-1}.\] 2. Let \(S_{k}\) be the rooted (undirected) star with \(k\) edges in which the root is the central vertex. Then \[T(S_{k};x,y)=x^{k}.\] The Tutte polynomial of a greedoid retains many of the properties of the Tutte polynomial of a matroid; for example, it has a deletion-contraction recurrence, although its form is not as simple as that of the Tutte polynomial of a matroid [18]. Moreover, for a greedoid \(\Gamma\): * \(T(\Gamma;1,1)\) is the number of bases of \(\Gamma\); * \(T(\Gamma;2,1)\) is the number of feasible sets of \(\Gamma\); * \(T(\Gamma;1,2)\) is the number of subsets \(A\) of elements of \(\Gamma\) so that \(\rho(A)=\rho(\Gamma)\); * \(T(\Gamma;2,2)=2^{|E(\Gamma)|}\). But the Tutte polynomial of a greedoid also differs fundamentally from the Tutte polynomial of a matroid: for instance, unlike the Tutte polynomial of a matroid, the Tutte polynomial of a greedoid can have negative coefficients. For example, \(T(\Gamma(P_{2});x,y)=x^{2}y-2xy+x+y\). The Tutte polynomial of a rooted graph has some of the same evaluations as the Tutte polynomial of an unrooted graph. Let \(G\) be a rooted graph with edge set \(E\). * \(T(G;1,1)\) is the number of spanning trees of the root component of \(G\). (When \(G\) is connected, this is just the number of spanning trees of \(G\).)
* \(T(G;2,1)\) is the number of subsets \(A\) of \(E\) so that the root component of \(G|A\) is a tree containing all the edges of \(A\). * \(T(G;1,2)\) is the number of subsets \(A\) of \(E\) so that the root component of \(G|A\) includes every vertex of the root component of \(G\). (When \(G\) is connected, this is just the number of subsets \(A\) so that \(G|A\) is connected.) * If no component of \(G\) other than the root component has edges, then \(T(G;1,0)\) is the number of acyclic orientations of \(G\) with a unique source. Otherwise \(T(G;1,0)=0\). We record the following proposition stating that the Tutte polynomial of a connected rooted graph \(G\) coincides with the Tutte polynomial of the corresponding unrooted graph \(G^{\prime}\) along the line \(x=1\). This is easy to prove by noting that \(\rho(G)=r(G^{\prime})\) and a subset \(A\) of the edges of \(G\) satisfies \(\rho(A)=\rho(G)\) if and only if \(r(A)=r(G^{\prime})\). **Proposition 15**.: _Let \(G=(V,E,r)\) be a connected rooted graph and let \(G^{\prime}=(V,E)\) be the corresponding unrooted graph. Then_ \[T(G;1,y)=T(G^{\prime};1,y).\] We list some evaluations of the Tutte polynomial of a rooted digraph. Let \(D\) be a rooted digraph with edge set \(E\) and root \(r\). * \(T(D;1,1)\) is the number of spanning arborescences of the root component of \(D\) rooted at \(r\). (When \(D\) is root-connected, this is just its number of spanning arborescences rooted at \(r\).) * \(T(D;2,1)\) is the number of subsets \(A\) of \(E\) so that the root component of \(D|A\) is an arborescence rooted at \(r\) containing every edge of \(A\). * \(T(D;1,2)\) is the number of subsets \(A\) of \(E\) so that the root component of \(D|A\) includes every vertex of the root component of \(D\). (When \(D\) is root-connected, this is just the number of subsets \(A\) so that \(D|A\) is root-connected.) * \(T(D;1,0)=1\) if \(D\) is acyclic and root-connected, and \(0\) otherwise. The last evaluation will be discussed in more detail in Section 6. Gordon and McMahon [18] proved that if \(T_{1}\) and \(T_{2}\) are rooted arborescences, then \(T(T_{1};x,y)=T(T_{2};x,y)\) if and only if \(T_{1}\cong T_{2}\). We list some evaluations of the Tutte polynomial of a binary greedoid. Let \(M\) be an \(m\times n\) binary matrix with linearly independent rows. * \(T(M;1,1)\) is the number of subsets \(A\) of the columns of \(M\) so that the submatrix of \(M\) corresponding to the columns in \(A\) is non-singular. * \(T(M;2,1)\) is the number of subsets \(A\) of the columns of \(M\) so that the submatrix \(M_{[|A|],A}\) is non-singular. * \(T(M;1,2)\) is the number of subsets \(A\) of the columns of \(M\) containing a subset \(A^{\prime}\) so that the submatrix of \(M\) corresponding to the columns in \(A^{\prime}\) is non-singular. If a point \((a,b)\) lies on the hyperbola \(H_{1}\) then we have \((a-1)(b-1)=1\) by definition. Thus the Tutte polynomial of a greedoid \(\Gamma\) evaluated at such a point is given by \[T(\Gamma;a,b) =\sum_{A\subseteq E(\Gamma)}(a-1)^{\rho(\Gamma)-\rho(A)}(b-1)^{|A|-\rho(A)}\] \[=(a-1)^{\rho(\Gamma)}\sum_{A\subseteq E(\Gamma)}\left(\frac{1}{a-1}\right)^{|A|}=(a-1)^{\rho(\Gamma)-|E(\Gamma)|}a^{|E(\Gamma)|}.\] Therefore, given \(|E(\Gamma)|\) and \(\rho(\Gamma)\), it is easy to compute \(T(\Gamma;a,b)\) in polynomial time. For all of the greedoids that we consider, both \(|E(\Gamma)|\) and \(\rho(\Gamma)\) will be either known or easily computed.
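All of the evaluations above can be checked on small examples directly from the definition of \(T\). The following sketch (ours, for illustration only; it is exponential in \(|E(\Gamma)|\)) evaluates the greedoid Tutte polynomial from an explicit list of feasible sets, and verifies both the expansion of \(T(\Gamma(P_{2});x,y)\) given earlier and the closed form on \(H_{1}\) derived above.

```python
# Brute-force evaluation of the greedoid Tutte polynomial from its feasible
# sets; exponential in |E|, intended only for checking small examples.
from itertools import combinations
from fractions import Fraction

def rank(A, feasible):
    # Greedoid rank of A: size of a largest feasible subset of A.
    return max(len(F) for F in feasible if F <= A)

def tutte(E, feasible, x, y):
    r_gamma = rank(E, feasible)
    total = Fraction(0)
    for k in range(len(E) + 1):
        for combo in combinations(sorted(E), k):
            A = set(combo)
            rA = rank(A, feasible)
            total += Fraction(x - 1) ** (r_gamma - rA) * Fraction(y - 1) ** (len(A) - rA)
    return total

# Branching greedoid of the rooted path P_2 (edge 1 meets the root, then edge 2).
E = {1, 2}
feasible = [set(), {1}, {1, 2}]
assert tutte(E, feasible, 5, 7) == 5 ** 2 * 7 - 2 * 5 * 7 + 5 + 7   # x^2 y - 2xy + x + y
a, b = 3, Fraction(3, 2)                       # (a-1)(b-1) = 1, so (a,b) lies on H_1
assert tutte(E, feasible, a, b) == Fraction(a - 1) ** (rank(E, feasible) - len(E)) * a ** len(E)
```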
The _characteristic polynomial_ of a greedoid was first introduced by Gordon and McMahon in [19] and is a generalization of the characteristic or chromatic polynomial of a matroid. For a greedoid \(\Gamma\), the _characteristic polynomial_\(p(\Gamma;\lambda)\) is defined by \[p(\Gamma;\lambda)=(-1)^{\rho(\Gamma)}T(\Gamma;1-\lambda,0). \tag{1}\] ## 4 Greedoid Constructions In this section we introduce three greedoid constructions and give expressions for the Tutte polynomial of greedoids resulting from these constructions. The first construction is just the generalization of the \(k\)-thickening operation introduced by Brylawski [7] from matroids to greedoids. Given a greedoid \(\Gamma=(E,\mathcal{F})\), its \(k\)-thickening is the greedoid \(\Gamma^{k}\) that, informally speaking, is formed from \(\Gamma\) by replacing each element by \(k\) parallel elements. More precisely, \(\Gamma^{k}\) has ground set \(E^{\prime}=E\times[k]\) and collection \(\mathcal{F}^{\prime}\) of feasible sets as follows. Define \(\mu\) to be the projection operator \(\mu:2^{E\times[k]}\to 2^{E}\) so that \(e\in\mu(A)\) if and only if \((e,i)\in A\) for some \(i\). Now a subset \(A\) is feasible in \(\Gamma^{k}\) if and only if \(\mu(A)\) is feasible in \(\Gamma\) and \(|\mu(A)|=|A|\). The latter condition ensures that \(A\) does not contain more than one element replacing a particular element of \(\Gamma\). It is clear that \(\Gamma^{k}\) is a greedoid and moreover \(\rho_{\Gamma^{k}}(A)=\rho_{\Gamma}(\mu(A))\). In particular \(\rho(\Gamma^{k})=\rho(\Gamma)\). For any element \(e\) of \(\Gamma\) the elements \((e,i)\) and \((e,j)\) are parallel. The effect of the \(k\)-thickening operation on the Tutte polynomial of a greedoid is given in the following theorem, generalizing the expression given by Brylawski [7] for the Tutte polynomial of the \(k\)-thickening of a matroid. **Theorem 16**.: _Let \(\Gamma\) be a greedoid. The Tutte polynomial of the \(k\)-thickening \(\Gamma^{k}\) of \(\Gamma\) when \(y\neq-1\) is given by_ \[T(\Gamma^{k};x,y)=(1+y+\cdots+y^{k-1})^{\rho(\Gamma)}T\left(\Gamma;\frac{x+y+\cdots+y^{k-1}}{1+y+\cdots+y^{k-1}},y^{k}\right). \tag{2}\] _When \(y=-1\) we have_ \[T(\Gamma^{k};x,-1)=\begin{cases}(x-1)^{\rho(\Gamma)}&\text{if $k$ is even;}\\ T(\Gamma;x,-1)&\text{if $k$ is odd.}\end{cases}\] Proof.: Let \(\Gamma^{k}\) be the \(k\)-thickened greedoid, let \(E^{\prime}\) denote its ground set and let \(E\) be the ground set of \(\Gamma\). Then \(E^{\prime}=E\times[k]\). Let \(\mu\) be the mapping defined in the discussion at the beginning of this section. To ensure that we do not divide by zero in our calculations, we prove the case when \(y=1\) separately. For each \(A^{\prime}\subseteq E^{\prime}\) we have \(\rho_{\Gamma^{k}}(A^{\prime})=\rho_{\Gamma}(\mu(A^{\prime}))\) and furthermore \(\rho(\Gamma^{k})=\rho(\Gamma)\).
The Tutte polynomial of \(\Gamma^{k}\) when \(y\notin\{-1,1\}\) is thus given by \[T(\Gamma^{k};x,y) =\sum_{A^{\prime}\subseteq E^{\prime}}(x-1)^{\rho(\Gamma^{k})-\rho _{\Gamma^{k}}(A^{\prime})}(y-1)^{|A^{\prime}|-\rho_{\Gamma^{k}}(A^{\prime})}\] \[=\sum_{A\subseteq E}\sum_{\begin{subarray}{c}A^{\prime}\subseteq E ^{\prime}:\\ \mu(A^{\prime})=A\end{subarray}}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(\mu(A^{ \prime}))}(y-1)^{|A^{\prime}|-\rho_{\Gamma}(\mu(A^{\prime}))} \tag{3}\] \[=\sum_{A\subseteq E}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(y-1)^{- \rho_{\Gamma}(A)}\sum_{\begin{subarray}{c}A^{\prime}\subseteq E^{\prime}:\\ \mu(A^{\prime})=A\end{subarray}}(y-1)^{|A^{\prime}|}\] \[=\sum_{A\subseteq E}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(y-1)^{- \rho_{\Gamma}(A)}(y^{k}-1)^{|A|}\] \[=(1+y+\cdots+y^{k-1})^{\rho(\Gamma)}\sum_{A\subseteq E}\left( \frac{(x-1)(y-1)}{y^{k}-1}\right)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(y^{k}-1)^{| A|-\rho_{\Gamma}(A)}\] \[=(1+y+\cdots+y^{k-1})^{\rho(\Gamma)}T\left(\Gamma;\frac{x+y+ \cdots+y^{k-1}}{1+y+\cdots+y^{k-1}},y^{k}\right).\] When \(y=1\) we get non-zero terms in Equation 3 if and only if \(|A^{\prime}|=\rho_{\Gamma}(\mu(A^{\prime}))\), which implies that \(|A^{\prime}|=|A|\). For each \(A\subseteq E\) there are \(k^{|A|}\) choices for \(A^{\prime}\) such that \(\mu(A^{\prime})=A\) and \(|A^{\prime}|=|A|\). Therefore we have \[T(\Gamma^{k};x,1) =\sum_{\begin{subarray}{c}A\subseteq E:\\ \rho_{\Gamma}(A)=|A|\end{subarray}}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}\sum_ {\begin{subarray}{c}A^{\prime}\subseteq E^{\prime}:\\ \mu(A^{\prime})=A,\\ |A^{\prime}|=|A|\end{subarray}}1=\sum_{\begin{subarray}{c}A\subseteq E:\\ \rho_{\Gamma}(A)=|A|\end{subarray}}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}k^{ \rho_{\Gamma}(A)}\] \[=\sum_{\begin{subarray}{c}A\subseteq E:\\ \rho_{\Gamma}(A)=|A|\end{subarray}}\left(\frac{x-1}{k}\right)^{\rho(\Gamma)- \rho_{\Gamma}(A)}k^{\rho(\Gamma)}=k^{\rho(\Gamma)}T\left(\Gamma;\frac{x+k-1}{k },1\right)\] which agrees with Equation 2 when \(y=1\). When \(y=-1\) we have \[T(\Gamma^{k};x,-1) =\sum_{A\subseteq E}\sum_{\begin{subarray}{c}A^{\prime}\subseteq E ^{\prime}:\\ \mu(A^{\prime})=A\end{subarray}}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(\mu(A^{ \prime}))}(-2)^{|A^{\prime}|-\rho_{\Gamma}(\mu(A^{\prime}))}\] \[=\sum_{A\subseteq E}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(-2)^{- \rho_{\Gamma}(A)}\sum_{\begin{subarray}{c}A^{\prime}\subseteq E^{\prime}:\\ \mu(A^{\prime})=A\end{subarray}}(-2)^{|A^{\prime}|}\] \[=\sum_{A\subseteq E}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(-2)^{- \rho_{\Gamma}(A)}((-1)^{k}-1)^{|A|}\] \[=\left\{\begin{array}{ll}(x-1)^{\rho(\Gamma)}&\text{if $k$ is even;}\\ T(\Gamma;x,-1)&\text{if $k$ is odd.}\end{array}\right.\] Note that the only contribution to \(T(\Gamma^{k};x,-1)\) when \(k\) is even is from the empty set. The second construction is a little more involved. To motivate it we first describe a natural construction operation on rooted graphs. Let \(G\) and \(H\) be disjoint rooted graphs with \(G\) being connected. Then the \(H\)_-attachment_ of \(G\), denoted by \(G\sim H\), is formed by taking \(G\) and \(\rho(G)\) disjoint copies of \(H\), and identifying each vertex of \(G\) other than the root with the root vertex of one of the copies of \(H\). The root of \(G\sim H\) is the root of \(G\). See Figure 1 for an illustration of the attachment operation. 
Suppose that \(V(G)=\{r,v_{1},\ldots,v_{\rho(G)}\}\), where \(r\) is the root of \(G\), let \(E_{0}\) be the edge set of \(G\) and let \(E_{i}\) be the edge set of the copy of \(H\) attached at \(v_{i}\). A set \(F\) is feasible in \(\Gamma(G\sim H)\) if and only if each of the following conditions holds. 1. \(F\cap E_{0}\) is feasible in \(\Gamma(G)\). 2. For all \(i\) with \(1\leq i\leq\rho(G)\), \(F\cap E_{i}\) is feasible in \(\Gamma(H)\). 3. For all \(i\) with \(1\leq i\leq\rho(G)\), if \(v_{i}\) is not in the root component of \(G|(F\cap E_{0})\), then \(F\cap E_{i}=\emptyset\). In order to extend these ideas to general greedoids, we begin by describing the notion of a closed set, which was first defined for greedoids by Korte and Lovasz [26]. Let \(\Gamma\) be a greedoid with ground set \(E\) and rank function \(\rho\). Given a subset \(A\) of \(E\), its _closure_\(\sigma_{\Gamma}(A)\) is defined by \(\sigma_{\Gamma}(A)=\{e:\rho(A\cup e)=\rho(A)\}\). We will drop the dependence on \(\Gamma\) whenever the context is clear. Note that it follows from the definition that \(A\subseteq\sigma(A)\). Moreover Lemma 8 implies that \(\rho(\sigma(A))=\rho(A)\). Furthermore if \(e\notin\sigma(A)\), then \(\rho(A\cup e)>\rho(A)\), so axiom (GR2) implies that \(\rho(\sigma(A)\cup e)>\rho(\sigma(A))\) and hence \(\sigma(\sigma(A))=\sigma(A)\). A subset \(A\) of \(E\) satisfying \(A=\sigma(A)\) is said to be _closed_. Every subset of \(E\) of the form \(\sigma(X)\) for some \(X\) is closed. We now introduce what we call an attachment function. Let \(\Gamma\) be a greedoid with rank function \(\rho\). A function \(f:\mathcal{F}\to 2^{[\rho(\Gamma)]}\) is called a \(\Gamma\)_-attachment function_ if it satisfies both of the following. 1. For each feasible set \(F\), we have \(|f(F)|=\rho(F)\). 2. If \(F_{1}\) and \(F_{2}\) are feasible sets and \(F_{1}\subseteq\sigma(F_{2})\) then \(f(F_{1})\subseteq f(F_{2})\). Figure 1: An example of the attachment operation. The following property of attachment functions is needed later. **Lemma 17**.: _Let \(\Gamma\) be a greedoid and \(f\) be a \(\Gamma\)-attachment function. Let \(A\) be a subset of the elements of \(\Gamma\) and let \(F_{1}\) and \(F_{2}\) be maximal feasible subsets of \(A\). Then \(f(F_{1})=f(F_{2})\)._ Proof.: It follows from the axioms for the feasible sets of a greedoid that all maximal feasible subsets of \(A\) have the same size. Thus \(\rho(F_{1})=\rho(F_{2})=\rho(A)\). For every element \(e\) of \(A\), \(\rho(F_{1})\leq\rho(F_{1}\cup e)\leq\rho(A)\). As \(\rho(F_{1})=\rho(A)\), equality must hold throughout. Thus \(e\in\sigma(F_{1})\). Hence \(A\subseteq\sigma(F_{1})\), so \(F_{2}\subseteq\sigma(F_{1})\). By symmetry, \(F_{1}\subseteq\sigma(F_{2})\). The result then follows from the second condition satisfied by a \(\Gamma\)-attachment function. Given greedoids \(\Gamma_{1}\) and \(\Gamma_{2}\) with disjoint ground sets, and \(\Gamma_{1}\)-attachment function \(f\), we define the _\(\Gamma_{2}\)-attachment of \(\Gamma_{1}\)_, denoted by \(\Gamma_{1}\sim_{f}\Gamma_{2}\) as follows. The ground set \(E\) is the union of the ground set \(E_{0}\) of \(\Gamma_{1}\) together with \(\rho=\rho(\Gamma_{1})\) disjoint copies \(E_{1},\ldots,E_{\rho}\) of the ground set of \(\Gamma_{2}\). In the following we abuse notation slightly by saying that for \(i>0\), a subset of \(E_{i}\) is feasible in \(\Gamma_{2}\) if the corresponding subset of the elements of \(\Gamma_{2}\) is feasible. 
A subset \(F\) of \(E\) is feasible if and only if each of the following conditions holds. 1. \(F\cap E_{0}\) is feasible in \(\Gamma_{1}\). 2. For all \(i\) with \(1\leq i\leq\rho\), \(F\cap E_{i}\) is feasible in \(\Gamma_{2}\). 3. For all \(i\) with \(1\leq i\leq\rho\), if \(i\notin f(F\cap E_{0})\) then \(F\cap E_{i}=\emptyset\). **Proposition 18**.: _For any greedoids \(\Gamma_{1}\) and \(\Gamma_{2}\), and \(\Gamma_{1}\)-attachment function \(f\), the \(\Gamma_{2}\)-attachment of \(\Gamma_{1}\) is a greedoid._ Proof.: We use the notation defined above to describe the ground set of \(\Gamma_{1}\sim_{f}\Gamma_{2}\). Clearly the empty set is feasible in \(\Gamma_{1}\sim_{f}\Gamma_{2}\). Suppose that \(F_{1}\) and \(F_{2}\) are feasible sets in \(\Gamma_{1}\sim_{f}\Gamma_{2}\) with \(|F_{2}|>|F_{1}|\). If there is an element \(e\) of \(F_{2}\cap E_{0}\) which is not in \(\sigma_{\Gamma_{1}}(F_{1}\cap E_{0})\) then \((F_{1}\cap E_{0})\cup e\) is feasible in \(\Gamma_{1}\). Moreover \(F_{1}\cap E_{0}\subseteq\sigma_{\Gamma_{1}}((F_{1}\cap E_{0})\cup e)\), so \(f(F_{1}\cap E_{0})\subseteq f((F_{1}\cap E_{0})\cup e)\). Consequently \(F_{1}\cup e\) is feasible in \(\Gamma_{1}\sim_{f}\Gamma_{2}\). On the other hand, suppose that \(F_{2}\cap E_{0}\subseteq\sigma_{\Gamma_{1}}(F_{1}\cap E_{0})\). Then \(f(F_{2}\cap E_{0})\subseteq f(F_{1}\cap E_{0})\). Moreover, as there is no element \(e\) of \((F_{2}\cap E_{0})-(F_{1}\cap E_{0})\) such that \((F_{1}\cap E_{0})\cup e\) is feasible, we have \(|F_{2}\cap E_{0}|\leq|F_{1}\cap E_{0}|\). So for some \(i\) in \(f(F_{2}\cap E_{0})\), we have \(|F_{2}\cap E_{i}|>|F_{1}\cap E_{i}|\). Thus there exists \(e\in(F_{2}-F_{1})\cap E_{i}\) such that \((F_{1}\cap E_{i})\cup e\) is feasible in \(\Gamma_{2}\). As \(i\in f(F_{2}\cap E_{0})\), we have \(i\in f(F_{1}\cap E_{0})\). Hence \(F_{1}\cup e\) is feasible in \(\Gamma_{1}\sim_{f}\Gamma_{2}\). Every greedoid \(\Gamma\) has an attachment function formed by setting \(f(F)=[|F|]\) for each feasible set \(F\). However there are other examples of attachment functions. Let \(G\) be a connected rooted graph in which the vertices other than the root are labelled \(v_{1},\ldots,v_{\rho}\). There is an attachment function \(f\) defined on \(\Gamma(G)\) as follows. For every feasible set \(F\), define \(f(F)\) so that \(i\in f(F)\) if and only if \(v_{i}\) is in the root component of \(G|F\). It is straightforward to verify that \(f\) is indeed an attachment function. Furthermore if \(H\) is another rooted graph then \(\Gamma(G\sim H)=\Gamma(G)\sim_{f}\Gamma(H)\). We now consider the rank function of \(\Gamma=\Gamma_{1}\sim_{f}\Gamma_{2}\). We keep the same notation as above for the elements of \(\Gamma\). Let \(A\) be a subset of \(E(\Gamma)\) and let \(F\) be a maximal feasible subset of \(A\cap E_{0}\). Then \[\rho_{\Gamma}(A)=\rho_{\Gamma_{1}}(A\cap E_{0})+\sum_{i\in f(F)}\rho_{\Gamma_{ 2}}(A\cap E_{i}). \tag{4}\] Observe that the number of subsets of \(E(\Gamma)\) with specified rank, size and intersection with \(E_{0}\) does not depend on the choice of \(f\). Consequently the Tutte polynomial of \(\Gamma_{1}\sim_{f}\Gamma_{2}\) does not depend on \(f\). We now make this idea more precise by establishing an expression for the Tutte polynomial of an attachment. **Theorem 19**.: _Let \(\Gamma_{1}\) and \(\Gamma_{2}\) be greedoids, and let \(f\) be an attachment function for \(\Gamma_{1}\). 
Then the Tutte polynomial of \(\Gamma_{1}\sim_{f}\Gamma_{2}\) is given by_ \[T(\Gamma_{1}\sim_{f}\Gamma_{2};x,y)=T(\Gamma_{2};x,y)^{\rho(\Gamma_{1})}T \Big{(}\Gamma_{1};\frac{(x-1)^{\rho(\Gamma_{2})+1}y^{|E(\Gamma_{2})|}}{T( \Gamma_{2};x,y)}+1,y\Big{)},\] _providing \(T(\Gamma_{2};x,y)\neq 0\)._ Proof.: Let \(\Gamma=\Gamma_{1}\sim_{f}\Gamma_{2}\). We use the notation defined above to describe the ground set of \(\Gamma\). It is useful to extend the definition of the attachment function \(f\) to all subsets of \(E_{0}\) by setting \(f(A)\) to be equal to \(f(F)\) where \(F\) is a maximal feasible set of \(A\). Lemma 17 ensures that extending \(f\) in this way is well-defined. It follows from Equation 4 that \(\rho(\Gamma)=\rho(\Gamma_{1})(\rho(\Gamma_{2})+1)\). We have \[T(\Gamma;x,y) =\sum_{A\subseteq E(\Gamma)}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A) }(y-1)^{|A|-\rho(A)}\] \[=\sum_{A_{0}\subseteq E_{0}}(x-1)^{\rho(\Gamma_{1})-\rho_{\Gamma _{1}}(A_{0})}(y-1)^{|A_{0}|-\rho_{\Gamma_{1}}(A_{0})}\cdot\prod_{i\notin f(A_{ 0})}\sum_{A_{i}\subseteq E_{i}}(x-1)^{\rho(\Gamma_{2})}(y-1)^{|A_{i}|}\] \[\quad\cdot\prod_{i\in f(A_{0})}\sum_{A_{i}\subseteq E_{i}}(x-1)^{ \rho(\Gamma_{2})-\rho_{\Gamma_{2}}(A_{i})}(y-1)^{|A_{i}|-\rho_{\Gamma_{2}}(A_ {i})}\] \[=\sum_{A_{0}\subseteq E_{0}}(x-1)^{\rho(\Gamma_{1})-\rho_{\Gamma _{1}}(A_{0})}(T(\Gamma_{2};x,y))^{\rho_{\Gamma_{1}}(A_{0})}\] \[\quad\cdot\big{(}(x-1)^{\rho(\Gamma_{2})}y^{|E(\Gamma_{2})|} \big{)}^{\rho(\Gamma_{1})-\rho_{\Gamma_{1}}(A_{0})}(y-1)^{|A_{0}|-\rho_{\Gamma _{1}}(A_{0})}\] \[=(T(\Gamma_{2};x,y))^{\rho(\Gamma_{1})}\sum_{A_{0}\subseteq E_{0} }(y-1)^{|A_{0}|-\rho_{\Gamma_{1}}(A_{0})}\Big{(}\frac{(x-1)^{\rho(\Gamma_{2})+ 1}y^{|E(\Gamma_{2})|}}{T(\Gamma_{2};x,y)}\Big{)}^{\rho(\Gamma_{1})-\rho_{ \Gamma_{1}}(A_{0})}\] \[=T(\Gamma_{2};x,y)^{\rho(\Gamma_{1})}T\Big{(}\Gamma_{1};\frac{(x- 1)^{\rho(\Gamma_{2})+1}y^{|E(\Gamma_{2})|}}{T(\Gamma_{2};x,y)}+1,y\Big{)}.\] The third construction is called the full rank attachment. Given greedoids \(\Gamma_{1}=(E_{1},\mathcal{F}_{1})\) and \(\Gamma_{2}=(E_{2},\mathcal{F}_{2})\) with disjoint ground sets, the _full rank attachment of \(\Gamma_{2}\) to \(\Gamma_{1}\)_ denoted by \(\Gamma_{1}\approx\Gamma_{2}\) has ground set \(E_{1}\cup E_{2}\) and a set \(F\) of elements is feasible if either of the two following conditions holds. 1. \(F\in\mathcal{F}_{1}\); 2. \(F\cap E_{1}\in\mathcal{F}_{1}\), \(F\cap E_{2}\in\mathcal{F}_{2}\) and \(\rho_{\Gamma_{1}}(F\cap E_{1})=\rho(\Gamma_{1})\). It is straightforward to prove that \(\Gamma_{1}\approx\Gamma_{2}\) is a greedoid. Suppose that \(\Gamma=\Gamma_{1}\approx\Gamma_{2}\) and that \(A\) is a subset of \(E(\Gamma)\). Then \[\rho(A)=\begin{cases}\rho(A\cap E_{1})&\text{if }\rho(A\cap E_{1})<\rho( \Gamma_{1}),\\ \rho(A\cap E_{1})+\rho(A\cap E_{2})&\text{if }\rho(A\cap E_{1})=\rho(\Gamma_{1}). \end{cases}\] This observation enables us to prove the following identity for the Tutte polynomial. **Theorem 20**.: _Let \(\Gamma_{1}\) and \(\Gamma_{2}\) be greedoids, and let \(\Gamma=\Gamma_{1}\approx\Gamma_{2}\). Let \(E\), \(E_{1}\) and \(E_{2}\) denote the ground sets of \(\Gamma\), \(\Gamma_{1}\) and \(\Gamma_{2}\) respectively. 
Then_ \[T(\Gamma_{1}\approx\Gamma_{2};x,y)=T(\Gamma_{1};x,y)(x-1)^{\rho(\Gamma_{2})}y^{|E_{2}|}+T(\Gamma_{1};1,y)(T(\Gamma_{2};x,y)-(x-1)^{\rho(\Gamma_{2})}y^{|E_{2}|}).\] Proof.: We have \[T(\Gamma_{1}\approx\Gamma_{2};x,y)\] \[=\sum_{A\subseteq E}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(y-1)^{|A|-\rho_{\Gamma}(A)}\] \[=\sum_{\begin{subarray}{c}A_{1}\subseteq E_{1}:\\ \rho_{\Gamma_{1}}(A_{1})<\rho(\Gamma_{1})\end{subarray}}(x-1)^{\rho(\Gamma_{1})-\rho_{\Gamma_{1}}(A_{1})}(y-1)^{|A_{1}|-\rho_{\Gamma_{1}}(A_{1})}\sum_{A_{2}\subseteq E_{2}}(x-1)^{\rho(\Gamma_{2})}(y-1)^{|A_{2}|}\] \[\quad+\sum_{\begin{subarray}{c}A_{1}\subseteq E_{1}:\\ \rho_{\Gamma_{1}}(A_{1})=\rho(\Gamma_{1})\end{subarray}}(y-1)^{|A_{1}|-\rho_{\Gamma_{1}}(A_{1})}\sum_{A_{2}\subseteq E_{2}}(x-1)^{\rho(\Gamma_{2})-\rho_{\Gamma_{2}}(A_{2})}(y-1)^{|A_{2}|-\rho_{\Gamma_{2}}(A_{2})}\] \[=\sum_{A_{1}\subseteq E_{1}}(x-1)^{\rho(\Gamma_{1})-\rho_{\Gamma_{1}}(A_{1})}(y-1)^{|A_{1}|-\rho_{\Gamma_{1}}(A_{1})}(x-1)^{\rho(\Gamma_{2})}y^{|E_{2}|}\] \[\quad+\sum_{\begin{subarray}{c}A_{1}\subseteq E_{1}:\\ \rho_{\Gamma_{1}}(A_{1})=\rho(\Gamma_{1})\end{subarray}}(y-1)^{|A_{1}|-\rho_{\Gamma_{1}}(A_{1})}\] \[\quad\cdot\Big{(}\sum_{A_{2}\subseteq E_{2}}(x-1)^{\rho(\Gamma_{2})-\rho_{\Gamma_{2}}(A_{2})}(y-1)^{|A_{2}|-\rho_{\Gamma_{2}}(A_{2})}-(x-1)^{\rho(\Gamma_{2})}y^{|E_{2}|}\Big{)}\] \[=T(\Gamma_{1};x,y)(x-1)^{\rho(\Gamma_{2})}y^{|E_{2}|}+T(\Gamma_{1};1,y)\big{(}T(\Gamma_{2};x,y)-(x-1)^{\rho(\Gamma_{2})}y^{|E_{2}|}\big{)}.\] This construction will be useful later in Section 7 when \(\Gamma_{1}\) and \(\Gamma_{2}\) are binary greedoids with \(\Gamma_{1}=\Gamma(M_{1})\) and \(\Gamma_{2}=\Gamma(M_{2})\), where \(M_{1}\) has full row rank. Then \(\Gamma_{1}\approx\Gamma_{2}=\Gamma(M)\) where \(M\) has the form \[M=\left(\begin{array}{c|c}M_{1}&0\\ \hline 0&M_{2}\end{array}\right).\] ## 5 Rooted Graphs Throughout the remainder of the paper we focus on three computational problems. Let \(\mathbb{G}\) denote either the class of branching greedoids, directed branching greedoids or binary greedoids. Our first problem is computing all the coefficients of the Tutte polynomial for a greedoid in the class \(\mathbb{G}\). \(\pi_{1}[\mathbb{G}]\) : #Rooted Tutte Polynomial **Input:**\(\Gamma\in\mathbb{G}\). **Output:** The coefficients of \(T(\Gamma;x,y)\). The second problem involves computing the Tutte polynomial along a plane algebraic curve \(L\). We restrict our attention to the case where \(L\) is a rational curve given by the parametric equations \[x(t)=\frac{p(t)}{q(t)}\quad\text{ and }\quad y(t)=\frac{r(t)}{s(t)},\] where \(p\), \(q\), \(r\) and \(s\) are polynomials over \(\mathbb{Q}\). More precisely, we compute the coefficients of the one-variable polynomial obtained by restricting \(T\) to the curve \(L\). \(\pi_{2}[\mathbb{G},L]\) : #Rooted Tutte Polynomial Along \(L\) **Input:**\(\Gamma\in\mathbb{G}\). **Output:** The coefficients of the rational function of \(t\) given by evaluating \(T(\Gamma;x(t),y(t))\) along \(L\). Most of the time, \(L\) will be one of the hyperbolae \(H_{\alpha}\). We will frequently make a slight abuse of notation by writing \(L=H_{\alpha}\). The final problem is the evaluation of the Tutte polynomial at a fixed rational point \((a,b)\). \(\pi_{3}[\mathbb{G},a,b]\) : #Rooted Tutte Polynomial At \((a,b)\) **Input:**\(\Gamma\in\mathbb{G}\). **Output:**\(T(\Gamma;a,b)\).
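The hardness arguments developed below repeatedly use the same device to pass from \(\pi_{3}\) to \(\pi_{2}\): evaluate the polynomial at sufficiently many points (obtained via constructions such as thickenings and stretches) and recover its coefficients by solving a linear system exactly, as permitted by Lemma 13. A minimal sketch of this interpolation step is given below (ours, using plain rational Gaussian elimination rather than the fraction-free Bareiss algorithm cited in Lemma 13).

```python
# Recover the coefficients of a degree-d polynomial from d+1 exact evaluations
# by solving the Vandermonde system over the rationals (no rounding error).
# This mirrors how Lemma 13 is invoked in the reductions from pi_3 to pi_2.
from fractions import Fraction

def interpolate(points):
    """points: list of (k, p(k)) pairs with distinct k; returns [a_0, ..., a_d]."""
    d = len(points) - 1
    A = [[Fraction(k) ** j for j in range(d + 1)] for k, _ in points]
    b = [Fraction(v) for _, v in points]
    for col in range(d + 1):                      # forward elimination
        pivot = next(r for r in range(col, d + 1) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, d + 1):
            factor = A[r][col] / A[col][col]
            b[r] -= factor * b[col]
            for c in range(col, d + 1):
                A[r][c] -= factor * A[col][c]
    coeffs = [Fraction(0)] * (d + 1)
    for r in range(d, -1, -1):                    # back substitution
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, d + 1))) / A[r][r]
    return coeffs

# Example: recover p(k) = 3k^2 - k + 2 from its values at k = 2, 3, 4.
assert interpolate([(2, 12), (3, 26), (4, 46)]) == [2, -1, 3]
```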
It is straightforward to see that for each possibility for \(\mathbb{G}\), we have \[\pi_{3}[\mathbb{G},a,b]\propto_{T}\pi_{2}[\mathbb{G},H_{(a-1)(b-1)}]\propto_{ T}\pi_{1}[\mathbb{G}].\] Our results in the remainder of the paper will determine when the opposite reductions hold. In this section we prove Theorem 3. We let \(\mathcal{G}\) be the class of branching greedoids of connected, rooted, planar, bipartite graphs and take \(\mathbb{G}=\mathcal{G}\). It is, however, more convenient to take the input to each problem to be a connected, rooted, planar, bipartite graph rather than its branching greedoid. We begin by reviewing the exceptional points of Theorem 3. If a point \((a,b)\) lies on the hyperbola \(H_{1}\) then, following the remarks at the end of Section 3, \(T(G;a,b)\) is easily computed. We noted in Section 3 that for a connected rooted graph \(G\), \(T(G;1,1)\) is equal to the number of spanning trees of \(G\). That this can be evaluated in polynomial time follows from Kirchhoff's Matrix-Tree theorem [25]. Hence there are polynomial time algorithms to evaluate the Tutte polynomial of a connected rooted graph at \((1,1)\) and at any point lying on \(H_{1}\). It is easy to extend this to all rooted graphs because every edge belonging to a component that does not include the root is a loop in the corresponding branching greedoid. We will now review the hard points of Theorem 3. A key step in establishing the hardness part of Theorem 3 for points lying on the line \(y=1\) is to strengthen a result of Jerrum [24]. Given an unrooted graph \(G=(V,E)\), a _subtree_ of \(G\) is a subgraph of \(G\) which is a tree. (We emphasize that the subgraph does not have to be an induced subgraph.) Jerrum [24] showed that the following problem is #P-complete. #Subtrees **Input:** Planar unrooted graph \(G\). **Output:** The number of subtrees of \(G\). Consider the restriction of this problem to bipartite planar graphs. #Bisubtrees **Input:** Bipartite, planar unrooted graph \(G\). **Output:** The number of subtrees of \(G\). We shall show that #Bisubtrees is #P-complete. We say that an edge of a graph \(G\) is _external_ in a subtree \(T\) of \(G\) if it is not contained in \(E(T)\). Let \(t_{i,j}(G)\) be the number of subtrees of \(G\) with \(i\) external edges having precisely one endvertex in \(T\) and \(j\) external edges having both endvertices in \(T\). Recall that the \(k\)-stretch of an unrooted graph \(G\) is obtained by replacing each loop by a circuit with \(k\) edges and every other edge by a path of length \(k\). Let \(t(G)\) denote the number of subtrees of \(G\). **Proposition 21**.: _For every unrooted graph \(G\), the number of subtrees of the \(k\)-stretch \(G_{k}\) of \(G\) is given by_ \[t(G_{k})=\left(\sum_{i,j\geq 0}t_{i,j}(G)k^{i}{k+1\choose 2}^{j}\right)+\frac {k(k-1)|E|}{2}.\] Proof.: Let \(E(G)=\{e_{1},e_{2},\ldots,e_{m}\}\) and let \(E_{t}\) be the set of edges replacing \(e_{t}\) in \(G_{k}\) for \(1\leq t\leq m\). Thus \(E(G_{k})=\bigcup_{t=1}^{m}E_{t}\). We can think of the vertices of \(G_{k}\) as being of two types: those corresponding to the vertices of \(G\) and the extra ones added when \(G_{k}\) is formed. We construct a function \(f\) that maps every subtree \(T\) of \(G_{k}\) to a graph \(T^{\prime}\) which is either a subtree of \(G\) or an empty graph with no vertices or edges. We let \(V(T^{\prime})\) comprise all the vertices of \(V(T)\) corresponding to vertices in \(G\). 
The edge set \(E(T^{\prime})\) is defined so that \(e_{t}\in E(T^{\prime})\) if and only if \(E_{t}\subseteq E(T)\). Let \(T^{\prime}\) be a subtree of \(G\) with at least one vertex, \(i\) external edges having precisely one endvertex in \(T^{\prime}\) and \(j\) external edges having both endvertices in \(T^{\prime}\). If \(T\in f^{-1}(T^{\prime})\) then it must contain all of the edges in \(G_{k}\) that replace the edges in \(E(T^{\prime})\). Suppose there is an edge \(e=v_{1}v_{2}\) in \(G\) that is external in \(T^{\prime}\) with \(v_{1}\in V(T^{\prime})\) and \(v_{2}\notin V(T^{\prime})\). Then there are \(k\) possibilities for the subset of \(E_{t}\) appearing in \(T\). Now suppose there exists an edge \(e_{t}=v_{1}v_{2}\) in \(G\) that is external in \(T^{\prime}\) with \(v_{1},v_{2}\in V^{\prime}\). Then there are \(\binom{k+1}{2}\) choices for the subset of \(E_{t}\) appearing in \(T\). Therefore, \[|f^{-1}(T^{\prime}_{i,j})|=k^{i}\binom{k+1}{2}^{j}.\] It remains to count the subtrees of \(G_{k}\) mapped by \(f\) to a graph with no vertices. Such a subtree does not contain any vertices corresponding to vertices in \(G\). There are \((k-1)|E(G)|\) subtrees of \(G_{k}\) comprising a single vertex not in \(V(G)\) and no edges, and \(\binom{k-1}{2}|E(G)|\) subtrees of \(G_{k}\) with at least one edge but not containing any vertex in \(V(G)\). Hence \[t(G_{k})=\left(\sum_{i,j\geq 0}t_{i,j}(G)k^{i}\binom{k+1}{2}^{j}\right)+\frac {k(k-1)}{2}|E(G)|.\] We can now show that Bisubtrees is #P-complete. **Proposition 22**.: _The problem Bisubtrees is #P-complete._ Proof.: It is clear that Bisubtrees belongs to #P. To establish hardness, first note that \(G_{2},\ldots,G_{4|E(G)|+2}\) are all bipartite and may be constructed from \(G\) in polynomial time. We have \(\max_{i,j\geq 0}\{i+2j:t_{i,j}(G)>0\}\leq\max_{i,j\geq 0}\{i+2j:i+j\leq|E(G)| \}=2|E(G)|\). Therefore, by Proposition 21, \(t(G_{k})\) is a polynomial in \(k\) of degree at most \(2|E(G)|\). So we can write \[t(G_{k})=\sum_{p=0}^{2|E(G)|}a_{p}k^{p}.\] Thus, if we compute \(t(G_{k})\) for \(k=2,\ldots,4|E(G)|+2\), then we can apply Lemma 13 to recover \(a_{i}\) for all \(i\) and then determine \(t(G)=t(G_{1})\) in polynomial time. Therefore we have shown that Subtrees \(\propto_{T}\) Bisubtrees. We now present three propositions which together show that at most fixed rational points \((a,b)\), evaluating the Tutte polynomial of a connected, bipartite, planar, rooted graph at \((a,b)\) is just as hard as evaluating it along the curve \(H_{(a-1)(b-1)}\). The \(k\)-thickening operation is crucial. Notice that \(\Gamma(G^{k})\cong(\Gamma(G))^{k}\), so we may apply Theorem 16 to obtain an expression for \(T(G^{k})\). The first proposition deals with the case when \(a\neq 1\) and \(b\notin\{-1,0,1\}\). **Proposition 23**.: _Let \(L=H_{\alpha}\) for some \(\alpha\in\mathbb{Q}-\{0\}\). Let \((a,b)\) be a point on \(L\) such that \(b\notin\{-1,0\}\). Then_ \[\pi_{2}[\mathcal{G},L]\propto_{T}\pi_{3}[\mathcal{G},a,b].\] Proof.: For a point \((x,y)\) on \(L\) we have \(y\neq 1\). Therefore \(z=y-1\neq 0\) and so \(\alpha/z=x-1\). Let \(G\) be in \(\mathcal{G}\). Along \(L\) the Tutte polynomial of \(G\) has the form \[T(G;x,y)=T(G;1+\alpha/z,1+z)=\sum_{A\subseteq E(G)}\left(\frac{\alpha}{z} \right)^{\rho(G)-\rho(A)}z^{|A|-\rho(A)}=\sum_{i=-\rho(G)}^{|E(G)|}t_{i}z^{i},\] for some \(t_{-\rho(G)},\ldots,t_{|E(G)|}\). 
We now show that we can determine all of the coefficients \(t_{i}\) from the evaluations \(T(G^{k};a,b)\) for \(k=1,\ldots,|E(G)|+\rho(G)+1\) in time polynomial in \(|E(G)|\). For each such \(k\), \(G^{k}\) may be constructed from \(G\) in time polynomial in \(|E(G)|\) and is bipartite, planar and connected. By Theorem 16, we have
\[T(G^{k};a,b)=(1+b+\ldots+b^{k-1})^{\rho(G)}T\left(G;\frac{a+b+\ldots+b^{k-1}}{1+b+\ldots+b^{k-1}},b^{k}\right).\]
Since \(b\neq-1\), we have \(1+b+\ldots+b^{k-1}\neq 0\). Therefore we may compute
\[T\left(G;\frac{a+b+\ldots+b^{k-1}}{1+b+\ldots+b^{k-1}},b^{k}\right)\]
from \(T(G^{k};a,b)\). The point \(\left(\frac{a+b+\ldots+b^{k-1}}{1+b+\ldots+b^{k-1}},b^{k}\right)\) will also be on the curve \(L\) since
\[\left(\frac{a+b+\ldots+b^{k-1}}{1+b+\ldots+b^{k-1}}-1\right)(b^{k}-1)=(a-1)(b-1).\]
As \(b\notin\{-1,0,1\}\), for \(k=1,2,\ldots,|E(G)|+\rho(G)+1\), the points \(\left(\frac{a+b+\ldots+b^{k-1}}{1+b+\ldots+b^{k-1}},b^{k}\right)\) are pairwise distinct. Therefore by evaluating \(T(G^{k};a,b)\) for \(k=1,\ldots,|E(G)|+\rho(G)+1\), we obtain \(\sum_{i=-\rho(G)}^{|E(G)|}t_{i}z^{i}\) for \(|E(G)|+\rho(G)+1\) distinct values of \(z\). This gives us \(|E(G)|+\rho(G)+1\) linear equations for the coefficients \(t_{i}\). The matrix of the equations is a Vandermonde matrix and clearly non-singular. So, we may apply Lemma 13 to compute \(t_{i}\) for all \(i\) in time polynomial in \(|E(G)|\).

The next proposition deals with the case when \(a=1\). Recall \(H_{0}^{x}=\{(1,y):y\in\mathbb{Q}\}\) and \(H_{0}^{y}=\{(x,1):x\in\mathbb{Q}\}\).

**Proposition 24**.: _Let \(L=H_{0}^{x}\) and let \(b\in\mathbb{Q}-\{-1,0,1\}\). Then_
\[\pi_{2}[\mathcal{G},L]\propto_{T}\pi_{3}[\mathcal{G},1,b].\]

Proof.: Let \(G\) be in \(\mathcal{G}\). Along \(L\) the Tutte polynomial of \(G\) has the form
\[T(G;1,y)=\sum_{\begin{subarray}{c}A\subseteq E(G):\\ \rho(A)=\rho(G)\end{subarray}}(y-1)^{|A|-\rho(G)}=\sum_{i=-\rho(G)}^{|E(G)|}t_{i}y^{i},\]
for some \(t_{-\rho(G)},\ldots,t_{|E(G)|}\). The proof now follows in a similar way to that of Proposition 23 by computing \(T(G^{k};1,b)\) for \(k=1,\ldots,|E(G)|+\rho(G)+1\) and then determining each coefficient \(t_{i}\) in time polynomial in \(|E(G)|\).

The following proposition deals with the case when \(b=1\).

**Proposition 25**.: _Let \(L=H_{0}^{y}\) and \(a\in\mathbb{Q}-\{1\}\). Then_
\[\pi_{2}[\mathcal{G},L]\propto_{T}\pi_{3}[\mathcal{G},a,1].\]

Proof.: Let \(G\) be in \(\mathcal{G}\). Along \(L\) the Tutte polynomial of \(G\) has the form
\[T(G;x,1)=\sum_{\begin{subarray}{c}A\subseteq E(G):\\ \rho(A)=|A|\end{subarray}}(x-1)^{\rho(G)-\rho(A)}=\sum_{i=0}^{\rho(G)}t_{i}x^{i},\]
for some \(t_{0},\ldots,t_{\rho(G)}\). We now show that we can determine all of the coefficients \(t_{i}\) from the evaluations \(T(G^{k};a,1)\) for \(k=1,\ldots,\rho(G)+1\) in time polynomial in \(|E(G)|\). For each such \(k\), \(G^{k}\) may be constructed from \(G\) in time polynomial in \(|E(G)|\) and is bipartite, planar and connected. By Theorem 16, we have
\[T(G^{k};a,1)=k^{\rho(G)}T\left(G;\frac{a+k-1}{k},1\right).\]
Therefore we may compute \(T\left(G;\frac{a+k-1}{k},1\right)\) from \(T(G^{k};a,1)\). Clearly \(\left(\frac{a+k-1}{k},1\right)\) lies on \(H_{0}^{y}\). Since \(a\neq 1\), the points \(\left(\frac{a+k-1}{k},1\right)\) are pairwise distinct for \(k=1,2,\ldots,\rho(G)+1\). Therefore by evaluating \(T(G^{k};a,1)\) for \(k=1,\ldots,\rho(G)+1\), we obtain \(\sum_{i=0}^{\rho(G)}t_{i}x^{i}\) for \(\rho(G)+1\) distinct values of \(x\).
This gives us \(\rho(G)+1\) linear equations for the coefficients \(t_{i}\). Again the matrix of the equations is a Vandermonde matrix and clearly non-singular. So, we may apply Lemma 13 to compute \(t_{i}\) for all \(i\) in time polynomial in \(|E(G)|\). We now summarize the three preceding propositions. **Proposition 26**.: _Let \(L\) be either \(H_{0}^{x}\), \(H_{0}^{y}\), or \(H_{\alpha}\) for \(\alpha\in\mathbb{Q}-\{0\}\). Let \((a,b)\) be a point on \(L\) such that \((a,b)\neq(1,1)\) and \(b\notin\{-1,0\}\). Then_ \[\pi_{2}[\mathcal{G},L]\propto_{T}\pi_{3}[\mathcal{G},a,b].\] We now consider the exceptional case when \(b=-1\). For reasons that will soon become apparent, we recall from Example 14 that \(T(P_{2};x,y)=x^{2}y-2xy+x+y\) and \(T(S_{k};x,y)=x^{k}\). **Proposition 27**.: _Let \(L\) be the line \(y=-1\). For \(a\notin\{\frac{1}{2},1\}\) we have_ \[\pi_{2}[\mathcal{G},L]\propto_{T}\pi_{3}[\mathcal{G},a,-1].\] Proof.: Let \(G\) be in \(\mathcal{G}\) and let \(z=x-1\). Along \(L\) the Tutte polynomial of \(G\) has the form \[T(G;x,-1)=\sum_{A\subseteq E(G)}z^{\rho(G)-\rho(A)}(-2)^{|A|-\rho(A)}=\sum_{i =0}^{\rho(G)}t_{i}z^{i}\] for some \(t_{0},\ldots,t_{\rho(G)}\). We now show that, apart from a few exceptional values of \(a\), we can determine all of the coefficients \(t_{i}\) in polynomial time from \(T(G\sim S_{k};a,-1)\), for \(k=0,1,\ldots,\rho(G)\), in time polynomial in \(|E(G)|\). For each such \(k\), \(G\sim S_{k}\) may be constructed from \(G\) in time polynomial in \(|E(G)|\) and is bipartite, planar and connected. By Theorem 19 we have \[T(G\sim S_{k};a,-1)=a^{k\rho(G)}T\left(G;\frac{(a-1)^{k+1}(-1)^{k}}{a^{k}}+1,- 1\right).\] Providing \(a\neq 0\) we may compute \(T\left(G;\frac{(a-1)^{k+1}(-1)^{k}}{a^{k}}+1,-1\right)\) from \(T(G\sim S_{k};a,-1)\). For \(a\notin\{\frac{1}{2},1\}\) the points \(\left(\frac{(a-1)^{k+1}(-1)^{k}}{a^{k}}+1,-1\right)\) are pairwise distinct for \(k=0,1,2,\ldots,\rho(G)\). Therefore by evaluating \(T(G\sim S_{k};a,-1)\) for \(k=0,1,2,\ldots,\rho(G)\) where \(a\notin\{0,\frac{1}{2},1\}\), we obtain \(\sum_{i=0}^{\rho(G)}t_{i}z^{i}\) for \(\rho(G)+1\) distinct values of \(z\). This gives us \(\rho(G)+1\) linear equations for the coefficients \(t_{i}\). Again the matrix corresponding to these equations is a Vandermonde matrix and clearly non-singular. So, we may apply Lemma 13 to compute \(t_{i}\) for all \(i\) in time polynomial in \(|E(G)|\). Hence for \(a\notin\{0,\frac{1}{2},1\}\), \(\pi_{2}[\mathcal{G},L]\propto\pi_{3}[\mathcal{G},a,-1]\). We now look at the case when \(a=0\). Note that \(T(P_{2};0,-1)=-1\). Applying Theorem 19 to \(G\) and \(P_{2}\) gives \[T(G\sim P_{2};0,-1)=(-1)^{\rho(G)}T\left(G;\frac{(-1)^{3}(-1)^{2}}{-1}+1,-1 \right)=(-1)^{\rho(G)}T(G;2,-1).\] Therefore we have the reductions \[\pi_{2}[\mathcal{G},L]\propto_{T}\pi_{3}[\mathcal{G},2,-1]\propto_{T}\pi_{3}[ \mathcal{G},0,-1].\] Since the Turing reduction relation is transitive, this implies that evaluating the Tutte polynomial at the point \((0,-1)\) is at least as hard as evaluating it along the line \(y=-1\). This completes the proof. We now begin to classify the complexity of \(\pi_{3}\). The next results will establish hardness for a few special cases, namely when \(b\in\{-1,0,1\}\). **Proposition 28**.: _The problem \(\pi_{3}[\mathcal{G},1,b]\) is \(\#\)P-hard apart from when \(b=1\), in which case it has a polynomial time algorithm._ Proof.: The hardness part follows directly from Theorem 2 and Proposition 15. 
We have already noted the existence of a polynomial time algorithm to solve \(\pi_{3}[\mathcal{G},1,1]\).

**Proposition 29**.: _The problem \(\pi_{3}[\mathcal{G},a,-1]\) is \(\#\)P-hard apart from when \(a=1/2\), in which case it has a polynomial time algorithm._

Proof.: First note that there is a polynomial time algorithm for \(\pi_{3}[\mathcal{G},\frac{1}{2},-1]\) because \((\frac{1}{2},-1)\) lies on \(H_{1}\). Now let \(L\) be the line \(y=-1\). By Proposition 27 we have
\[\pi_{2}[\mathcal{G},L]\propto_{T}\pi_{3}[\mathcal{G},a,-1]\]
for \(a\notin\{\frac{1}{2},1\}\). So
\[\pi_{3}[\mathcal{G},1,-1]\propto_{T}\pi_{3}[\mathcal{G},a,-1]\]
for \(a\neq 1/2\). By Proposition 28 we know that \(\pi_{3}[\mathcal{G},1,-1]\) is #P-hard. So the result follows.

**Proposition 30**.: _The problem \(\pi_{3}[\mathcal{G},a,0]\) is #P-hard apart from when \(a=0\), in which case it has a polynomial time algorithm._

Proof.: Let \(G\) be in \(\mathcal{G}\). First note that evaluating the Tutte polynomial of \(G\) at the point \((0,0)\) is easy since \((0,0)\) lies on the hyperbola \(H_{1}\). The rooted graph \(G\sim S_{1}\) may be constructed from \(G\) in time polynomial in \(|E(G)|\) and is bipartite, planar and connected. Applying Theorem 19 to \(G\) and \(S_{1}\) gives
\[T(G\sim S_{1};a,0)=a^{\rho(G)}T(G;1,0).\]
Since \(a\neq 0\) we may compute \(T(G;1,0)\) from \(T(G\sim S_{1};a,0)\). Therefore \(\pi_{3}[\mathcal{G},1,0]\propto\pi_{3}[\mathcal{G},a,0]\). By Proposition 28, \(\pi_{3}[\mathcal{G},1,0]\) is #P-hard, and the result follows.

Recall from Equation 1 that along \(y=0\) the Tutte polynomial of a rooted graph specializes to the characteristic polynomial. Therefore we have the following corollary.

**Corollary 31**.: _Computing the characteristic polynomial \(p(G;k)\) of a connected rooted graph \(G\) is #P-hard for all \(k\in\mathbb{Q}-\{1\}\). When \(k=1\), there is a polynomial time algorithm._

Proof.: Let \(k\) be in \(\mathbb{Q}\). We have
\[p(G;k)=(-1)^{\rho(G)}T(G;1-k,0).\]
By Proposition 30 evaluating \(T(G;1-k,0)\) is #P-hard providing \(k\neq 1\). Furthermore when \(k=1\) we have
\[p(G;1)=(-1)^{\rho(G)}T(G;0,0)=\left\{\begin{array}{ll}1&\text{ if $G$ is edgeless;}\\ 0&\text{ otherwise,}\end{array}\right.\]
and so it is easy to compute (as expected since \((0,0)\) lies on \(H_{1}\)).

We now consider points along the line \(y=1\).

**Proposition 32**.: _The problem \(\pi_{3}[\mathcal{G},a,1]\) is #P-hard when \(a\neq 1\)._

Proof.: Let \(G\) be a connected, planar, bipartite, unrooted graph with \(V(G)=\{v_{1},\ldots,v_{n}\}\). Now for \(1\leq j\leq n\), let \(G_{j}\) be the graph in \(\mathcal{G}\) obtained from \(G\) by choosing \(v_{j}\) to be the root. Let \(\rho_{j}\) denote the rank function of \(G_{j}\) and \(a_{i}(G_{j})\) be the number of subsets \(A\) of the edges of \(G_{j}\) having size \(i\) so that the root component of \(G|A\) is a tree. Then
\[T(G_{j};x,1)=\sum_{\begin{subarray}{c}A\subseteq E:\\ \rho_{j}(A)=|A|\end{subarray}}(x-1)^{\rho(G_{j})-|A|}=\sum_{i=0}^{\rho(G_{j})}a_{i}(G_{j})(x-1)^{\rho(G_{j})-i}.\]
Let \(a_{i}(G)\) denote the number of subtrees of \(G\) with \(i\) edges. Then
\[a_{i}(G)=\sum_{j=1}^{n}\frac{a_{i}(G_{j})}{i+1}.\]
This is because every subtree \(T\) of \(G\) with \(i\) edges has \(i+1\) vertices and its edge set is one of the sets \(A\) contributing to \(a_{i}(G_{j})\) for the \(i+1\) choices of \(j\) corresponding to its vertices.
Given an oracle for \(\pi_{2}[\mathcal{G},H_{0}^{y}]\), we can compute \(a_{i}(G_{j})\) for \(i=0,\ldots,|E(G)|\) and \(1\leq j\leq n\) in time polynomial in \(|E(G)|\). So we can compute \(a_{i}(G)\) and consequently the number of subtrees of \(G\) in time polynomial in \(|E(G)|\). Thus
\[\#\text{Bisubtrees}\propto_{T}\pi_{2}[\mathcal{G},H_{0}^{y}].\]
By Proposition 26 we have
\[\#\text{Bisubtrees}\propto_{T}\pi_{2}[\mathcal{G},H_{0}^{y}]\propto_{T}\pi_{3}[\mathcal{G},a,1]\]
for \(a\neq 1\). The result now follows from Proposition 22.

We now summarize our results and prove Theorem 3.

Proof of Theorem 3.: Let \((a,b)\) be a point on \(H_{\alpha}\) for some \(\alpha\) in \(\mathbb{Q}-\{0,1\}\). By Proposition 26 we have \(\pi_{2}[\mathcal{G},H_{\alpha}]\propto_{T}\pi_{3}[\mathcal{G},a,b]\) providing \(b\notin\{-1,0\}\). The hyperbola \(H_{\alpha}\) crosses the \(x\)-axis at the point \((1-\alpha,0)\). By Proposition 30 the problem \(\pi_{3}[\mathcal{G},1-\alpha,0]\) is \(\#\)P-hard since \(\alpha\neq 1\). This gives us a \(\#\)P-hard point on each of these curves and therefore implies \(\pi_{2}[\mathcal{G},H_{\alpha}]\) is \(\#\)P-hard for \(\alpha\in\mathbb{Q}-\{0,1\}\). Hence \(\pi_{3}[\mathcal{G},a,b]\) is \(\#\)P-hard for \((a,b)\in H_{\alpha}\) with \(\alpha\in\mathbb{Q}-\{0,1\}\) and \(b\neq-1\). The rest of the proof now follows directly by Propositions 28, 29 and 32, and the discussion concerning the easy points at the beginning of the section.

## 6 Rooted Digraphs

In this section we let \(\mathbb{G}\) be the class of directed branching greedoids of root-connected rooted digraphs, a class we denote by \(\mathcal{D}\). We consider the same three problems as in the previous section. Again, it is more convenient to think of the input as being a root-connected rooted digraph rather than its directed branching greedoid. We present analogous results to those in the previous section by finding the computational complexity of evaluating the Tutte polynomial of a root-connected digraph at a fixed rational point, eventually proving Theorem 4. We begin the proof by examining the easy points.

Let \(D\) be a rooted digraph with edge set \(E\) and rank function \(\rho\). If a point \((a,b)\) lies on the hyperbola \(H_{1}\) then, following the remarks at the end of Section 3, \(T(D;a,b)\) is easily computed. We now show that evaluating \(T(D;a,0)\) is easy for all \(a\in\mathbb{Q}\). A _sink_ in a digraph is a non-isolated vertex with no outgoing edges. Suppose that \(D\) is a root-connected, rooted digraph with \(s\) sinks. Then Gordon and McMahon [20] have shown that its characteristic polynomial \(p\) satisfies the following.
\[p(D;\lambda)=\begin{cases}(-1)^{\rho(D)}(1-\lambda)^{s}&\text{if $D$ is acyclic;}\\ 0&\text{if $D$ has a directed cycle.}\end{cases}\]
Using the relation \(T(D;1-\lambda,0)=(-1)^{\rho(D)}p(D;\lambda)\) we see that
\[T(D;x,0)=\begin{cases}x^{s}&\text{if $D$ is acyclic;}\\ 0&\text{if $D$ has a directed cycle.}\end{cases}\]
It is easy to count the sinks in a digraph so the problem \(\pi_{3}[\mathcal{D},a,0]\) can be solved in polynomial time for any \(a\in\mathbb{Q}\). Every edge of a component of a rooted digraph other than the root component is a greedoid loop, so if \(D\) has such an edge then \(T(D;1-\lambda,0)=0\). Furthermore, the addition or removal of isolated vertices makes no difference to \(T(D)\). So \(T(D;a,0)\) can be computed in polynomial time for the class of all rooted digraphs.
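This observation translates directly into a short procedure. The sketch below is our own illustration (the representation of the digraph and the helper names are assumptions, not taken from the paper): it evaluates \(T(D;a,0)\) by testing for a directed cycle and counting sinks, exactly as described above.

```python
def tutte_at_y0(vertices, edges, a):
    """Evaluate T(D; a, 0) for a root-connected rooted digraph D:
    a**s if D is acyclic (s = number of sinks, i.e. non-isolated
    vertices with no outgoing edge), and 0 if D has a directed cycle."""
    out = {v: [] for v in vertices}
    for u, v in edges:
        out[u].append(v)
    # Iterative DFS cycle detection; colour 0 = unseen, 1 = on stack, 2 = finished.
    colour = {v: 0 for v in vertices}
    for start in vertices:
        if colour[start]:
            continue
        stack = [(start, iter(out[start]))]
        colour[start] = 1
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                colour[v] = 2
                stack.pop()
            elif colour[w] == 0:
                colour[w] = 1
                stack.append((w, iter(out[w])))
            elif colour[w] == 1:
                return 0  # directed cycle found
    non_isolated = {u for u, v in edges} | {v for u, v in edges}
    sinks = sum(1 for v in non_isolated if not out[v])
    return a ** sinks

# The directed star with three edges leaving the root is acyclic and has three sinks,
# so this prints 2**3 = 8, consistent with its Tutte polynomial being x^3.
print(tutte_at_y0(["r", "1", "2", "3"], [("r", "1"), ("r", "2"), ("r", "3")], 2))
```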
We noted in Section 3 that \(T(D;1,1)\) is the number of spanning arborescences of the root component of \(D\) rooted at \(r\). This can be computed in polynomial time using the Matrix-Tree theorem for directed graphs [6, 36].

We now move on to consider the hard points. The \(k\)-thickening operation will again be crucial: the \(k\)-_thickening_ \(D^{k}\) of a root-connected digraph \(D\) is obtained by replacing every edge \(e\) in \(D\) by \(k\) parallel edges that have the same direction as \(e\). We have \(\Gamma(D^{k})\cong(\Gamma(D))^{k}\), so Theorem 16 can be applied to give an expression for \(T(D^{k})\). The proof of the following proposition is omitted as it is analogous to that of Proposition 26.

**Proposition 33**.: _Let \(L\) be either \(H_{0}^{x},H_{0}^{y}\), or \(H_{\alpha}\) for \(\alpha\in\mathbb{Q}-\{0\}\). Let \((a,b)\) be a point on \(L\) such that \((a,b)\neq(1,1)\) and \(b\notin\{-1,0\}\). Then_
\[\pi_{2}[\mathcal{D},L]\propto_{T}\pi_{3}[\mathcal{D},a,b].\]

We let \(\overrightarrow{P_{k}}\) be the root-connected directed path of length \(k\) with the root being one of the leaves and \(\overrightarrow{S_{k}}\) be the root-connected directed star with \(k\) edges emanating from the root. Then \(T(\overrightarrow{P_{k}};x,y)=1+\sum_{i=1}^{k}(x-1)^{i}y^{i-1}\) and \(T(\overrightarrow{S_{k}};x,y)=x^{k}\). The proof of the following proposition is analogous to that of Proposition 27 with \(\overrightarrow{P_{k}}\) and \(\overrightarrow{S_{k}}\) playing the roles of \(P_{k}\) and \(S_{k}\).

**Proposition 34**.: _Let \(L\) be the line \(y=-1\). For \(a\notin\{\frac{1}{2},1\}\) we have_
\[\pi_{2}[\mathcal{D},L]\propto_{T}\pi_{3}[\mathcal{D},a,-1].\]

Next we classify the complexity of \(\pi_{3}[\mathcal{D},1,b]\) for \(b\notin\{0,1\}\). Suppose we have a root-connected digraph \(D\) and generate a random subgraph \((D,p)\) of \(D\) by deleting each edge with probability \(p\) independently of all the other edges. Let \(g(D;p)\) denote the probability that \((D,p)\) is root-connected and let \(g_{j}\) be the number of subsets \(A\) of \(E(D)\) with size \(j\) so that \(D|A\) is root-connected. Notice that \(g_{j}\) is equal to the number of subsets \(A\) of \(E\) with \(|A|=j\) and \(\rho(A)=\rho(E)\). Then
\[g(D;p)=\sum_{j=0}^{|E(D)|}g_{j}p^{|E(D)|-j}(1-p)^{j}.\]
Provan and Ball [35] showed that the following problem is #P-complete for each rational \(p\) with \(0<p<1\), and computable in polynomial time when \(p=0\) or \(p=1\).

#Connectedness Reliability
**Input:** \(D\in\mathcal{D}\).
**Output:** \(g(D;p)\).

Note that we have restricted the input digraph to being root-connected which Provan and Ball did not, but this does not make a difference to the complexity, because if \(D\) is not root-connected then clearly \(g(D;p)=0\). We now use this result to classify the complexity of points along the line \(x=1\).

**Proposition 35**.: _The computational problem \(\pi_{3}[\mathcal{D},1,b]\) is #P-hard for \(b>1\)._

Proof.: Let \(D\) be a root-connected digraph with edge set \(E\) and rank function \(\rho\). Then for \(0<p<1\) we have
\[g(D;p)=\sum_{\begin{subarray}{c}A\subseteq E(D):\\ \rho(A)=\rho(D)\end{subarray}}p^{|E(D)|-|A|}(1-p)^{|A|}=p^{|E(D)|-\rho(D)}(1-p)^{\rho(D)}\sum_{\begin{subarray}{c}A\subseteq E(D):\\ \rho(A)=\rho(D)\end{subarray}}\left(\frac{1-p}{p}\right)^{|A|-\rho(A)}\]
\[=p^{|E(D)|-\rho(D)}(1-p)^{\rho(D)}T\left(D;1,\frac{1}{p}\right).\]
Evaluating \(g(D;p)\) is therefore Turing-reducible to evaluating \(T(D;1,\frac{1}{p})\) for \(0<p<1\).
Therefore, \(\pi_{3}[\mathcal{D},1,b]\) is #P-hard for \(b>1\). In order to determine the complexity of the point \(\pi_{3}[\mathcal{D},1,-1]\), we introduce a new operation on root-connected digraphs which we call the \(k\)_-digon-stretch_. We define a _tailed \(k\)-digon_ from \(u\) to \(v\) to be the digraph defined as follows. The vertex set is \(\{w_{0}=u,w_{1},\ldots,w_{k},w_{k+1}=v\}\). There is an edge \(w_{0}w_{1}\) and a directed cycle of length \(2\) on \(w_{i}\) and \(w_{i+1}\) for each \(i\) with \(1\leq i\leq k\). An example of a tailed \(k\)-digon is shown in Figure 2. (The labelling of the edges will be needed later.) For a root-connected digraph \(D\), the \(k\)-digon-stretch of \(D\) is constructed by replacing every directed edge \(uv\) in \(D\) by a tailed \(k\)-digon from \(u\) to \(v\). We denote the \(k\)-digon-stretch of \(D\) by \(D_{k}\). **Theorem 36**.: _Let \(D\) be a root-connected digraph. Then_ \[T(D_{k};1,y)=(k+1)^{|E(D)|-\rho(D)}y^{k|E(D)|}T\left(D;1,\frac{k+y}{k+1}\right).\] Proof.: Let \(S\) be a subset of edges of a tailed \(k\)-digon from \(u\) to \(v\). If \(S\) contains all the edges on the unique directed \(uv\)-path through the \(k\)-tailed digon, then \(S\) is said to _admit a \(uv\)-dipath_. Let \(A\) be a subset of \(E(D_{k})\) and \(P(A)\) be the set of edges \(uv\) in \(D\) for which \(A\) admits a \(uv\)-dipath. We have \(\rho(A)=\rho(D_{k})\) if and only if (i) for each directed edge \(uv\) of \(D\) and each vertex \(w\) of the corresponding tailed \(k\)-digon from \(u\) to \(v\) in \(D_{k}\), \(A\) includes the edges of a path in the \(k\)-tailed digon from either \(u\) or \(v\) to \(w\), and (ii) \(\rho(P(A))=\rho(D)\). Note that \(\rho(D_{k})=k|E(D)|+\rho(D)\). We can write \(A\) as the disjoint union \(A=\bigcup_{e\in E(D)}A_{e}\) where \(A_{e}\) is the set of edges of \(A\) belonging to the tailed \(k\)-digon corresponding to \(e\). The Tutte polynomial of \(D_{k}\) along the line \(x=1\) is given by \[T(D_{k};1,y)=\sum_{\begin{subarray}{c}A\subseteq E(D_{k}):\\ \rho(A)=\rho(D_{k})\end{subarray}}(y-1)^{|A|-\rho(D_{k})}=\sum_{ \begin{subarray}{c}B\subseteq E(D):\\ \rho(B)=\rho(D)\end{subarray}}\sum_{\begin{subarray}{c}A\subseteq E(D_{k}):\\ \rho(A)=\rho(D_{k})\\ P(A)=B\end{subarray}}(y-1)^{|A|-\rho(D_{k})}\] \[=\sum_{\begin{subarray}{c}B\subseteq E(D):\\ \rho(B)=\rho(D)\end{subarray}}\sum_{\begin{subarray}{c}A\subseteq E(D_{k}):\\ \rho(A)=\rho(D_{k}),\\ P(A)=B\end{subarray}}\underbrace{\left(\prod_{\begin{subarray}{c}e\in E(D):\\ e\notin P(A)\end{subarray}}(y-1)^{|A_{e}|-k}\right)}_{(1)}\underbrace{\left( \prod_{\begin{subarray}{c}e\in E(D):\\ e\in P(A)\end{subarray}}(y-1)^{|A_{e}|-(k+1)}\right)}_{(2)}(y-1)^{|P(A)|-\rho(D )}. \tag{5}\] Consider a tailed \(k\)-digon from \(u\) to \(v\) with vertex set labelled as described just before the statement of the theorem. For \(0\leq i\leq k\), let \(p_{i}\) denote the edge \(w_{i}w_{i+1}\); for \(1\leq i\leq k\), let \(q_{i}\) denote the edge \(w_{i+1}w_{i}\). In the first product above we are considering edges \(e=uv\) for which \(e\notin P(A)\). Thus \(A_{e}\) does not contain all of \(p_{0}\),..., \(p_{k}\). Let \(j\) be the smallest integer such that \(p_{j}\notin A_{e}\). As we are only interested in sets \(A\) with \(\rho(A)=\rho(D_{k})\), each of \(q_{j+1}\),..., \(q_{k}\) belongs to \(A_{e}\). Thus \(|A_{e}|\geq k\). Moreover each of \(p_{j+1}\),..., \(p_{k}\) and \(q_{1}\),..., \(q_{j}\) may or may not belong to \(A_{e}\). 
Figure 2: A tailed \(k\)-digon.

As there are \(k+1\) possibilities for \(j\), summing
\[\prod_{\begin{subarray}{c}e\in E(D):\\ e\notin P(A)\end{subarray}}(y-1)^{|A_{e}|-k}\]
over all possible choices of \(A_{e}\) for \(e\notin P(A)\) gives \(\left((k+1)y^{k}\right)^{|E(D)|-|P(A)|}\). In the second product above we are considering edges \(e=uv\) for which \(e\in P(A)\). Thus \(A_{e}\) contains all of \(p_{0}\),..., \(p_{k}\). So \(|A_{e}|\geq k+1\). Moreover each of \(q_{1}\),..., \(q_{k}\) may or may not belong to \(A_{e}\). Summing
\[\prod_{\begin{subarray}{c}e\in E(D):\\ e\in P(A)\end{subarray}}(y-1)^{|A_{e}|-(k+1)}\]
over all possible choices of \(A_{e}\) for \(e\in P(A)\) gives \(y^{k|P(A)|}\). Thus the right side of Equation 5 becomes
\[\sum_{\begin{subarray}{c}B\subseteq E(D):\\ \rho(B)=\rho(D)\end{subarray}}y^{k|B|}\left((k+1)y^{k}\right)^{|E(D)|-|B|}(y-1)^{|B|-\rho(D)}\]
\[\qquad=y^{k|E(D)|}\sum_{\begin{subarray}{c}B\subseteq E(D):\\ \rho(B)=\rho(D)\end{subarray}}(k+1)^{\rho(B)-|B|+|E(D)|-\rho(D)}(y-1)^{|B|-\rho(B)}\]
\[\qquad=y^{k|E(D)|}(k+1)^{|E(D)|-\rho(D)}T\left(D;1,\frac{y+k}{k+1}\right).\]

We now complete the classification of complexity for points on the line \(H_{0}^{x}\).

**Proposition 37**.: _The problem \(\pi_{3}[\mathcal{D},1,b]\) is #P-hard for \(b\notin\{0,1\}\)._

Proof.: For \(b\notin\{-1,0,1\}\) the result follows immediately from Propositions 33 and 35. By Theorem 36, if \(D\) is root-connected, then
\[T(D_{2};1,-1)=3^{|E(D)|-\rho(D)}T\left(D;1,\frac{1}{3}\right).\]
As \(D_{2}\) is root-connected and can be constructed from \(D\) in polynomial time, \(\pi_{3}[\mathcal{D},1,\frac{1}{3}]\propto\pi_{3}[\mathcal{D},1,-1]\), so \(\pi_{3}[\mathcal{D},1,-1]\) is #P-hard.

We now show that evaluating the Tutte polynomial of a root-connected digraph at most points on the hyperbola \(H_{\alpha}\) for \(\alpha\neq 0\) is at least as hard as evaluating it at the point \((1+\alpha,2)\).

**Proposition 38**.: _Let \(\alpha\) be in \(\mathbb{Q}-\{0\}\) and \((a,b)\) be a point on \(H_{\alpha}\) with \(b\notin\{-1,0\}\), then_
\[\pi_{3}[\mathcal{D},1+\alpha,2]\propto_{T}\pi_{3}[\mathcal{D},a,b].\]

Proof.: For \(\alpha\) in \(\mathbb{Q}-\{0\}\), the hyperbola \(H_{\alpha}\) crosses the line \(y=2\) at the point \((1+\alpha,2)\). By Proposition 33, we know that for any point \((a,b)\) on \(H_{\alpha}\) with \(b\notin\{-1,0\}\) we have \(\pi_{3}[\mathcal{D},1+\alpha,2]\propto_{T}\pi_{2}[\mathcal{D},H_{\alpha}]\propto_{T}\pi_{3}[\mathcal{D},a,b]\).

We will now show that evaluating the Tutte polynomial of a root-connected digraph at most of the points on the line \(y=2\) is #P-hard. This will enable us to classify the complexity of most points lying on the hyperbola \(H_{\alpha}\) for all \(\alpha\in\mathbb{Q}-\{0,1\}\).

**Proposition 39**.: _The problem \(\pi_{3}[\mathcal{D},a,2]\) is #P-hard for \(a\neq 2\)._

Proof.: We begin by proving that when \(L\) is the line \(y=2\) we have
\[\pi_{2}[\mathcal{D},L]\propto_{T}\pi_{3}[\mathcal{D},a,2]\]
for \(a\notin\{1,2\}\). Let \(D\) be a root-connected digraph and let \(z=x-1\). Along \(L\) the Tutte polynomial of \(D\) has the form
\[T(D;x,2)=\sum_{A\subseteq E(D)}z^{\rho(D)-\rho(A)}=\sum_{i=0}^{\rho(D)}t_{i}z^{i}\]
for some \(t_{0},t_{1},\ldots,t_{\rho(D)}\). We will now show that for most values of \(a\), we may determine all of the coefficients \(t_{i}\) in polynomial time from \(T(D\sim\overrightarrow{S_{k}};a,2)\) for \(k=0,1,\ldots,\rho(D)\).
For each such \(k\), \(D\sim\overrightarrow{S_{k}}\) is root-connected and can be constructed in polynomial time. By Theorem 19, we have
\[T(D\sim\overrightarrow{S_{k}};a,2)=a^{k\rho(D)}T\left(D;\frac{2^{k}(a-1)^{k+1}}{a^{k}}+1,2\right).\]
Therefore we may compute \(T\left(D;\frac{2^{k}(a-1)^{k+1}}{a^{k}}+1,2\right)\) from \(T(D\sim\overrightarrow{S_{k}};a,2)\) when \(a\neq 0\). For \(a\notin\{0,\frac{2}{3},1,2\}\) the values of \(\left(\frac{2^{k}(a-1)^{k+1}}{a^{k}}+1,2\right)\) are pairwise distinct for \(k=0,1,\ldots,\rho(D)\). Therefore by evaluating \(T(D\sim\overrightarrow{S_{k}};a,2)\) for \(k=0,1,\ldots,\rho(D)\) where \(a\notin\{0,\frac{2}{3},1,2\}\), we obtain \(\sum_{i=0}^{\rho(D)}t_{i}z^{i}\) for \(\rho(D)+1\) distinct values of \(z\). This gives us \(\rho(D)+1\) linear equations for the coefficients \(t_{i}\), and so by Lemma 13, they may be recovered in polynomial time. Hence evaluating the Tutte polynomial of a root-connected digraph along the line \(y=2\) is Turing-reducible to evaluating it at the point \((a,2)\) for \(a\notin\{0,\frac{2}{3},1,2\}\).

We now consider the cases where \(a=0\) or \(a=\frac{2}{3}\). The digraph \(D\sim\overrightarrow{P_{2}}\) is root-connected and may be constructed in polynomial time. Note that \(T(\overrightarrow{P_{2}};0,2)=2\) and \(T(\overrightarrow{P_{2}};\frac{2}{3},2)=\frac{8}{9}\). By Theorem 19, we have
\[T(D\sim\overrightarrow{P_{2}};0,2)=2^{\rho(D)}T\left(D;\frac{(-1)^{3}2^{2}}{2}+1,2\right)=2^{\rho(D)}T(D;-1,2).\]
Therefore \(\pi_{3}[\mathcal{D},-1,2]\propto_{T}\pi_{3}[\mathcal{D},0,2]\). Similarly we have
\[T\left(D\sim\overrightarrow{P_{2}};\frac{2}{3},2\right)=\left(\frac{8}{9}\right)^{\rho(D)}T\left(D;\frac{(-\frac{1}{3})^{3}2^{2}}{\frac{8}{9}}+1,2\right)=\left(\frac{8}{9}\right)^{\rho(D)}T\left(D;\frac{5}{6},2\right).\]
Therefore \(\pi_{3}[\mathcal{D},5/6,2]\propto_{T}\pi_{3}[\mathcal{D},2/3,2]\). Putting all this together we get \(\pi_{2}[\mathcal{D},L]\propto_{T}\pi_{3}[\mathcal{D},a,2]\) for all \(a\) in \(\mathbb{Q}-\{1,2\}\). Consequently \(\pi_{3}[\mathcal{D},1,2]\propto_{T}\pi_{3}[\mathcal{D},a,2]\), for all \(a\) in \(\mathbb{Q}-\{2\}\). By Proposition 37, we know that \(\pi_{3}[\mathcal{D},1,2]\) is #P-hard. This completes the proof.

**Theorem 40**.: _Let \(\alpha\) be in \(\mathbb{Q}-\{0,1\}\) and \((a,b)\) be a point on \(H_{\alpha}\) with \(b\neq 0\). Then \(\pi_{3}[\mathcal{D},a,b]\) is \(\#\)P-hard._

Proof.: Suppose first that \(b\neq-1\). By Proposition 38, \(\pi_{3}[\mathcal{D},1+\alpha,2]\propto_{T}\pi_{3}[\mathcal{D},a,b]\). As \(\alpha\neq 1\), Proposition 39 implies \(\pi_{3}[\mathcal{D},a,b]\) is \(\#\)P-hard. Now suppose that \(b=-1\). As \((a,b)\notin H_{1}\), we have \(a\neq\frac{1}{2}\). So by Proposition 34, \(\pi_{3}[\mathcal{D},1,-1]\propto_{T}\pi_{3}[\mathcal{D},a,-1]\). By Proposition 37, \(\pi_{3}[\mathcal{D},1,-1]\) is \(\#\)P-hard. Therefore \(\pi_{3}[\mathcal{D},a,-1]\) is \(\#\)P-hard.

The only remaining points we need to classify are those lying on the line \(y=1\). To do this we prove that the problem of evaluating the Tutte polynomial of a root-connected digraph at most fixed points along this line is at least as hard as the analogous problem for rooted graphs.

**Theorem 41**.: _The problem \(\pi_{3}[\mathcal{D},a,1]\) is \(\#\)P-hard for \(a\) in \(\mathbb{Q}-\{1\}\)._

Proof.: Let \(G\) be a connected rooted graph with root \(r\). Construct a rooted graph \(D\) with root \(r\) by replacing every edge of \(G\) by a pair of oppositely directed edges. Then \(D\) is root-connected and can be constructed from \(G\) in polynomial time.
We can define a natural map \(f:2^{E(D)}\to 2^{E(G)}\) so that \(f(A)\) is the set of edges of \(G\) for which at least one corresponding directed edge is included in \(A\). If \(\rho_{G}(A)=|A|\) then the root component of \(G|A\) is a tree and includes all the edges of \(A\). Similarly if \(\rho_{D}(A^{\prime})=|A^{\prime}|\) then the root component of \(D|A^{\prime}\) is an arborescence rooted at \(r\) and includes all the edges of \(A^{\prime}\). For every subset \(A\) of \(E\) with \(\rho_{G}(A)=|A|\), there is precisely one choice of \(A^{\prime}\) with \(\rho_{D}(A^{\prime})=|A^{\prime}|\) and \(f(A^{\prime})=A\), obtained by directing all the edges of \(A\) away from \(r\). Thus there is a one-to-one correspondence between subsets \(A\) of \(E\) with \(\rho_{G}(A)=|A|\) and subsets \(A^{\prime}\) of \(E(D)\) with \(\rho_{D}(A^{\prime})=|A^{\prime}|\), and this correspondence preserves the sizes of the sets. Therefore we have
\[T(D;x,1)=\sum_{\begin{subarray}{c}A^{\prime}\subseteq E(D):\\ |A^{\prime}|=\rho_{D}(A^{\prime})\end{subarray}}(x-1)^{\rho(D)-|A^{\prime}|}=\sum_{\begin{subarray}{c}A\subseteq E:\\ |A|=\rho_{G}(A)\end{subarray}}(x-1)^{\rho(G)-|A|}=T(G;x,1).\]
So \(\pi_{3}[\mathcal{G},a,1]\propto_{T}\pi_{3}[\mathcal{D},a,1]\). So by Proposition 32, we deduce that \(\pi_{3}[\mathcal{D},a,1]\) is \(\#\)P-hard for \(a\neq 1\).

## 7 Binary Greedoids

In our final section we let \(\mathbb{G}\) be the class of binary greedoids, a class we denote by \(\mathcal{B}\). We present analogous results to those in the previous section by finding the computational complexity of evaluating the Tutte polynomial of a binary greedoid at a fixed rational point, eventually proving Theorem 5. As before, it is convenient to think of the input as being a binary matrix rather than its binary greedoid. We begin by examining the easy points of Theorem 5. Let \(\Gamma\) be a binary greedoid with element set \(E\) and rank function \(\rho\). If a point \((a,b)\) lies on the hyperbola \(H_{1}\) then, following the remarks at the end of Section 3, \(T(\Gamma;a,b)\) is easily computed.

We now focus on the hard points. The \(k\)-thickening operation will again be crucial. Given a binary matrix \(M\), the \(k\)-thickening \(M^{k}\) of \(M\) is obtained by replacing each column of \(M\) by \(k\) copies of the column. We have \(\Gamma(M^{k})=(\Gamma(M))^{k}\), so Theorem 16 can be applied to compute \(T(M^{k})\) in terms of \(T(M)\). Let \(I_{k}\) denote the \(k\times k\) identity matrix. Then \(\Gamma(I_{k})\cong\Gamma(P_{k})\), so \(T(I_{k})=T(P_{k})=1+\sum_{j=1}^{k}(x-1)^{j}y^{j-1}\). The proof of the following proposition is analogous to that of Proposition 26, thus we omit it.

**Proposition 42**.: _Let \(L\) be either \(H_{0}^{x},H_{0}^{y}\), or \(H_{\alpha}\) for \(\alpha\in\mathbb{Q}-\{0\}\). Let \((a,b)\) be a point on \(L\) such that \((a,b)\neq(1,1)\) and \(b\notin\{-1,0\}\). Then_
\[\pi_{2}[\mathcal{B},L]\propto_{T}\pi_{3}[\mathcal{B},a,b].\]

A binary matroid is a matroid that can be represented over the finite field \(\mathbb{Z}_{2}\). Every graphic matroid is also binary, so Theorem 2 and Lemma 11 imply that \(\pi_{3}[\mathcal{B},1,b]\) is \(\#\)P-hard providing \(b\neq 1\). This immediately gives the following.

**Proposition 43**.: _The problem \(\pi_{3}[\mathcal{B},1,b]\) is \(\#\)P-hard for all \(b\) in \(\mathbb{Q}-\{1\}\)._

The following result has been announced by Vertigan in [9] and slightly later in [42], but up until now no written proof has been published. For completeness, we provide a proof in Appendix A.
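The point \((1,1)\) that the next theorem concerns is the evaluation counting bases: for a matroid, \(T(M;1,1)\) equals the number of bases, which is why the appendix approaches the question through a base-counting problem. Purely for orientation, the following brute-force sketch (our own illustration; the function names are ours and the running time is exponential) counts the bases of the binary matroid \(M(A)\) represented by a \((0,1)\)-matrix \(A\) over GF(2).

```python
from itertools import combinations

def rank_gf2(columns):
    """Rank over GF(2) of a list of columns, each a tuple of 0/1 entries."""
    rows = [list(r) for r in zip(*columns)] if columns else []
    rank, pivot_col, ncols = 0, 0, len(columns)
    while rank < len(rows) and pivot_col < ncols:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][pivot_col]), None)
        if pivot is None:
            pivot_col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][pivot_col]:
                rows[r] = [(x ^ y) for x, y in zip(rows[r], rows[rank])]
        rank += 1
        pivot_col += 1
    return rank

def count_bases_gf2(A):
    """Count the bases of M(A): column subsets of size r(M(A)) independent over GF(2)."""
    cols = list(zip(*A))  # the columns of the (0,1)-matrix A
    r = rank_gf2(cols)
    return sum(1 for S in combinations(cols, r) if rank_gf2(list(S)) == r)

# The 3 x 7 matrix whose columns are the non-zero vectors of GF(2)^3 represents
# the Fano matroid F_7, which has 28 bases.
F = [[1, 0, 0, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1]]
print(count_bases_gf2(F))  # 28
```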
**Theorem 44** (Vertigan).: _Evaluating the Tutte polynomial of a binary matroid is \(\#\)P-hard at the point \((1,1)\)._

Using this result, we are able to fill in the missing point \((1,1)\) from the previous result and also establish hardness along the line \(y=1\).

**Proposition 45**.: _The problem \(\pi_{3}[\mathcal{B},a,1]\) is \(\#\)P-hard for all \(a\)._

Proof.: By Proposition 42 we have \(\pi_{2}[\mathcal{B},H_{0}^{y}]\propto_{T}\pi_{3}[\mathcal{B},a,1]\) for \(a\neq 1\). The result now follows from Theorem 44.

**Proposition 46**.: _Let \(\Gamma\) be a binary greedoid and let \(\Gamma^{\prime}=\Gamma(I_{k})\). Then_
\[T(\Gamma\approx\Gamma^{\prime};x,y)=T(\Gamma;x,y)(x-1)^{k}y^{k}+T(\Gamma;1,y)\Big(1+\sum_{j=1}^{k}(x-1)^{j}y^{j-1}-(x-1)^{k}y^{k}\Big).\]

Proof.: The proof follows immediately from Theorem 20.

We now classify the complexity of \(\pi_{3}[\mathcal{B},a,b]\) when \(b=0\) or \(b=-1\).

**Proposition 47**.: _The problem \(\pi_{3}[\mathcal{B},a,0]\) is \(\#\)P-hard for all \(a\neq 0\)._

Proof.: Let \(M\) be a binary matrix with linearly independent rows. Then from Theorem 20, we have \(T(M\approx I_{1};a,0)=aT(M;1,0)\). Therefore when \(a\neq 0\) we have \(\pi_{3}[\mathcal{B},1,0]\propto_{T}\pi_{3}[\mathcal{B},a,0]\). The result now follows from Proposition 43.

**Proposition 48**.: _The problem \(\pi_{3}[\mathcal{B},a,-1]\) is \(\#\)P-hard for all \(a\neq\frac{1}{2}\)._

Proof.: Let \(M\) be a binary matrix with linearly independent rows. We have
\[(2a-1)T(M;1,-1)=T(M\approx I_{1};a,-1)+(a-1)T(M;a,-1).\]
Thus, \(\pi_{3}[\mathcal{B},1,-1]\propto_{T}\pi_{3}[\mathcal{B},a,-1]\). By using Proposition 43, we deduce that \(\pi_{3}[\mathcal{B},a,-1]\) is \(\#\)P-hard.

Our final result completes the proof of Theorem 5.

**Theorem 49**.: _Let \((a,b)\) be a point in \(H_{\alpha}\) for \(\alpha\in\mathbb{Q}-\{0,1\}\) with \(b\neq-1\). Then \(\pi_{3}[\mathcal{B},a,b]\) is \(\#\)P-hard._

Proof.: For \(\alpha\in\mathbb{Q}-\{0,1\}\), the hyperbola \(H_{\alpha}\) crosses the \(x\)-axis at the point \((1-\alpha,0)\). By Proposition 42 since \(b\neq-1\) and \((a,b)\neq(1,1)\) we have \(\pi_{3}[\mathcal{B},1-\alpha,0]\propto_{T}\pi_{3}[\mathcal{B},a,b]\). The result now follows from Proposition 47.

## Appendix A Counting bases in a represented matroid

In this appendix, we present a proof that counting the number of bases of a represented matroid is \(\#\)P-complete. More precisely, we consider the following family of counting problems. Let \(\mathbb{F}\) be a field.

Counting Bases of \(\mathbb{F}\)-Represented Matroids
**Input:** A \((0,1)\)-matrix \(A\).
**Output:** The number of bases of \(M(A)\), the matroid represented by \(A\) over the field \(\mathbb{F}\).

**Theorem 50**.: _For every field \(\mathbb{F}\), Counting Bases of \(\mathbb{F}\)-Represented Matroids is \(\#\)P-complete._

A proof of this result was announced nearly 30 years ago by Dirk Vertigan -- it first seems to have been referred to in [9] and slightly later in [42], where it is described as an unpublished manuscript -- but no written proof has been circulated. Sketches of the proof have been presented by Vertigan in talks, for example, at the Conference for James Oxley in 2019 [40]. The second author was present at this meeting and the material in this section has been produced from his incomplete recollection of the talk. All the key ideas are due to Vertigan but the details including any errors, omissions or unnecessary complications are due to the authors.
As pointed out to us by Dillon Mayhew [31], Vertigan's proof presented in [40] introduced an intermediate step involving weighted bases; our proof does not require this intermediate step but this comes at the cost of introducing a larger matrix in the reduction. We provide the proof, partly as a service to the community because we know of several colleagues who have tried to recreate it, but primarily because a referee has pointed out the undesirability of relying on an unpublished result. Although our original aim was only to establish the special case of Theorem 50 relevant for our work, it turns out that little extra effort is required to prove Theorem 50 in full generality.

We require very little matroid theory other than basic notions such as rank, circuits and the closure operator. As we work exclusively with matroids having representations drawn from a specific family of matrices considered over different fields, the claims we make about the associated matroids can easily be checked by considering the representing matrices. For background on matroids see [33].

A graph is _simple_ if it has no loops or parallel edges. To prove hardness, we give a reduction from counting perfect matchings in a simple graph, a problem which is well-known to be #P-complete [39]. Clearly, it makes no difference to the complexity of counting perfect matchings if we forbid our graphs from having isolated vertices. Given such a graph \(G\) with \(n\) vertices, we construct a family of matrices \(\{A_{i}:1\leq i\leq\lfloor n/2\rfloor+1\}\) with entries in \(\{0,1\}\). By considering these matrices as being defined over different fields, we obtain two corresponding families of matroids. Which family arises depends on whether the field has characteristic two. Thus the proof of Theorem 50 splits into two parts depending on whether the characteristic of the underlying field is two.

We shall generally think of matrices as coming with sets indexing their rows and columns. If \(A\) is a matrix with sets \(X\) and \(Y\) indexing its rows and columns respectively, then we say that \(A\) is an \(X\times Y\) matrix. For non-empty subsets \(X^{\prime}\) and \(Y^{\prime}\) of \(X\) and \(Y\), respectively, \(A[X^{\prime},Y^{\prime}]\) is the submatrix of \(A\) obtained by deleting the rows indexed by elements of \(X-X^{\prime}\) and the columns indexed by elements of \(Y-Y^{\prime}\).

Suppose that \(G\) is a simple graph without isolated vertices having vertex set \(\{v_{1},\ldots,v_{n}\}\) and edge set \(\{e_{1},\ldots,e_{m}\}\). Let \(k\) be a strictly positive integer. Let
\[X=\{v_{1},\ldots,v_{n},e_{1},\ldots,e_{m}\}\cup\{f_{i,j}:1\leq i\leq m,1\leq j\leq k\}\]
and
\[Y=\{v_{1},\ldots,v_{n},e_{1},\ldots,e_{m}\}\cup\{w_{i,j},x_{i,j},y_{i,j},z_{i,j}:1\leq i\leq m,1\leq j\leq k\}.\]
Here both \(X\) and \(Y\) include all the vertices and edges of \(G\), together with several new elements. The matrix \(A_{k}\) is an \(X\times Y\) matrix. To specify its entries suppose that \(e_{i}\) has endvertices \(v_{a}\) and \(v_{b}\) with \(a<b\). Then for each \(j\) with \(1\leq j\leq k\), taking \(X^{\prime}=\{v_{a},v_{b},f_{i,j}\}\) and \(Y^{\prime}=\{v_{a},v_{b},e_{i},w_{i,j},x_{i,j},y_{i,j},z_{i,j}\}\), we let
\[A_{k}[X^{\prime},Y^{\prime}]=\begin{array}{cc}&\begin{array}{ccccccc}v_{a}&v_{b}&e_{i}&w_{i,j}&x_{i,j}&y_{i,j}&z_{i,j}\end{array}\\ \begin{array}{c}v_{a}\\ v_{b}\\ f_{i,j}\end{array}&\left[\begin{array}{ccccccc}1&0&1&0&1&0&1\\ 0&1&1&0&0&1&1\\ 0&0&0&1&1&1&1\end{array}\right].\end{array}\]
We complete the definition of \(A_{k}\) by setting every as yet unspecified entry to zero.
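The construction of \(A_{k}\) is entirely mechanical. The sketch below is our own illustration (the ordering of rows and columns and the function name are arbitrary choices of ours, not taken from the paper); it builds the matrix from an edge list on vertices \(1,\ldots,n\).

```python
def build_Ak(n, edges, k):
    """Construct the (0,1)-matrix A_k for a simple graph with vertices 1..n and
    edges given as pairs (a, b) with a < b.  Rows are indexed by v_1..v_n,
    e_1..e_m and the f_{i,j}; columns by v_1..v_n, e_1..e_m and the w, x, y, z."""
    m = len(edges)
    row_index = {("v", a): a - 1 for a in range(1, n + 1)}
    row_index.update({("e", i): n + i - 1 for i in range(1, m + 1)})
    col_index = dict(row_index)
    for i in range(1, m + 1):
        for j in range(1, k + 1):
            row_index[("f", i, j)] = len(row_index)
            for name in ("w", "x", "y", "z"):
                col_index[(name, i, j)] = len(col_index)
    A = [[0] * len(col_index) for _ in range(len(row_index))]
    for a in range(1, n + 1):
        A[row_index[("v", a)]][col_index[("v", a)]] = 1   # vertex columns: identity block
    for i, (a, b) in enumerate(edges, start=1):
        A[row_index[("v", a)]][col_index[("e", i)]] = 1   # column e_i has 1s in rows v_a, v_b
        A[row_index[("v", b)]][col_index[("e", i)]] = 1
        for j in range(1, k + 1):
            f = row_index[("f", i, j)]
            # columns w, x, y, z restricted to rows v_a, v_b, f_{i,j}, as in the display above
            for name, pattern in (("w", (0, 0, 1)), ("x", (1, 0, 1)),
                                  ("y", (0, 1, 1)), ("z", (1, 1, 1))):
                c = col_index[(name, i, j)]
                A[row_index[("v", a)]][c] = pattern[0]
                A[row_index[("v", b)]][c] = pattern[1]
                A[f][c] = pattern[2]
    return A

# Example: a single edge v_1 v_2 with k = 1 gives a 4 x 7 matrix
# (rows v_1, v_2, e_1, f_{1,1}; columns v_1, v_2, e_1, w, x, y, z).
A1 = build_Ak(2, [(1, 2)], 1)
print(len(A1), "x", len(A1[0]))  # 4 x 7
```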
Fix \(\mathbb{F}\) and let \(N_{k}=M(A_{k})\), that is, the matroid with element set \(Y\) represented by \(A_{k}\) considered over \(\mathbb{F}\). Taking \(Y^{\prime}\) as in the previous paragraph, if \(\mathbb{F}\) has characteristic two, then \(N_{k}|Y^{\prime}\) is isomorphic to the Fano matroid \(F_{7}\) and otherwise \(N_{k}|Y^{\prime}\) is isomorphic to the non-Fano matroid \(F_{7}^{-}\) obtained from \(F_{7}\) by relaxing the circuit-hyperplane \(\{e_{i},x_{i,j},y_{i,j}\}\). Now let \(M_{k}=N_{k}\setminus(V\cup E)\). Note that \(r(M_{k})=r(N_{k})=|V|+|E|k\) and that for each vertex \(v\) and edge \(e\) of \(G\), \(N_{k}\) contains elements \(e\) and \(v\), but \(M_{k}\) contains neither.

We shall show that for each \(k\), every basis of \(M_{k}\) corresponds to what we call a _feasible template_ of \(G\), that is, a subgraph of \(G\) in which some edges are directed (possibly in both directions) and some are labelled, satisfying certain properties which we describe below. In particular, we will see that the bidirected edges in a feasible template form a matching in \(G\). Furthermore, the number of bases of \(M_{k}\) corresponding to each feasible template depends only on \(k\) and the numbers of edges directed and labelled in each possible way, and is easily computed. By varying \(k\) and counting the number of bases of \(M_{k}\), we can recover the number of feasible templates with each possible number of bidirected edges. The number of feasible templates with \(n/2\) bidirected edges is equal to the number of perfect matchings of \(G\).

Let \(G\) be a simple graph without isolated vertices, having vertex set \(V=\{v_{1},\ldots,v_{n}\}\) and edge set \(E=\{e_{1},\ldots,e_{m}\}\). A _template_ of \(G\) is a spanning subgraph of \(G\) in which edges may be bidirected (that is, two arrows are affixed, one pointing to each endvertex), (uni)directed or undirected, and are labelled according to the following rules.

* Every bidirected edge is unlabelled.
* A (uni)directed edge \(e=v_{a}v_{b}\) with \(a<b\) is labelled either \(wx\) or \(yz\) if \(e\) is directed towards \(v_{a}\) and is labelled either \(wy\) or \(xz\) if \(e\) is directed towards \(v_{b}\).
* An undirected edge is labelled either \(wz\) or \(xy\).

Even though the matroid \(M_{k}\) itself depends on whether \(\mathbb{F}\) has characteristic two, the proofs of the two cases have a great deal in common. To prevent repetition we describe the common material here, before finishing the two cases separately. For \(1\leq i\leq m\) and \(1\leq j\leq k\), let \(F_{i,j}=\{w_{i,j},x_{i,j},y_{i,j},z_{i,j}\}\) and for \(1\leq i\leq m\), let \(F_{i}=\bigcup_{1\leq j\leq k}F_{i,j}\). For all \(i\) and \(j\), the set \(F_{i,j}\) is a circuit and \(r(M_{k}\setminus F_{i,j})<r(M_{k})\). Let \(B\) be a basis of \(M_{k}\). Then \(1\leq|B\cap F_{i,j}|\leq 3\). Moreover, for all \(i\), \(r(F_{i})=k+2\) and \(r(M_{k}\setminus F_{i})\leq r(M_{k})-k\), so \(k\leq|B\cap F_{i}|\leq k+2\).

The main idea in the proof is to use templates to classify each basis \(B\) of \(M_{k}\) by specifying \(|B\cap F_{i}|\) for each \(i\) and when \(|B\cap F_{i}|=k+1\), implying that \(|B\cap F_{i,j}|=2\) for precisely one value \(j^{*}\) of \(j\), additionally specifying \(B\cap F_{i,j^{*}}\). Suppose edge \(e_{i}\) joins vertices \(v_{a}\) and \(v_{b}\) in \(G\) and \(a<b\).
If \(|B\cap F_{i}|=k\), then \(\operatorname{cl}_{N_{k}}(B\cap F_{i})-E(M_{k})=\emptyset\), and if \(|B\cap F_{i}|=k+2\), then \(\operatorname{cl}_{N_{k}}(B\cap F_{i})-E(M_{k})=\{v_{a},v_{b},e_{i}\}\). If \(|B\cap F_{i}|=k+1\), then \(|B\cap F_{i,j}|=2\) for precisely one value \(j^{*}\) of \(j\) and \(\operatorname{cl}_{N_{k}}(B\cap F_{i})-E(M_{k})\) depends on \(B\cap F_{i,j^{*}}\). * If \(B\cap F_{i,j^{*}}\) is \(\{w,x\}\) or \(\{y,z\}\), then \(\operatorname{cl}_{N_{k}}(B\cap F_{i})-E(M_{k})=\{v_{a}\}\). * If \(B\cap F_{i,j^{*}}\) is \(\{w,y\}\) or \(\{x,z\}\), then \(\operatorname{cl}_{N_{k}}(B\cap F_{i})-E(M_{k})=\{v_{b}\}\). * If \(B\cap F_{i,j^{*}}\) is \(\{w,z\}\), then \(\operatorname{cl}_{N_{k}}(B\cap F_{i})-E(M_{k})=\{e_{i}\}\). * If \(B\cap F_{i,j^{*}}\) is \(\{x,y\}\), then \(\operatorname{cl}_{N_{k}}(B\cap F_{i})-E(M_{k})\) is \(\{e_{i}\}\) when \(\mathbb{F}\) has characteristic two and is empty otherwise. To each subset \(S\) of \(E(M_{k})\), such that for all \(i\), \(S\cap F_{i}\) is independent and for all \(i\) and \(j\), \(|S\cap F_{i,j}|\geq 1\), we associate a template \(T(S)\) of \(G\), by starting with an edgeless graph with vertex set \(V(G)\) and doing the following for each edge \(e_{i}\) of \(G\) such that \(|S\cap F_{i,j}|>1\) for some \(j\). Suppose that \(e_{i}=v_{a}v_{b}\) with \(a<b\). We first consider whether to add \(e_{i}\) to \(T(S)\) and whether to direct it. * If \(\operatorname{cl}_{N_{k}}(S\cap F_{i})-E(M_{k})=\{v_{a},v_{b},e_{i}\}\), then add \(e_{i}\) to \(T(S)\) and bidirect it. * If \(\operatorname{cl}_{N_{k}}(S\cap F_{i})-E(M_{k})=\{v_{a}\}\), then add \(e_{i}\) to \(T(S)\) and direct it from \(v_{b}\) to \(v_{a}\). * If \(\operatorname{cl}_{N_{k}}(S\cap F_{i})-E(M_{k})=\{v_{b}\}\), then add \(e_{i}\) to \(T(S)\) and direct it from \(v_{a}\) to \(v_{b}\). * If \(\operatorname{cl}_{N_{k}}(S\cap F_{i})-E(M_{k})\subseteq\{e_{i}\}\), then add \(e_{i}\) to \(T(S)\) (and do not direct it). In the last three cases above, we also label \(e_{i}\). To do this let \(j^{*}\) be the unique value of \(j\) such that \(|S\cap F_{i,j}|=2\). Then label \(e_{i}\) with the elements of \(S\cap F_{i,j^{*}}\), but with their subscripts omitted. In this way the edge \(e_{i}\) is given two labels from the set \(\{w,x,y,z\}\). ### \(\mathbb{F}\) has characteristic two We now focus on the case when \(\mathbb{F}\) has characteristic two. The following result is the key step in the proof. **Proposition 51**.: _A subset \(B\) of \(E(M_{k})\) is a basis of \(M_{k}\) if and only if all of the following conditions hold._ 1. _For all_ \(i\)_,_ \(B\cap F_{i}\) _is independent._ 2. _For all_ \(i\) _and_ \(j\)_,_ \(|B\cap F_{i,j}|\geq 1\)_._ 3. _The subgraph of_ \(T(B)\) _induced by its undirected edges is acyclic._ 4. _It is possible to direct the undirected edges of_ \(T(B)\) _so that every vertex has indegree one._ Proof.: We first show that the conditions are collectively sufficient. Suppose that \(B\) satisfies each of the conditions and that \(T(B)\) has \(b\) bidirected edges, \(r\) (uni)directed edges and \(u\) undirected edges. Then the last condition implies that \(2b+r+u=n\). Combining this with the first two conditions gives \(|B|=km+2b+r+u=km+n=r(M_{k})\). So, it is sufficient to prove that \(r(B)=r(M_{k})\). We will show that the last two conditions imply that \(v_{i}\in\operatorname{cl}_{N_{k}}(B)\) for \(i=1,\ldots,n\). Then the second condition ensures that \(\operatorname{cl}_{N_{k}}(B)=E(N_{k})\) and consequently \(r(B)=r(N_{k})=r(M_{k})\) as required. 
Consider a vertex \(v\) of \(G\). The last two conditions imply that there is a (possibly empty) path \(P\) in \(T(B)\) between \(v\) and a vertex \(v^{\prime}\) having indegree one and comprising only undirected edges. Suppose that the vertices of \(P\) in order are \(v_{j_{1}}=v^{\prime},v_{j_{2}},\ldots,v_{j_{l}}=v\) and that for \(1\leq h\leq l-1\), the edge joining \(v_{j_{h}}\) and \(v_{j_{h+1}}\) in \(P\) is \(e_{i_{h}}\). Then \(v_{j_{1}}\in\operatorname{cl}_{N_{k}}(B)\), and, because each \(e_{i_{h}}\) is undirected in \(T(B)\), we also have \(e_{i_{h}}\in\operatorname{cl}_{N_{k}}(B)\) for \(h=1,\ldots,l-1\). As \(v_{j_{h}}\in\operatorname{cl}_{N_{k}}(\{v_{j_{h-1}},e_{i_{h-1}}\})\) for \(h=2,\ldots,l\), we see that \(v=v_{j_{l}}\in\operatorname{cl}_{N_{k}}(B)\), as required. Thus the conditions are sufficient.

To show that each condition is necessary we suppose that \(B\) is a basis of \(M_{k}\). Clearly the first condition is necessary. We observed earlier that for all \(i\) and \(j\), \(r(E(M_{k})-F_{i,j})<r(M_{k})\), so the second condition is also necessary. Suppose, without loss of generality, that edges \(e_{1},\ldots,e_{l}\) are undirected and form a circuit in \(T(B)\). Then the corresponding elements \(e_{1},\ldots,e_{l}\) form a circuit in \(N_{k}\). Because each of \(e_{1},\ldots,e_{l}\) is undirected, \(e_{i}\in\operatorname{cl}_{N_{k}}(B\cap F_{i})\) for \(i=1,\ldots,l\). Thus
\[e_{l}\in\operatorname{cl}_{N_{k}}(\{e_{1},\ldots,e_{l-1}\})\subseteq\operatorname{cl}_{N_{k}}\bigg{(}\bigcup_{i=1}^{l-1}(B\cap F_{i})\bigg{)}.\]
So there is a circuit of \(N_{k}\) contained in \(\{e_{l}\}\cup(B\cap F_{l})\) and another contained in \(\{e_{l}\}\cup\bigcup_{i=1}^{l-1}(B\cap F_{i})\). Hence there is a circuit of \(N_{k}\) and consequently of \(M_{k}\) contained in \(\bigcup_{i=1}^{l}(B\cap F_{i})\), contradicting the fact that \(B\) is a basis. Thus the third condition is necessary.

Finally, suppose that \(T(B)\) has \(b\) bidirected edges, \(r\) (uni)directed edges and \(u\) undirected edges. Then, as \(km+2b+r+u=|B|=r(M_{k})=km+n\), we have \(2b+r+u=n\). Observe that if the undirected edges are assigned a direction, then the sum of the indegrees of the vertices will become \(n\). Suppose that it is impossible to direct the undirected edges of \(T(B)\) so that each vertex has indegree one. Then, before directing the undirected edges, there must either be a vertex \(z\) with indegree at least two, or two vertices \(x\) and \(y\) both having indegree at least one and joined by a path \(P\) of undirected edges. In either case the aim is to establish a contradiction by showing that there is some vertex \(v\) such that \(B\cup\{v\}\) contains two distinct circuits in \(N_{k}\). Then \(B\) contains a circuit of \(N_{k}\) and consequently of \(M_{k}\).

In the former case there are distinct edges \(e_{i}\) and \(e_{j}\) directed towards (and possibly away from as well) \(z\) in \(T(B)\). So \(z\in\operatorname{cl}_{N_{k}}(B\cap F_{i})\cap\operatorname{cl}_{N_{k}}(B\cap F_{j})\) implying that \((B\cap F_{i})\cup\{z\}\) and \((B\cap F_{j})\cup\{z\}\) both contain circuits of \(N_{k}\) including \(z\). But then \((B\cap F_{i})\cup(B\cap F_{j})=B\cap(F_{i}\cup F_{j})\) contains a circuit of \(N_{k}\) and consequently of \(M_{k}\), contradicting the fact that \(B\) is a basis of \(M_{k}\). So we may assume that the latter case holds. Suppose that, without loss of generality, the vertices of \(P\) in order are \(v_{1}=x,v_{2},\ldots,v_{l}=y\).
Suppose, again without loss of generality, that for \(i=2,\ldots,l\), the edge joining \(v_{i-1}\) and \(v_{i}\) in \(P\) is \(e_{i}\), that \(e_{1}\) is directed towards \(x=v_{1}\) in \(T(B)\) and \(e_{l+1}\) is directed towards \(y=v_{l}\) in \(T(B)\). Then \(y\in\operatorname{cl}_{N_{k}}(B\cap F_{l+1})\) and \(x\in\operatorname{cl}_{N_{k}}(B\cap F_{1})\). Furthermore, for each \(i=2,\ldots,l\), \(e_{i}\in\operatorname{cl}_{N_{k}}(B\cap F_{i})\), so \(v_{i}\in\operatorname{cl}_{N_{k}}\bigg{(}\bigcup_{j=1}^{l}(B\cap F_{j})\bigg{)}\). In particular, \(y\in\operatorname{cl}_{N_{k}}\bigg{(}\bigcup_{j=1}^{l}(B\cap F_{j})\bigg{)}\). So, there is a circuit of \(N_{k}\) contained in \(\{y\}\cup(B\cap F_{l+1})\) and another contained in \(\{y\}\cup\bigcup_{j=1}^{l}(B\cap F_{j})\). Hence there is a circuit of \(N_{k}\) and consequently of \(M_{k}\) contained in \(\bigcup_{j=1}^{l+1}(B\cap F_{j})\), contradicting the fact that \(B\) is a basis. It follows that it is possible to direct the undirected edges of each component of \(T(B)\) so that every vertex has indegree one, establishing the necessity of the final condition.

We say that a template \(T\) is _feasible_ if it satisfies the last two conditions in the previous result, that is, if the subgraph induced by its undirected edges is acyclic and every vertex of the graph obtained from \(T\) by contracting the undirected edges has indegree equal to one.

**Proposition 52**.: _Let \(G\) be a simple graph without isolated vertices and let \(T\) be a feasible template of \(G\) with \(b\) bidirected edges. Then the number of bases of \(M_{k}\) with template \(T\) is_
\[4^{km}\Big{(}\frac{k}{4}\Big{)}^{n}\Big{(}\frac{4}{k}+12\Big{)}^{b}.\]

Proof.: It follows from the definition of feasibility that if a feasible template contains \(b\) bidirected edges, then it has \(n-2b\) edges which are either (uni)directed or undirected. Furthermore \(G\) has \(m-n+b\) edges which are not in \(T\). Suppose that \(B\) is a basis with template \(T\). We count the number of choices for \(B\). Suppose that \(e_{i}\) is an edge of \(G\) which is not present in \(T\). Then for \(j=1,\ldots,k\), we have \(|F_{i,j}\cap B|=1\), so there are \(4^{k}\) choices for \(B\cap F_{i}\). Now suppose that \(e_{i}\) is either (uni)directed or undirected in \(T\). Then for all but one choice of \(j\) in \(1,\ldots,k\), we have \(|F_{i,j}\cap B|=1\) and for the remaining possibility for \(j\), \(|F_{i,j}\cap B|=2\), with the choice of elements of \(F_{i,j}\) specified by the labelling of the edge \(e_{i}\). Thus there are \(k\cdot 4^{k-1}\) choices for \(B\cap F_{i}\). Finally suppose that \(e_{i}\) is a bidirected edge. Then there are two subcases to consider. Either \(|F_{i,j}\cap B|=3\) for one value of \(j\) and \(|F_{i,j}\cap B|=1\) for all other values of \(j\), or \(|F_{i,j}\cap B|=2\) for two values of \(j\) and \(|F_{i,j}\cap B|=1\) for all other values of \(j\). Suppose that \(|F_{i,j^{\prime}}\cap B|=|F_{i,j^{\prime\prime}}\cap B|=2\) for \(j^{\prime}\neq j^{\prime\prime}\). Then we also require that \(\operatorname{cl}_{N_{k}}(B\cap F_{i,j^{\prime}})-F_{i}\neq\operatorname{cl}_{N_{k}}(B\cap F_{i,j^{\prime\prime}})-F_{i}\). Thus there are \(k\cdot 4^{k}+\binom{k}{2}\cdot 6\cdot 4\cdot 4^{k-2}\) choices for \(B\cap F_{i}\).
So the number of bases of \(M_{k}\) with template \(T\) is \[(4^{k})^{m-n+b}(k\cdot 4^{k-1})^{n-2b}(k\cdot 4^{k}+\binom{k}{2}\cdot 6\cdot 4\cdot 4^{k-2})^{b}=4^{km}\Big{(}\frac{k}{4}\Big{)}^{n}\Big{(}\frac{4}{k}+12\Big{)}^{b}.\] **Theorem 53**.: _If \(\mathbb{F}\) is a field with characteristic two, then the problem Counting Bases of \(\mathbb{F}\)-Represented Matroids is \(\#\)P-complete._ Proof.: It is clear that Counting Bases of \(\mathbb{F}\)-Represented Matroids belongs to \(\#\)P. To prove hardness, we give a reduction from counting perfect matchings. Let \(G\) be a simple graph with \(n\) vertices and \(m\) edges. We may assume that \(G\) has no isolated vertices and \(n\) is even. We can construct representations of the matroids \(M_{1},\ldots,M_{n/2+1}\) in time polynomial in \(n\) and \(m\). For \(k=1,\ldots,n/2+1\), let \(b_{k}\) denote the number of bases of \(M_{k}\) and for \(j=0,\ldots,n/2\), let \(t_{j}\) denote the number of feasible templates of \(G\) with \(j\) bidirected edges. Then for \(k=1,\ldots,n/2+1\), by Proposition 52, we have \[b_{k}=\sum_{j=0}^{n/2}4^{km}\Big{(}\frac{k}{4}\Big{)}^{n}\Big{(}\frac{4}{k}+12\Big{)}^{j}t_{j}.\] Given \(b_{1},\ldots,b_{n/2+1}\), we may recover \(t_{0},\ldots,t_{n/2}\) in time polynomial in \(n\) and \(m\). In particular, we may recover \(t_{n/2}\). But feasible templates with \(n/2\) bidirected edges are in one-to-one correspondence with perfect matchings of \(G\). As counting perfect matchings is \(\#\)P-complete by [39], we deduce that when \(\mathbb{F}\) has characteristic two, Counting Bases of \(\mathbb{F}\)-Represented Matroids is \(\#\)P-complete. ### \(\mathbb{F}\) does not have characteristic two When \(\mathbb{F}\) does not have characteristic two, we can proceed in a similar way, but the proof is a little more complicated, as we need to consider more carefully circuits of undirected edges in a template. We say that a circuit of a template comprising only undirected edges is _good_ if it has an odd number of edges labelled \(wz\). The following lemma gives us the key property of circuits of undirected edges in the template of a basis. **Lemma 54**.: _Let \(G\) be a simple graph without isolated vertices and let \(M_{k}\) and \(N_{k}\) be the associated matroids. Let \(C\) be a circuit of \(G\) and select a set \(Z\) of \(2|C|\) elements of \(M_{k}\) as follows. For each \(i\) such that \(e_{i}\) is an edge of \(C\), choose \(j\) with \(1\leq j\leq k\) and add either \(w_{i,j}\) and \(z_{i,j}\), or \(x_{i,j}\) and \(y_{i,j}\) to \(Z\). To simplify notation we omit the second subscript and for each \(i\) denote the elements added to \(Z\) by either \(w_{i}\) and \(z_{i}\), or \(x_{i}\) and \(y_{i}\). Then both of the following hold._ 1. _If_ \(|\{i:\{w_{i},z_{i}\}\subseteq Z\}|\) _is odd then_ \(Z\) _is independent in_ \(M_{k}\) _(and_ \(N_{k}\)_) and for each vertex_ \(v\) _of_ \(C\)_,_ \(v\in\mathrm{cl}_{N_{k}}(Z)\)_._ 2. _If_ \(|\{i:\{w_{i},z_{i}\}\subseteq Z\}|\) _is even then_ \(Z\) _is a circuit in_ \(M_{k}\) _(and_ \(N_{k}\)_)._ Proof.: For an edge \(e_{i}\) of \(C\), we say that \(e_{i}\) is a \(wz\)-edge if \(\{w_{i},z_{i}\}\subseteq Z\), and otherwise we say that it is an \(xy\)-edge. We first prove that \(Z\) is either independent or a circuit, depending on the parity of \(|\{i:\{w_{i},z_{i}\}\subseteq Z\}|\). Consider the submatrix \(A\) of \(A_{k}\) containing just the columns indexed by members of \(Z\) and consider the coefficients of a non-trivial linear combination of these columns summing to zero. 
As each row of \(A\) is either zero or contains two non-zero entries, both equal to one, we may assume that the non-zero coefficients are all \(\pm 1\). Furthermore, for every \(wz\)-edge \(e_{i}\), the coefficients of \(w_{i}\) and \(z_{i}\) must sum to zero, and similarly for every \(xy\)-edge \(e_{i}\), the coefficients of \(x_{i}\) and \(y_{i}\) must sum to zero. Now consider two adjacent edges \(e_{i}\) and \(e_{j}\) in \(C\), and let \(v\) be their common endvertex. As the row indexed by \(v\) contains one non-zero entry in a column indexed by an element of \(\{w_{i},x_{i},y_{i},z_{i}\}\cap Z\) and also one in a column indexed by an element of \(\{w_{j},x_{j},y_{j},z_{j}\}\cap Z\), we deduce that the coefficients of \(\{w_{i},x_{i},y_{i},z_{i}\}\cap Z\) are non-zero if and only if those of \(\{w_{j},x_{j},y_{j},z_{j}\}\cap Z\) are non-zero. Consequently all the coefficients in a non-trivial linear combination of the columns of \(A\) are non-zero. Now imagine traversing \(C\) in \(G\) and suppose that \(e_{i}\) and \(e_{j}\) are consecutive (not necessarily adjacent) \(wz\)-edges. Then it is not difficult to see that the coefficients of \(w_{i}\) and \(w_{j}\) (and of \(z_{i}\) and \(z_{j}\)) have opposite signs. Thus, if there is an odd number of \(wz\)-edges, then no non-trivial linear combination of the columns of \(A\) sums to zero and \(Z\) is independent. Alternatively, if there is an even number of \(wz\)-edges, then one can assign coefficients \(\pm 1\) to columns indexed by \(w_{i}\) or \(z_{i}\) meeting the necessary conditions we have established, and then it is not difficult to check that non-zero coefficients may be assigned to all the remaining columns in order to give a non-trivial linear combination of the columns of \(A\) summing to zero. Thus \(Z\) is dependent, and as we have shown that all coefficients of a non-trivial linear combination of the columns of \(A\) summing to zero must be non-zero, we deduce that \(Z\) is a circuit. Finally, suppose that there is an odd number of \(wz\)-edges and let \(V(C)\) denote the vertex set of \(C\). Then \(r_{N_{k}}(Z\cup V(C))=2|C|=r_{N_{k}}(Z)\), so for each vertex \(v\) of \(C\), \(v\in\operatorname{cl}_{N_{k}}(Z)\). The analogue of Proposition 51 is as follows. **Proposition 55**.: _A subset \(B\) of \(E(M_{k})\) is a basis of \(M_{k}\) if and only if all of the following conditions hold._ * _For all_ \(i\)_,_ \(B\cap F_{i}\) _is independent._ * _For all_ \(i\) _and_ \(j\)_,_ \(|B\cap F_{i,j}|\geq 1\)_._ * _Every circuit of_ \(T(B)\) _comprising only undirected edges is good._ * _It is possible to direct the undirected edges of_ \(T(B)\) _so that every vertex has indegree one._ Proof.: Most of the proof follows that of Proposition 51. The main difference concerns circuits of \(T(B)\) comprising undirected edges. To prove the sufficiency of the conditions we modify the last part of the sufficiency argument of Proposition 51. Suppose that \(B\) satisfies each of the conditions. The key step involves showing that for every vertex \(v\) in \(G\), we have \(v\in\operatorname{cl}_{N_{k}}(B)\). The last two conditions imply that for a vertex \(v\) of \(G\), there is a (possibly empty) path \(P\) in \(T(B)\), comprising only undirected edges, between \(v\) and a vertex \(v^{\prime}\), which either has indegree one or belongs to a good circuit. Using Lemma 54 for the latter case, we see that in either case \(v^{\prime}\in\operatorname{cl}_{N_{k}}(B)\) and the proof may continue in the same way as that of Proposition 51. 
To show that each condition is necessary we suppose that \(B\) is a basis of \(M_{k}\). The necessity of the first two conditions follows in the same way as in the proof of Proposition 51 and the necessity of the third follows from Lemma 54. The necessity of the final condition follows from a similar argument to that used in the proof of Proposition 51, but there are more cases to consider. Notice that the necessity of the third condition implies that every undirected edge of \(T(B)\) belongs to at most one circuit comprising only undirected edges. If it is not possible to direct the undirected edges of \(T(B)\) so that each vertex has indegree one, then before directing the undirected edges one of the following must occur. 1. There is a vertex \(z\) of \(T(B)\) with indegree at least two. 2. There is a vertex \(z\) belonging to two edge-disjoint good circuits. 3. There is a vertex \(z\) of \(T(B)\) with indegree one which belongs to a good circuit. 4. There are vertices \(x\) and \(y\) of \(T(B)\) not belonging to the same good circuit and joined by a path \(P\) comprising undirected edges and so that each of \(x\) and \(y\) either has indegree one or belongs to a good circuit. To show that each possibility leads to a contradiction, the aim is again to show that there is a vertex \(v\) of \(G\) such that \(B\cup\{v\}\) contains two distinct circuits of \(N_{k}\). The first case is the same as in the proof of Proposition 51. The second and third follow similarly with the aid of Lemma 54 and the final one follows in a similar way to the analogous case in Proposition 51, noting first that by Lemma 54, if necessary, there are disjoint subsets \(B_{x}\) and \(B_{y}\) of \(B\) with \(x\in\operatorname{cl}_{N_{k}}(B_{x})\) and \(y\in\operatorname{cl}_{N_{k}}(B_{y})\) and then deducing that \(y\) (and in fact every vertex of \(P\)) belongs to \(\operatorname{cl}_{N_{k}}(B_{x})\). We amend the definition of feasibility to say that a template \(T\) is _feasible_ if it satisfies the last two conditions in the previous result, that is, if the subgraph induced by its undirected edges contains no circuits including an even number of edges labelled \(wz\) and it is possible to direct the undirected edges of \(T\) so that every vertex has indegree one. **Proposition 56**.: _Let \(G\) be a simple graph without isolated vertices and let \(T\) be a feasible template of \(G\) with \(b\) bidirected edges. Then the number of bases of \(M_{k}\) with template \(T\) is_ \[4^{km}\Big{(}\frac{k}{4}\Big{)}^{n}\Big{(}\frac{3}{k}+13\Big{)}^{b}.\] Proof.: The proof is very similar to that of Proposition 52. The key difference is counting the number of choices for \(B\cap F_{i}\) when \(e_{i}\) is a bidirected edge and \(|F_{i,j^{\prime}}\cap B|=|F_{i,j^{\prime\prime}}\cap B|=2\) for \(j^{\prime}\neq j^{\prime\prime}\). There are now 26 ways to choose \(F_{i,j^{\prime}}\) and \(F_{i,j^{\prime\prime}}\) compared with 24 when \(\mathbb{F}\) has characteristic two. So the number of bases of \(M_{k}\) with template \(T\) is \[(4^{k})^{m-n+b}(k\cdot 4^{k-1})^{n-2b}(k\cdot 4^{k}+\binom{k}{2}\cdot 26\cdot 4^{k-2})^{b}=4^{km}\Big{(}\frac{k}{4}\Big{)}^{n}\Big{(}\frac{3}{k}+13\Big{)}^{b}.\] **Theorem 57**.: _If \(\mathbb{F}\) is a field with characteristic other than two, then the problem Counting Bases of \(\mathbb{F}\)-Represented Matroids is \(\#\)P-complete._ The proof is identical to that of Theorem 53. ## Acknowledgement We thank Mark Jerrum and Dillon Mayhew for helpful suggestions concerning Appendix A.
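To make the interpolation step in the proofs of Theorems 53 and 57 concrete, the following sketch (a minimal illustration in Python, not part of the original argument) recovers \(t_{0},\ldots,t_{n/2}\), and hence the number of perfect matchings \(t_{n/2}\), from exact basis counts \(b_{1},\ldots,b_{n/2+1}\). The coefficient matrix is a rescaled Vandermonde matrix in \(4/k+12\) (or \(3/k+13\) when the characteristic is not two), so the system has a unique solution; the function name and the Gauss-Jordan solver are illustrative choices only.

```python
from fractions import Fraction

def recover_template_counts(basis_counts, n, m, char_two=True):
    """Solve b_k = sum_j 4^(km) * (k/4)^n * c_k^j * t_j for t_0,...,t_{n/2},
    where c_k = 4/k + 12 in characteristic two and 3/k + 13 otherwise.
    basis_counts[k-1] must be the exact number of bases of M_k."""
    rows = n // 2 + 1
    b = [Fraction(x) for x in basis_counts]
    A = []
    for k in range(1, rows + 1):
        c = (Fraction(4, k) + 12) if char_two else (Fraction(3, k) + 13)
        scale = Fraction(4) ** (k * m) * Fraction(k, 4) ** n
        A.append([scale * c ** j for j in range(rows)])
    # Gauss-Jordan elimination over the rationals; A is a rescaled Vandermonde
    # matrix in c_k, hence nonsingular and the solution is unique.
    for col in range(rows):
        piv = next(r for r in range(col, rows) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(rows):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[r] / A[r][r] for r in range(rows)]  # last entry = t_{n/2}
```

Exact rational arithmetic is used here because the basis counts grow exponentially in \(km\).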
2309.14724
Reconciling results of 2019 and 2020 stellar occultations on Pluto's atmosphere. New constraints from both the 5 September 2019 event and consistency analysis
A stellar occultation by Pluto on 5 September 2019 yielded positive detections at two separate stations. Using an approach consistent with comparable studies, we derived a surface pressure of $11.478 \pm 0.55~\mathrm{\mu bar}$ for Pluto's atmosphere from the observations of this event. In addition, to avoid potential method inconsistencies highlighted by Sicardy et al. when comparing with historical pressure measurements, we reanalyzed the data from the 15 August 2018 and 17 July 2019 events. All the new measurements provide a bridge between the two different perspectives on the pressure variation since 2015: a rapid pressure drop from previous studies of the 15 August 2018 and 17 July 2019 events and a plateau phase from that of the 6 June 2020 event. The pressure measurement from the 5 September 2019 event aligns with those from 2016, 2018, and 2020, supporting the latter perspective. While the measurements from the 4 June 2011 and 17 July 2019 events suggest probable V-shaped pressure variations unaccounted for by the volatile transport model (VTM) from Meza et al., the VTM remains applicable on average. Moreover, the validity of the V-shaped variations is debatable due to the stellar faintness of the 4 June 2011 event and the grazing single-chord geometry of the 17 July 2019 event. To reveal and understand all significant pressure variations of Pluto's atmosphere, it is essential to provide constraints on both the short-term and long-term evolution of the interacting atmosphere and surface by continuous pressure monitoring through occultation observations, whenever possible, complemented by frequent spectroscopy and photometry of the surface.
Ye Yuan, Fan Li, Yanning Fu, Jian Chen, Wei Tan, Shuai Zhang, Wei Zhang, Chen Zhang, Qiang Zhang, Jiahui Ye, Delai Li, Yijing Zhu, Zhensen Fu, Ansheng Zhu, Yue Chen, Jun Xu, Yang Zhang
2023-09-26T07:35:12Z
http://arxiv.org/abs/2309.14724v2
# Reconciling results of 2019 and 2020 stellar occultations on Pluto's atmosphere ###### Abstract A stellar occultation by Pluto on 5 September 2019 yielded positive detections at two separate stations. Using an approach consistent with comparable studies, we derived a surface pressure of \(11.478\pm 0.55\) ubar for Pluto's atmosphere from the observations of this event. In addition, to avoid potential method inconsistencies when comparing with historical pressure measurements, we reanalyzed the data for the 15 August 2018 and 17 July 2019 events. All the new measurements provide a bridge between the two different perspectives on the pressure variation since 2015: a rapid pressure drop from previous studies of the 15 August 2018 and 17 July 2019 events and a plateau phase from that of the 6 June 2020 event. The pressure measurement from the 5 September 2019 event aligns with those from 2016, 2018, and 2020, supporting the latter perspective. While the measurements from the 4 June 2011 and 17 July 2019 events suggest probable V-shaped pressure variations that are unaccounted for by the volatile transport model (VTM), the VTM remains applicable on average. Furthermore, the validity of the V-shaped variations is debatable given the stellar faintness of the 4 June 2011 event and the grazing single-chord geometry of the 17 July 2019 event. To reveal and understand all of the significant pressure variations of Pluto's atmosphere, it is essential to provide constraints on both the short-term and long-term evolution of the interacting atmosphere and surface by continuous pressure monitoring through occultation observations whenever possible, and to complement these with frequent spectroscopy and photometry of the surface. ## 1 Introduction Pluto's atmosphere was discovered during the 1985 stellar occultation (Brosch 1995), and since then, stellar occultations have played a crucial role in studying its structure, composition, and evolution over time (Hubbard et al. 1988; Elliot et al. 1989, 2003; Yelle & Elliot 1997; Sicardy et al. 2003, 2011, 2016, 2021; Pasachoff et al. 2005, 2017; Young et al. 2008, 2021; Rannou & Durry 2009; Person et al. 2013, 2021; Olkin et al. 2015; Bosh et al. 2015; Gulbis et al. 2015; Dias-Oliveira et al. 2015; Meza et al. 2019; Arimatsu et al. 2020). A compilation of 12 occultations observed between 1988 and 2016 revealed a three-fold monotonic increase in the atmospheric pressure of Pluto during that period (Meza et al. 2019). This increase can be explained by the volatile transport model (VTM) of the Laboratoire de Meteorologie Dynamique (LMD) (Bertrand & Forget 2016; Forget et al. 2017; Bertrand et al. 2018, 2019), which was subsequently fine-tuned by Meza et al. (2019). This model provides a framework for simulating the volatile cycles on Pluto over both seasonal and astronomical timescales, allowing us to explore the long-term evolution of Pluto's atmosphere and its response to seasonal variations over its 248 year heliocentric orbital period (Meza et al. 2019). According to the LMD VTM in Meza et al. (2019) (VTM19, hereafter), Pluto's atmospheric pressure is expected to have reached its peak around the year 2020. The pressure increase is attributed to the progression of summer over the northern hemisphere of Pluto, exposing Sputnik Planitia (SP)1 to solar radiation. 
The surface of SP, which is composed of nitrogen (N\({}_{2}\)), methane (CH\({}_{4}\)), and carbon monoxide (CO) ices, is believed to sublimate and release volatile gases into the atmosphere during this period, leading to a pressure increase. After reaching its peak, the model predicts a gradual decline in pressure over the next two centuries under the combined effects of Pluto's recession from the Sun and the prevalence of the winter season over SP. On one hand, the VTM19 remains consistent with the analysis of Sicardy et al. (2021) of the 6 June 2020 occultation observed at Devasthal, where two colocated telescopes were used. This latter analysis suggests that Pluto's atmosphere has been in a plateau phase since mid-2015, which aligns with the model predictions that the atmospheric pressure reached its peak around 2020. On the other hand, the Arimatsu et al. (2020) analysis of the 17 July 2019 occultation observed by a single telescope (TUHO) suggests a rapid pressure decrease between 2016 and 2019. These authors detected a significant pressure drop at the 2.4\(\sigma\) level. However, it is worth noting that the geometry of this occultation is grazing. This may have introduced larger correlations between the pressure and the geocentric closest approach distance to Pluto's shadow axis, leading to insufficient precision to confidently support the claim of a large pressure decrease followed by a return in 2020 to a pressure level close to that of 2015 (Sicardy et al., 2021). These contrasting results highlight the need for occultation observations between 2019 and 2020 in order to better understand the behavior and evolution of Pluto's atmosphere during this time period. Furthermore, while Young et al. (2021) support the presence of a pressure drop based on their analysis of the 15 August 2018 occultation, Sicardy et al. (2021) suggest that careful comparisons between measurements by independent teams should be made before drawing any conclusions on the pressure evolution. Observations of the 5 September 2019 occultation, which have not been reported by other teams, are presented in Section 2, followed by a description of the light-curve fitting methods in Section 3. These unique observations allow us to track the changes in Pluto's atmosphere during the time period between the events studied by Arimatsu et al. (2020) and Sicardy et al. (2021). Results are detailed in Section 4, and the pressure evolution is discussed in Section 5, including comparisons with the reanalyzed 15 August 2018 and 17 July 2019 events. Conclusions and recommendations are provided in Section 6. ## 2 Occultation observations Two observation campaigns were organized in China for occultations in 2019 (see Appendix A). One occurred on 17 July 2019, which was studied by Arimatsu et al. (2020), and the other on 5 September 2019, which is reported in the present paper for the first time. Due to bad weather conditions in many areas, no effective light curves were observed by our stations for the first occultation, and only two light curves were obtained for the second. Table 1 lists the circumstances of the 5 September 2019 event. Figure 1 presents all the observation stations and the reconstructed path of the shadow of Pluto2 during this event. Table 2 lists the circumstances of stations with positive detections. Their station codes are DWM and HNU. Footnote 2: The occulted star is Gaia DR3 6771712487062767488, of which the astrometric and photometric parameters are obtained from VizieR (Gaia Collaboration, 2022). 
To ensure accurate and precise timing in stellar occultations, some stations (e.g., DWM as shown in Table 2) were equipped with QHY174GPS cameras. These cameras, manufactured by QHYCCD3, offer precise recording of observation time and location for each frame using a GPS-based function, and have been used in many stellar occultation studies (e.g., Buie et al., 2020, 2020; Morgado et al., 2021, 2022; Pereira et al., 2023). In the light-curve fitting procedures described in Section 3.2, the time-recording offsets of the QHY174GPS cameras are fixed to zero, considering their reliability and accuracy as time references. Footnote 3: [https://www.qbyccd.com](https://www.qbyccd.com) Footnote 4: [http://www.hristopavlov.net/Tangra3/](http://www.hristopavlov.net/Tangra3/) All observational data were captured in the FITS format. These data were processed using the Tangra occultation photometric tool4(Pavlov, 2020) and our data-reduction code (see Appendix B). It was ensured that the targets and reference stars in all the images we used were not overexposed. The resulting light curves from the observations, after being normalized, are presented in Figure 2. Each data point on the light curves is represented by \(f_{i}(t)\pm\sigma_{i}(t)\), where \(i\) indicates the quantities associated with a specific station, \(t\) represents the recorded timing of \begin{table} \begin{tabular}{l c} \hline \hline \multicolumn{2}{c}{Occulted star} \\ \hline Identification (Gaia DR3\({}^{a}\) ) & 6771712487062767488 \\ Geocentric astrometric position & \(\alpha_{\rm s}=19^{\rm h}29^{\rm m}11\fs 1996\) \\ at observational epoch (ICRF\({}^{b}\)) & \(\delta_{\rm s}=-22\degr 21\arcmin 39\farcs 880\) \\ \hline \multicolumn{2}{c}{Pluto’s body} \\ \hline Mass\({}^{c}\), \(GM_{\rm p}\) (km\({}^{3}\cdot\) s\({}^{-2}\)) & 869.6 \\ Radius\({}^{c}\), \(R_{\rm p}\) (km) & 1187 \\ \hline \multicolumn{2}{c}{Pluto’s atmosphere} \\ \hline N\({}_{2}\) molecular mass\({}^{d}\), \(\mu\) (kg) & \(4.652\times 10^{-26}\) \\ N\({}_{2}\) molecular & \(1.091\times 10^{-23}\) \\ refractivity\({}^{e}\), \(K\) (cm\({}^{3}\)) & \(+6.282\times 10^{-26}/\lambda_{\rm im}^{2}\) \\ Boltzmann constant\({}^{f}\), \(k_{\rm B}\) (J \(\cdot\) K\({}^{-1}\)) & \(1.380649\times 10^{-23}\) \\ Given reference radius\({}^{g}\), \(r_{0}\) (km) & 1215 \\ \hline \multicolumn{2}{c}{Results of atmospheric fit (with 1\(\sigma\) error bars)} \\ \hline Pressure at \(r_{0}\), \(\rho_{0}\) (ubar) & \(6.248\pm 0.30\) \\ Surface pressure\({}^{d}\) at \(R_{\rm p}\), \(p_{\rm surf}\) (ubar) & \(11.478\pm 0.55\) \\ Geocentric closest approach distance & \\ to shadow center\({}^{h}\), \(\rho_{\rm org}\) (km) & \(+3644\pm 25\) \\ Geocentric closest approach time & \\ to shadow center\({}^{i}\), \(t_{\rm eng}\) (UTC\({}^{i}\)) & 15:01:19.1 \(\pm\) 0.38 s \\ \hline \end{tabular} \end{table} Table 1: Circumstances and light-curve fitting results of the 5 September 2019 event. each frame, \(f\) the normalized total observed flux of the occulted star and the Pluto's system, and \(\sigma\) the measurement error associated with each data point. ## 3 Light-curve fitting methods ### Light-curve model In order to simulate observed light curves, we implemented a light-curve model, \(\phi(t;A,s,\Delta t,\Delta\tau,\Delta\rho,p_{0})\), which is described in Appendix C and is consistent with D015 (Dias-Oliveira et al., 2015; Sicardy et al., 2016; Meza et al., 2019; Sicardy et al., 2021). 
As a function of model parameters, its time-dependent Jacobian matrix was also implemented to represent the sensitivity of the model to the corresponding parameters to be estimated through fitting procedures. The light-curve model of a given station can be formally written as \[\phi_{i}(t)=A_{i}\cdot\left(s_{i}\cdot\psi_{i}(t;\Delta t_{i},\Delta\tau, \Delta\rho,p_{0})+(1-s_{i})\right), \tag{1}\] where \(i\) indicates the quantities associated with the station; for further details, the reader is referred to Appendix C. Here, the reference ephemerides we use are the NIMav95 asteroidal ephemeris (Desmars et al., 2015, 2019) for the orbit of the Pluto system barycenter with respect to the Sun, the PLU0586 satellite ephemerides (Brozoviac et al., 2015; Jacobson et al., 2019) for the orbit of Pluto with respect to the Pluto system barycenter, and the DE4467 planetary ephemerides (Park et al., 2021) for the orbits of the Earth and the Sun with respect to the Solar System barycenter. The reference star catalog where the data of the occulted star are obtained is Gaia DR3. Footnote 5: [https://lesia.obspm.fr/lucky-star/obj.php?p=818](https://lesia.obspm.fr/lucky-star/obj.php?p=818) Footnote 6: [https://ssd.jpl.nasa.gov/ftp/eph/satellites/bsp/plu%58.bsp](https://ssd.jpl.nasa.gov/ftp/eph/satellites/bsp/plu%58.bsp) Footnote 7: [https://ssd.jpl.nasa.gov/ftp/eph/planets/bsp/de449.bsp](https://ssd.jpl.nasa.gov/ftp/eph/planets/bsp/de449.bsp) ### Fitting procedure The light-curve model was fitted to the normalized observed light curves simultaneously by nonlinear least squares, returning a \(\chi^{2}\)-type value of goodness-of-fit. The goal is to minimize the objective function given by \[\chi^{2}_{\rm obs}=\sum_{i,j}\frac{(\phi_{i}(t_{ij})-f_{i}(t_{ij}))^{2}}{\sigma _{i}^{2}(t_{ij})}, \tag{2}\] where \(t_{ij}\) represents the mid-exposure time of the \(j\)-th observation of the station \(i\). In addition, with the used reference ephemerides and star catalog, some a priori information on \(\Delta\rho\) can be obtained: \[\Delta\rho=0\;\mathrm{km}\pm\sigma_{\rho}, \tag{3}\] where the uncertainty \(\sigma_{\rho}\) is set to 72 km using the positional uncertainties listed in the "orbit quality" table of NIMAv9 and in Gaia DR3. This \(\sigma_{\rho}\) value corresponds to about 3 mas on the sky at the geometric distance of Pluto. The a priori information can be treated as independent observational data and used in the model fitting, with the objective function modified as: \[\chi^{2}_{\rm apr}=\chi^{2}_{\rm obs}+\frac{\Delta\rho^{2}}{\sigma_{\rho}^{2}}. \tag{4}\] The fitting steps are as follows: * In order to find all local minima at which a nonlinear least-squares fitting could potentially get stuck, we explored the two-parameter space (\(\Delta\rho,p_{0}\)) by generating the variation of \(\chi^{2}\) as a function of them. Figure 3 presents such two \(\chi^{2}\) maps, labeled (a) and (b), which are analyzed in Section 4. The maps are generated by minimizing \(\chi^{2}_{\rm obs}\) or \(\chi^{2}_{\rm apr}\) at each fixed (\(\Delta\rho,p_{0}\)) point on a regularly spaced grid. The Levenberg-Marquardt (LM) method, which is implemented in the LMFIT package8, was used in each fitting procedure. The free parameters to be adjusted are \(\Delta\tau^{9}\), \(\Delta t_{i}\) of any station with no reliable time reference system like QHY174GPS, and \(s_{i}\) and \(A_{i}\) of each station. 
Footnote 8: [https://lmfit.github.io/lmfit-py/](https://lmfit.github.io/lmfit-py/) * For a more accurate best-fitting solution for (\(\Delta\rho,p_{0}\)), the LM method is used again, with \(\Delta\rho\) and \(p_{0}\) adjusted with initial guesses located at all known local minima of each \(\chi^{2}\) map. * Each \(\chi^{2}\) map, which provides information about the quality of the fit, is used to define confidence limits based on constant \(\chi^{2}\) boundaries (Press et al., 2007). ## 4 Results Figure 2(a) shows the \(\chi^{2}_{\rm obs}\) map for the 5 September 2019 occultation. Two local minima are observed. However, considering the significant \(\chi^{2}\) difference of 9 between the two local minima, the global minimum is more likely to be the correct solution. In addition, the \(\Delta\rho\) value at the global minimum is more consistent with the NIMAv9 solution, \(\Delta\rho=0\) km, at the 0.16 \(\sigma_{\rho}\) level, compared with the other local one at the 2.44 \(\sigma_{\rho}\) level. In an effort to mitigate or at least further weaken the presence of multiple local minima, we calculated the \(\chi^{2}_{\rm apr}\) map by adding the \(\chi^{2}\)-type value of the a priori information, \((\Delta\rho/\sigma_{\rho})^{2}\), into the \(\chi^{2}_{\rm obs}\) map. Figure 1: Reconstructed occultation map of the 5 September 2019 event. Figure 2(b) presents the results, which show that two local minima are still present, but with a \(\chi^{2}\) difference of about 14.5, which is larger than that of the \(\chi^{2}_{\rm obs}\) map. Therefore, the global minimum is confidently accepted as the solution for \((\rho_{\rm{cap}},p_{\rm{surf}})\), as provided in Table 1. Moreover, Figure 3 presents the consistency of our derived \(p_{\rm{surf}}\) across the two different local minima. Our findings demonstrate that the specific choice of local minima does not significantly affect the value of \(p_{\rm{surf}}\), further supporting the reliability of our solution for \(p_{\rm{surf}}\). ## 5 Pressure evolution ### Comparisons and necessary reanalyses of historical events In Figure 4, the red plot represents our \(p_{\rm{surf}}\) measurement from the 5 September 2019 occultation. We also include other published measurements (Hinson et al., 2017; Meza et al., 2019; Arimatsu et al., 2020; Young et al., 2021; Sicardy et al., 2021) and the pressure evolution predicted by the VTM19 in order to provide a comprehensive view of the pressure variations on Pluto. To avoid potential inconsistencies arising from different analysis methods, as discussed by Sicardy et al. (2021), we re-analyzed the 15 August 2018 event studied by Young et al. (2021) using the IXON observational data of Silva-Cabrera et al. (2022). The derived pressure measurement presented in Appendix D.1 is \(12.027^{+0.09}_{-0.08}\) ubar. In addition, we also reanalyzed the 17 July 2019 event in Appendix D.2, deriving a pressure of \(p_{\rm{surf}}=9.421^{+0.68}_{-0.75}\) ubar, which is similar to that of Arimatsu et al. (2020), of \(9.56^{+0.52}_{-0.34}\) ubar. This similarity is expected because the same D015 method is used. As this same method is used by Meza et al. (2019) and Sicardy et al. (2021), their pressure measurements, along with that of Arimatsu et al. (2020), can be fully compared with our new ones. Both the remeasurements are plotted in black in Figure 4. 
The pressure measurement from the 5 September 2019 event shows alignments with those from the 19 July 2016, 15 August 2018, and 6 June 2020 events within their combined \(1\sigma\) levels. Our new measurement from the 15 August 2018 event does not show the significant pressure drop previously reported by Young et al. (2021). The previously reported pressure drop between the 19 July 2016 and 17 July 2019 events is still detected at the same level as in Arimatsu et al. (2020). ### Discussion on pressure variations While the VTM19 remains, on average, applicable and capable of predicting the main atmospheric behavior during the observed years, there are also two probable V-shaped pressure variations observed from 2010 to 2015 and from 2015 to 2020, especially when considering the measurements from the 4 June 2011 and 17 July 2019 events. These V-shaped variations suggest the presence of additional factors that have not been accounted for. Specifically, short-term changes in Pluto's surface ices and their interaction with the atmosphere are likely contributing to the variation. Moreover, spectral monitoring of the surface composition has revealed some short-term changes in the ices over several Earth years (e.g., Grundy et al., 2014; Lellouch et al., 2022; Holler et al., 2022). However, the validity of the V-shaped variations is debatable given the stellar faintness of the 4 June 2011 event and the grazing single-chord geometry of the 17 July 2019 event. If the debatable measurement from the 17 July 2019 event were discarded, no significant changes would be observed between 2016 and 2020. This more likely supports the plateau phase since 2015 predicted by the VTM19. In order to better understand the relationship between these factors, further observations using multiple observational techniques (occultation, spectroscopy, and photometry) are required, as well as simulations with a refined VTM. Figure 2: Occultation observations and the best-fitting light-curve model of the 5 September 2019 event. Panel (a): Observed and simultaneously fitted light curves. Panels (b) and (c): Reconstructed stellar paths seen by DWM and HNU, respectively. ## 6 Conclusions The unique observations of the 5 September 2019 occultation provide a surface pressure of \(p_{\rm surf}=11.478\pm 0.55\) ubar. In order to avoid potential method inconsistencies in comparing with historical pressure measurements (Sicardy et al., 2021), we also reanalyzed the 15 August 2018 and 17 July 2019 events based on publicly available data (Silva-Cabrera et al., 2022; Arimatsu et al., 2020). All measurements are presented in Figure 4. The VTM19 remains applicable on average. In addition, we also observed unaccounted-for V-shaped pressure variations with the previously reported pressure drop being a part of these variations; however, these variations are debatable. To better understand all significant pressure variations of Pluto, continuous pressure monitoring through occultation observations is essential where possible. Also, simultaneous and frequent spectroscopic and photometric monitoring of changes to its surface ice is important, as such comprehensive monitoring will provide more short-term and long-term evolution constraints of Pluto's interacting atmosphere and surface. ###### Acknowledgements. We acknowledge Bruno Sicardy for his useful comments that helped improve this manuscript. This work has been supported by the National Natural Science Foundation of China (Grant Nos. 12203105 and 12103091). 
We acknowledge the science research grants from the China Manned Space Project with No.CNMS-CSST-2021-A12 and NO.CMS-CSST-2021-B10. This research has made use of data from the 40 cm DOR telescope at the Dawei Mountain Observatory of Hunan Astronomical Association and from the 50 cm RC telescope at the Observatory of Hebei Normal University. We acknowledge the support of Chinese amateur astronomers from Hunan Astronomical Association, Nanjing Amateur Astronomers Association, and Shenzhen Astronomical Observatory Team.
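To make the fitting procedure of Sect. 3.2 concrete, the following minimal sketch shows how the simultaneous least-squares fit of Eqs. (2)-(4) could be set up with the LMFIT package cited above. The function phi_toy is only a placeholder for the DO15-type light-curve model of Eq. (1) (the actual model requires the ray-tracing machinery of Appendix C), the station codes DWM and HNU are those of Sect. 2, sigma_rho = 72 km follows Eq. (3), and the list curves of normalized light curves is assumed to be prepared beforehand; this is an illustration, not the authors' pipeline.

```python
import numpy as np
from lmfit import Parameters, minimize

def phi_toy(t, A, s, dt, dtau, drho, p0):
    """Toy stand-in for the light-curve model of Eq. (1): a smooth occultation
    dip whose depth grows with p0 and whose mid-time shifts with dtau and the
    per-station offset dt.  The real model (Appendix C) ray-traces a refracting
    isothermal atmosphere and is not reproduced here."""
    depth = 1.0 - np.exp(-p0 / 10.0)
    width = 60.0 + 0.02 * abs(drho)          # seconds, purely illustrative
    psi = 1.0 - depth * np.exp(-0.5 * ((t - dtau - dt) / width) ** 2)
    return A * (s * psi + (1.0 - s))

def residuals(params, curves, sigma_rho=72.0):
    """curves: list of dicts with keys 't', 'f', 'sigma', 'name', 'has_gps'."""
    v = params.valuesdict()
    res = []
    for c in curves:
        dt = 0.0 if c["has_gps"] else v["dt_" + c["name"]]   # QHY174GPS: dt fixed to 0
        model = phi_toy(c["t"], v["A_" + c["name"]], v["s_" + c["name"]],
                        dt, v["dtau"], v["drho"], v["p0"])
        res.append((model - c["f"]) / c["sigma"])            # chi^2_obs terms, Eq. (2)
    res.append(np.array([v["drho"] / sigma_rho]))            # a priori term, Eq. (4)
    return np.concatenate(res)

params = Parameters()
params.add("p0", value=6.0, min=0.0)      # pressure at the reference radius r0
params.add("drho", value=0.0)             # cross-track offset (km)
params.add("dtau", value=0.0)             # along-track timing offset (s)
for name in ("DWM", "HNU"):
    params.add("A_" + name, value=1.0)
    params.add("s_" + name, value=0.9, min=0.0, max=1.0)
    params.add("dt_" + name, value=0.0)

# result = minimize(residuals, params, args=(curves,), method="leastsq")
# Confidence regions then follow from constant-chi^2 boundaries on (drho, p0).
```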
2309.11884
The Spatiotemporal Scaling Laws of Bitcoin Transactions
This study, to the best of our knowledge for the first time, delves into the spatiotemporal dynamics of Bitcoin transactions, shedding light on the scaling laws governing its geographic usage. Leveraging a dataset of IP addresses and Bitcoin addresses spanning from October 2013 to December 2013, we explore the geospatial patterns unique to Bitcoin. Motivated by the needs of cryptocurrency businesses, regulatory clarity, and network science inquiries, we make several contributions. Firstly, we empirically characterize Bitcoin transactions' spatiotemporal scaling laws, providing insights into its spending behaviours. Secondly, we introduce a Markovian model that effectively approximates Bitcoin's observed spatiotemporal patterns, revealing economic connections among user groups in the Bitcoin ecosystem. Our measurements and model shed light on the inhomogeneous structure of the network: although Bitcoin is designed to be decentralized, there are significant geographical differences in the distribution of user activity, which has consequences for all participants and possible (regulatory) control over the system.
Lajos Kelemen, István András Seres, Ágnes Backhausz
2023-09-21T08:34:47Z
http://arxiv.org/abs/2309.11884v1
# The Spatiotemporal Scaling Laws of Bitcoin Transactions ###### Abstract This study, to the best of our knowledge for the first time, delves into the spatiotemporal dynamics of Bitcoin transactions, shedding light on the scaling laws governing its geographic usage. Leveraging a dataset of IP addresses and Bitcoin addresses spanning from October 2013 to December 2013, we explore the geospatial patterns unique to Bitcoin. Motivated by the needs of cryptocurrency businesses, regulatory clarity, and network science inquiries, we make several contributions. Firstly, we empirically characterize Bitcoin transactions' spatiotemporal scaling laws, providing insights into its spending behaviours. Secondly, we introduce a Markovian model that effectively approximates Bitcoin's observed spatiotemporal patterns, revealing economic connections among user groups in the Bitcoin ecosystem. Our measurements and model shed light on the inhomogeneous structure of the network: although Bitcoin is designed to be decentralized, there are significant geographical differences in the distribution of user activity, which has consequences for all participants and possible (regulatory) control over the system. Keywords:Bitcoin P2P network Scaling laws Geospatial data ## 1 Where is George? And Satoshi? How do people spend their money? Where and when do they send transactions? What are the scaling laws of the spatiotemporal spending patterns of users in major financial systems? Is there any significant difference in the spending patterns of cash and digital currencies? How much time elapses between two consecutive transactions of a user? How many kilometres do paper bills travel during their lifetime? Given the difficulty of obtaining this type of geospatial data from payment processors, these questions seem unanswerable at scale. Fortunately, while obtaining such geospatial data is challenging, we have found compelling answers in the case of cash. In December 1998, an influential website [https://www.wheresgeorge.com/](https://www.wheresgeorge.com/) was launched as a semi-serious game to track the movements of US dollar bills. Players were invited to enter the serial number of the bills, the time, and the place they received bills with this specific stamp on them. Over the years, a large database emerged that tracked the movement of hundreds of thousands of paper bills across the United States. In 2006, network scientists Brockmann, Hufnagel, and Geisel conducted a comprehensive analysis of this database [4]. They found that the trajectories of paper bills roughly follow a two-dimensional random walk known as Levy flight. In particular, the distances between subsequent observations of paper bills follow a power law distribution. Intuitively, bills predominantly jump small distances (e.g., remain in a city). At the same time, with non-negligible probability, they observed lengthy jumps (e.g., the bill's owner travelled across states). Similarly, the waiting times between transactions also follow a power law distribution. Now, turning our attention to digital currencies, particularly Bitcoin [14], we raise questions about whether similar scaling laws apply and how they might differ. One would expect significant differences from the distributions observed in physical cash. Most importantly, digital currencies do not have physical limitations, i.e., one can easily send electronic transactions across continents. 
It would be surprising if the spatiotemporal spending patterns of digital currencies would also follow a two-dimensional Levy flight. But then, what kind of distribution do they follow? Would it be possible to assess this? Traditional centralized financial institutions, e.g., banks and credit card companies, extensively collect, analyze, and apply real-time geospatial data of their customers. Geographic information systems manage resources and optimize bank branch networks and marketing efforts. While centralized financial systems heavily rely on geospatial data for various purposes, obtaining and analyzing this data for academic research poses significant challenges due to its sensitivity and the reluctance of financial institutions to share it. This is partly due to the risks for re-identification using credit card metadata [5]. On the other hand, decentralized digital currencies, such as Bitcoin, publicly offer this type of user data. However, it is not easy to map IP addresses to cryptocurrency addresses as it requires a large-scale measurement on the peer-to-peer (P2P) network of Bitcoin. There are multiple known vulnerabilities on the Bitcoin P2P network protocol (most of them are now patched) that could have allowed anyone to link IP addresses to Bitcoin transactions [1, 2, 8, 12]. These papers develop techniques to reduce the anonymity guarantees of the P2P network of Bitcoin. Still, they do not analyze the resulting data sets they obtained. Often, they run their deanonymization attacks solely on testnets due to ethical concerns [7]. Fortunately, we obtained a substantial dataset consisting of 1797 IP addresses and 20680 Bitcoin addresses, dating back to 2013, courtesy of the authors of [8]. This dataset is invaluable for our research. Motivation for the study.The following applications and network scientific questions motivate a deeper understanding of Bitcoin's geospatial scaling laws. **Bussinesses Accepting Cryptos**: Crypto companies, to grow and scale, need to know their product's usage better. In particular, they could have aggregate geospatial data about their customers. Such information allows companies to decide in which countries they should enable cryptocurrency payment options to reach more users or where to establish new cryptocurrency ATMs. **Regulatory clarity**: The wider cryptocurrency community can only hope for regulatory clarity if regulators understand Bitcoin's geospatial usage. Regulators might want to focus on parts of their corresponding country with high cryptocurrency activity for consumer protection and taxation purposes. #### 1.1.1 Network scientific understanding and privacy Finally, a high-quality geospatial database facilitates a deep network scientific understanding of Bitcoin and cryptocurrencies. We can assess Bitcoin's economic network's scaling laws, its (de)centralization, robustness, and privacy (mixing) characteristics. Our contributions.In this work, we provide the following contributions. **Bitcoin transactions' spatiotemporal scaling laws**: We characterize Bitcoin transactions' spatiotemporal scaling laws in Section 2 using the aforementioned database. To the best of our knowledge, this is the first work to assess the spatiotemporal spending patterns of any cryptocurrency empirically. **Markovian model of Bitcoin transactions**: Bitcoin's spatiotemporal patterns do not lend themselves to be characterized as a simple two-dimensional random walk, e.g., Levy flight. 
However, in Section 3, we can approximate well the observed spatiotemporal patterns of Bitcoin with a simple Markovian model that sheds light on the economic connections between various groups of users (i.e., miners, merchants, and users) in the Bitcoin ecosystem. **Open-source code**: We applied a database from [8] that contains spatiotemporal data on Bitcoin users from 2013. We publish our code at the following link: [https://anonymous.4open.science/r/Scaling-Laws-of-Bitcoin-C834](https://anonymous.4open.science/r/Scaling-Laws-of-Bitcoin-C834). Due to privacy and ethical concerns of the applied data, we can provide access to the geospatial data upon request for reproduction and future research. The rest of this paper is organized as follows. In Section 2, we introduce the database from [8] used to measure the scaling laws of Bitcoin and present the results of our measurements. In Section 3, we describe a Markovian model of Bitcoin users' spatial sending patterns that explains the observed spatial behaviours. We conclude our work in Section 4 with future directions. ## 2 The Spatiotemporal Patterns of Bitcoin Transactions In this Section, we describe the obtained data from Juhasz et al. [8] and thoroughly analyze their database through the lens of network science. ### The Collected Data The dataset we analyze in this study was collected by Juhasz et al. [8] during a comprehensive data collection campaign conducted between October 6th, 2013, and December 25th, 2013. During this two-month period, they operated 140 Bitcoin nodes strategically distributed across various geographical regions. The campaign yielded an impressive corpus of data, recording approximately 300 million broadcast and relay events, encompassing \(4,155,387\) transactions and identifying \(124,498\) unique IP addresses. To ensure the reliability of the data, Juhasz et al. employed a Bayesian approach to ascertain the actual originators of Bitcoin transactions, allowing them to establish meaningful connections between IP addresses and Bitcoin transactions. It is worth noting that only mappings with a probability of correctness exceeding 95% were retained for our analysis. Our database consists of \(101,342\) Bitcoin transactions, wherein both the sender's and receiver's IP addresses are known. While this may represent a small fraction (\(\approx 2.44\%\)) of the overall \(\sim 4.15\) million Bitcoin transactions during the study period, it is statistically significant for our research. In contrast, we presume that Brockmann et al. [4] had access to a significantly smaller fraction of the total number of US dollar cash transactions. We acknowledge the potential use of network anonymity tools, such as Tor or i2p, by some users to obscure their actual IP addresses. However, it is important to note that these tools were not as prevalent during Bitcoin's early days, adding a layer of robustness to our dataset. This dataset serves as the foundation for our investigation, offering valuable insights into Bitcoin's spatiotemporal scaling laws and providing a unique window into the behaviour of cryptocurrency users during this critical period. ### Spatiotemporal Patterns of Bitcoin Transactions Unlike physical cash, Bitcoin transactions' distance distribution does not follow a power-law distribution. Observe the much stronger tails of Bitcoin's transaction distance distribution in Figure 1 as opposed to the US dollar's power law distribution (\(\sim x^{-1.59}\)). 
The average distance a Bitcoin transaction covers is 5588.71km with a median of 6236.6km. In Section 3, we create a Markovian transaction model that explains and generates the same empirical distance distribution as observed in Figure 1. The elapsed time between the creation of a Bitcoin unspent transaction output (UTXO) and its spending as an input of a transaction is called _waiting time_. We found that just like in the case of US dollar bills (\(\sim x^{-0.6}\)), Bitcoin's waiting time distribution also follows a power law distribution (\(\sim x^{-1.08}\)), see Figure 1. The waiting time distribution has a mean of 18.44 days and a median of 1.42 days. In accordance with previous work [13], we attribute the outstanding number of small waiting times to gambling activity, e.g., Satoshi Dice. Figure 1: Distance and waiting time distributions of Bitcoin and US dollar as measured in [4]. Note the left figure is log-lin, while the right figure is log-log. Independence of transaction value and distance.We calculated the correlation of the transactions' transferred Bitcoin value and many spatial properties of the transactions, e.g., the distance between sender and receiver or the latitude and longitude of the sender. Importantly, but perhaps unsurprisingly, we found that the value of the transactions is uncorrelated with the distance between sender and receiver (correlation \(0.003\)). This is somewhat expected as Bitcoin transactions do not have physical limitations. Interestingly, the more southern the transaction's sender is, the longer the distance between the transaction's sender and receiver will be (correlation \(-0.363\)). This is because the countries in the global south transact regularly with the northern countries, e.g., Argentina with Germany. Similarly, we found that the distance between a transaction's sender and receiver and the waiting times are independent (correlation \(0.03\)). Transaction activity and the Bitcoin user graphIn the analyzed timeframe, Bitcoin's most active users were concentrated in regions such as the US East Coast, Europe, and Southeast Asia, with notable activity in China (see Figure 4). We further examined the Bitcoin user graph, identifying users by their Bitcoin addresses and tracking their transactions. Notably, the distribution of incoming and outgoing transactions from these users follows a power-law distribution (approximately \(\sim x^{-1.51}\) and \(\sim x^{-1.59}\), respectively). This finding aligns with earlier research [11] and suggests a concentration of influence among a subset of users - a phenomenon known as the "Matthew effect". This observation raises questions about the decentralized nature often associated with cryptocurrencies. Regulation and Bitcoin activityIn October 2013, a significant development occurred when Baidu, China's largest search service, publicly announced its acceptance of Bitcoin as a payment method for its firewall and DDoS protection service [9]. Following this announcement, a notable surge in Bitcoin activity within China was observed, see Figure 3. However, this surge was short-lived, as Figure 2: Transactions’ waiting time (left) and value distribution (right) as a function of distance. Note both figures are log-lin, and the colour bar is also log. on December 5, 2013, the Chinese Communist Party declared Bitcoin to be an illegal currency in China and prohibited Chinese Bitcoin exchanges from accepting further renminbi deposits [10]. 
This regulatory action immediately and dramatically impacted Bitcoin transactions originating from Chinese IP addresses, as reflected in the data, see Figure 3. These real-world events shaped the trajectory of Bitcoin activity in China and serve as a compelling illustration of the intricate relationship between regulatory decisions and cryptocurrency usage patterns. ## 3 A Markovian Model of Bitcoin Transactions We observed in Section 2.2 that the distribution of distances covered by Bitcoin transactions does not follow a simply characterizable two-dimensional (random) walk. To explain the empirical geospatial distribution of Bitcoin transactions, in this Section, we introduce a Markovian transaction model that generates the same spatial distribution of Bitcoin transactions as in Figure 1 (left). First, we group each Bitcoin address into one of the following three groups based on their activity in the observed period. The defining parameters of our Markovian model are established later as the result of an optimization problem. **Miners**: A node is classified as a miner if they had sent at least one transaction with value _val_ and participated in not more than \(\mathsf{tx}_{miner}\) Bitcoin transactions either as a sender or a receiver. **Merchants/Service providers**: Participated as sender or receiver in at least \(\mathsf{tx}_{merch}\) transactions in the studied period. Examples include custodial Bitcoin exchanges, non-custodial wallets, or online casinos. **Users**: If none of the above holds for a particular Bitcoin address in our database, then the Bitcoin network participant is deemed a "regular" user. Furthermore, each participant is characterized by their geographic location according to their IP address. Specifically, users are assigned to the continent Figure 3: The number of sent Bitcoin transactions from the world and China (left). The in- and out-degree (\(\sim x^{-1.51}\) and \(\sim x^{-1.59}\)) of the Bitcoin user graph (right). where they are based, i.e., America (AM), Asia (AS), and Europe (EU). In our database, there was no activity from Africa and only a few transactions from Australia and Oceania were added to the Asian user group. This resulted in nine user categories depending on the network participants' location (AM, AS, and EU) and assigned user type (miner, merchant, user). We iterated over the parameter space \(\mathsf{params}:=(\mathit{val},\mathsf{tx}_{miner},\mathsf{tx}_{merch})\) to find the optimal parametrization of our Markovian model, i.e., which \(\mathsf{params}\) triplet minimizes the distance between the generated distance distributions and the empirical. We assigned network participants to the nine above-mentioned user groups for a fixed \(\mathsf{params}\) triplet. The transition matrix for a given \(\mathsf{params}\) list was defined from the relative transaction frequencies between the user groups that were assigned to the nine user groups. We generated 500 transactions for each \(\mathsf{params}\) triplet. For each 500 transactions, we assigned a distance defined by the particular transition matrix. For each possible pair of a sender and a receiver from the nine groups, we determined the empirical distribution of the transactions' distance from the first group to the second one. Once we know the sender's and receiver's user groups for each transaction, we randomize the distance from this probability distribution. 
We computed the Kolmogorov-Smirnov (K-S) statistics [15] between the empirical distance distributions, see Figure 4, and the Markovian-generated. The largest p-value \(p=0.081\) in the K-S statistics can be found for \(\mathsf{params}=(15\$,50,120)\). To validate the accuracy of our model, we examined its assignment of some well-known Bitcoin addresses. Notably, it correctly identified addresses associated with prominent entities like SatoshiDice and OkCoin, reinforcing its effectiveness in categorizing users. We conclude that it seems elusive to describe Bitcoin's spatiotemporal scaling laws (e.g., the distance distribution of Figure 1) with a continuous probability distribution as it was possible for the US dollar [4]. We attribute this difficulty to the discontinuity of the observed distance distribution, e.g., certain distances cannot appear due to physical limitations, in particular, the special structure of the continents. Figure 4: The empirical distance distribution (cf. Figure 1 (left)) and the Markovian-generated for the optimal \(\mathsf{params}=(15\$,50,120)\) yielding the largest p-value \(p=0.081\) in the Kolmogorov-Smirnov statistics between the two distributions (left). Bitcoin transaction activity heatmap during the studied period, i.e., from October to December 2013 (right), note similarity with [13]. Instead, we built a Markovian model that uses the intrinsic properties of the underlying network to generate the desired distance distribution. We observe in Figure 5 that the relative frequencies differ significantly across the identified user groups, for example, high merchant and moderate miner activity. Although our model is rather simple, based on this, one can identify users with significantly different roles in the Bitcoin ecosystem: some are involved in many transactions, others are more like regular users. Hence, some kind of inhomogeneity appears despite the originally decentralized system of Bitcoin transactions. Similarly, we observe inhomogeneity across user groups in the stationary probabilities. Figure 5: The relative frequency matrix with logarithmic probabilities (left) and stationary probabilities (right). Users are grouped into three classes: miners (MI), merchants (ME), and regular users (US). The three main prevalent geographic locations are denoted as America (AM), Asia (AS), and Europe (EU). In conclusion, our Markovian model provides valuable insights into Bitcoin transaction distributions, revealing notable inhomogeneities across user groups. These findings hold significant implications, particularly in the realm of regulations, as they highlight the need to focus regulatory efforts on user groups with the most substantial impact within the Bitcoin ecosystem. ## 4 Conclusion and Future Directions In this study, we have delved into Bitcoin's spatiotemporal scaling laws, shedding light on the spending patterns of its users. Our findings underscore the potential value of anonymized, aggregate statistics about users' spatiotemporal spending behaviours, a prospect that could significantly benefit custodial exchanges, wallet software providers, and other cryptocurrency businesses. However, our work also serves as a stark reminder of the pressing need for robust network anonymity safeguards within the peer-to-peer layer of cryptocurrencies. As the cryptocurrency landscape evolves, the risk of the lack of anonymity being exploited against users looms larger. For example, nation-states may seek to tax Bitcoin users based on their involvement in the Bitcoin network. On a more optimistic note, the Bitcoin and the broader cryptocurrency community could already use a rich line of literature to enhance network anonymity [3, 6]. Acknowledgements The research was co-funded by the project Strengthening the EIT Digital Knowledge Innovation Community in Hungary (2021-1.2.1-EIT-KIC-2021-00006), implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the 2021-1.2.1-EIT-KIC funding scheme.
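As a supplementary note on the stationary probabilities reported in Figure 5 (right): for any row-stochastic transition matrix built from the relative transaction frequencies, the stationary distribution solving \(\pi P=\pi\) can be obtained, for example, by power iteration. The sketch below, assuming a NumPy array P over the nine user groups, is illustrative only and is not taken from the published code.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Return pi with pi @ P = pi and pi.sum() == 1 for a row-stochastic P,
    computed by power iteration on the left eigenvector."""
    P = np.asarray(P, dtype=float)
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            break
        pi = new
    return pi / pi.sum()
```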
2305.00484
Sequential Markov Chain Monte Carlo for Lagrangian Data Assimilation with Applications to Unknown Data Locations
We consider a class of high-dimensional spatial filtering problems, where the spatial locations of observations are unknown and driven by the partially observed hidden signal. This problem is exceptionally challenging as not only is it high-dimensional, but the model for the signal yields longer-range time dependencies through the observation locations. Motivated by this model we revisit a lesser-known and \emph{provably convergent} computational methodology from \cite{berzuini, cent, martin} that uses sequential Markov Chain Monte Carlo (MCMC) chains. We extend this methodology for data filtering problems with unknown observation locations. We benchmark our algorithms on Linear Gaussian state space models against competing ensemble methods and demonstrate a significant improvement in both execution speed and accuracy. Finally, we implement a realistic case study on a high-dimensional rotating shallow water model (of about $10^4-10^5$ dimensions) with real and synthetic data. The data is provided by the National Oceanic and Atmospheric Administration (NOAA) and contains observations from ocean drifters in a domain of the Atlantic Ocean restricted to the longitude and latitude intervals $[-51^{\circ}, -41^{\circ}]$, $[17^{\circ}, 27^{\circ}]$, respectively.
Hamza Ruzayqat, Alexandros Beskos, Dan Crisan, Ajay Jasra, Nikolas Kantas
2023-04-30T14:00:13Z
http://arxiv.org/abs/2305.00484v3
Sequential Markov Chain Monte Carlo for Lagrangian Data Assimilation with Applications to Unknown Data Locations ###### Abstract We consider a class of high-dimensional spatial filtering or data assimilation problems, where the spatial locations of observations are unknown and driven by the partially observed hidden signal. This problem is exceptionally challenging as not only is high-dimensional, but the model for the signal yields longer-range time dependencies through the observation locations. Motivated by this model we revisit a lesser-known and _provably convergent_ computational methodology from [3, 10, 23] that uses sequential Markov Chain Monte Carlo (MCMC) chains. We extend this methodology for data filtering problems with unknown observation locations. We benchmark our algorithms on Linear Gaussian state space models against competing ensemble methods and demonstrate a significant improvement in both execution speed and accuracy. Finally, we implement a realistic case study on a high-dimensional rotating shallow water model (of about \(10^{4}-10^{5}\) dimensions) with real and synthetic data. The data is provided by the National Oceanic and Atmospheric Administration (NOAA) and contains observations from ocean drifters in a domain of the Atlantic Ocean restricted to the longitude and latitude intervals \([-51^{\circ},-41^{\circ}]\), \([17^{\circ},27^{\circ}]\) respectively. **Keywords**: Spatial Filtering; Markov Chain Monte Carlo; High-Dimensional Filtering. **MSC classes**: 62M20, 60G35, 60J20, 94A12, 93E11, 65C40 **Code available at: [https://github.com/ruzayqat/filtering_unknown_data_locations](https://github.com/ruzayqat/filtering_unknown_data_locations)** **Corresponding author**: Hamza Ruzayqat. E-mail: [email protected] ## 1 Introduction Consider a state-space model comprising of two elements, an unobserved continuous time stochastic process \((Z_{t})_{t\geq 0}\) with \(Z_{t}\in\mathbb{R}^{d}\) and a sequence of observations \((Y_{t_{i}})_{i\geq 1}\) taken at a sequence of known time instants \((t_{i})_{i\geq 1}\). We will assume each \(Y_{t_{i}}\) is a random variable depending on the evolution path of the unobserved process since the last time instant \((Z_{t})_{t\in[t_{i-1},t_{i}]}\). \((Z_{t})_{t\geq 0}\) and \((Y_{t_{i}})_{i\geq 1}\) are combined in a joint stochastic model and the objective of filtering or data assimilation is to estimate the unobserved state \(Z_{t}\) given the observations up to that time \((Y_{t_{i}})_{t_{i}\leq t}\). Such problems occur routinely in applications such as numerical weather prediction, oceanography, finance and engineering, see [9, 16, 17] for example applications and [2, 7] for book length introductions. The problem of filtering is to approximate the conditional distribution of each \(Z_{t_{i}}\) given \((Y_{t_{p}})_{p\leq i}\) also known as the filtering distribution or simply the _filter_. This is a challenging task as in most cases of practical interest, with the exception of linear model observations and discrete-time, linear signal dynamics, the filter is not analytically available and hence one often has to resort to numerical approximations. There are a plethora of techniques that have been presented in the literature, but perhaps the only two exact approaches (in the sense that the accuracy can be made arbitrarily high, subject to computational cost) are based upon particle filters (PF) and Markov chain Monte Carlo (MCMC); see e.g. [2, 7, 12]. 
PFs and MCMC are simulation (or Monte Carlo) based approximations of the filter such that as the number of samples grows one can recover the exact filter. PFs are specifically designed for filtering and have a fixed computational complexity per-time-step update. Whilst traditional iterative MCMC can be used for filtering at a fixed time, the cost grows at a linear order in the time step at each filtering update and so it is not often used for this task. Instead, if one wishes to explore this as an alternative direction, a more practical approach is to use sequential MCMC chains that target the filter at each time and also use the filter update equations. Motivated by the challenges faced in high dimensional filtering and Lagrangian data assimilation we revisit a less popular computational methodology initially proposed in [3]. We note that this method is _provably convergent_ [23] and has been applied successfully in challenging filtering problems in the past, such as for point process models [10, 23] and group tracking [8, 21]. The problem of high-dimensional filtering is even more challenging. For instance, in numerical weather prediction models \(d\) is of the order of millions. Unfortunately, simple or standard designs of PFs require an exponential cost in the dimension \(d\) to achieve a certain precision and hence are impractical for very high dimensional applications. Several methods [4, 6, 5, 18, 25, 26] based upon a combination of sequential Monte Carlo (SMC) (e.g. [13, 12]) and MCMC can be adopted. For certain classes of models, they have been shown to be both mathematically and empirically able to deal with high-dimensional filtering at a cost that is polynomial in the dimension of the problem. We note, however, that these methods are not a universal solution for the high-dimensional filtering problem due to the substantial computational cost involved. Several approximate (but biased) methods such as the ensemble Kalman filter (EnKF), ensemble-transform Kalman filter (ETKF), and error-subspace transform Kalman filter (ESTKF) can be found in the literature, which despite being biased are the most popular and widely used methods for high-dimensional filtering, due to their very low computational cost relative to SMC-MCMC methods. Motivated by problems in oceanography, we consider an even more complex high-dimensional filtering problem. In this case the observers travel on a spatial domain, such as a 2-dimensional grid, and their location is unknown and driven, deterministically, by the signal \((Z_{t})_{t\geq 0}\) that is used to model the velocity field (among other variables) on the domain of interest. This is also known in the literature as Lagrangian Data Assimilation, and this observation mechanism with unknown observer locations induces an extra level of complexity versus the traditional filtering or data assimilation problem. The introduction of dependence of the spatial locations of observation on the signal yields a long-range (in time) dependence on the signal, which is not present in the classical filtering problem; the details are given in Section 2. The ensemble methods mentioned above will not be accurate for this new and interesting problem. EnKF type methods are known to struggle with nonlinearities induced by Lagrangian observers even when the locations of the drifters are well known [1], and the situation is much worse in the unknown location case.
This was confirmed by extensive preliminary numerical simulations leading to this paper, which in addition showed that the computational cost required to implement SMC methods is very high due to the large number of tempering steps required. This motivated extending a sequential MCMC method developed for the filtering of point processes [10], for this new type of filtering problem. The method of [10] has been shown in [23] to be a theoretically justified method for filtering (in the sense that the estimate will converge to the true filter as the number of samples grows to infinity) and seems to be particularly amenable for the filtering problem in high-dimensions, with a cost of \(\mathcal{O}(d)\) per update step. The contributions of this article are as follows: 1. We formulate a new filtering problem with spatial Lagrangian observations at unknown locations. We develop, based upon [10], a generic sequential MCMC method that is effective for high-dimensional and nonlinear models. 2. We demonstrate the performance of this method in two ways. First we use a tractable Linear Gaussian state space model and compare in terms of accuracy and execution speed with ensemble Kalman filter methods (EnKF, ETKF and ESTKF) and show a significant improvement. Then we present a challenging realistic high dimensional data assimilation case study for which ensemble methods fail due to the nonlinearities present in the model and observation scheme. We use a rotating shallow water model and observations obtained both at known and unknown spatial locations. Our simulation scenarios are realistic and constructed using real data from NOAA. This article is structured as follows. Section 2 describes the class of models considered in this article. Section 3 presents the algorithms adopted for our problem of interest. Section 4 demonstrates the methodology on several examples. ## 2 Modelling Preliminaries ### State-Space Models and Filtering We consider an unknown continuous time stochastic process \((Z_{t})_{t\geq 0}\), with \(Z_{t}\in\mathsf{Z}\subseteq\mathbb{R}^{d}\), for which one has access to partial information via observations arriving at times, \(\mathsf{T}:=\{t_{1},t_{2},\dots\}\), \(t_{0}<t_{1}<t_{2}<\cdots\), \(t_{0}=0\). At any given time \(t_{k}\in\mathsf{T}\) we assume there are a fixed number of \(d_{y}\in\mathbb{N}\) observations obtained from \(N_{d}\in\mathbb{N}\) observers or drifters with the \(j^{th}\) drifter's observation denoted by \(y^{j}_{t_{k}}\in\mathsf{Y}\) and all observations collected in the vector \(y_{t_{k}}\in\mathsf{Y}^{N_{d}}\subseteq\mathbb{R}^{d_{y}}\). These observations are taken at spatial locations \(x^{j}_{t_{1}},x^{j}_{t_{2}},\dots,\,x^{j}_{t_{k}}\in\mathsf{X}\subseteq\mathbb{ R}^{s}\), where \(s\) is typically \(2\) or \(3\). The collection of spatial locations of all observers at an observation time \(t_{k}\in\mathsf{T}\) is written as a vector \(x_{t_{k}}\in\mathsf{X}^{N_{d}}\). We adopt a continuous time modelling approach for \(Z_{t}\) motivated by applications such as atmosphere and ocean sciences. In these topics physical quantities such as wind or water velocity are modelled by continuous time space varying physical models comprising of systems of partial differential equations (PDEs). To allow for model uncertainty we need to incorporate stochastic dynamics for \(Z_{t}\), for which the noise can be either added continuously (as in stochastic PDEs) or discretely in time (e.g. see Example 2.1). 
Our framework requires that at the discrete time instances in \(\mathsf{T}\), \((Z_{t_{k}})_{k\geq 1}\) forms a discrete time Markov process with a known and well defined (positive) transition density. For \(A\subseteq\mathsf{Z}\) we shall assume that the transition dynamics for \((Z_{t})_{t\geq 0}\) can be obtained as \[\mathbb{P}(Z_{t_{k}}\in A|z_{t_{k-1}})=\int_{A}f_{k}(z_{t_{k-1}},z_{t_{k}})dz _{t_{k}}\] where \(\mathbb{P}\) denotes probability. In the notation for \(f_{k}\) the subscript is included to account for possible time-inhomogeneous structure of \((Z_{t})_{t\geq 0}\), or dependence on the time increments and \(Z_{0}\) is taken as known. We will assume that, at the very least, \(f_{k}(z_{t_{k-1}},z_{t_{k}})\) or a suitable approximation of it can be evaluated pointwise; examples include stochastic differential equations (SDEs) and their various time discretization approximations and similarly stochastic PDEs or PDEs with discrete time additive noise (see Example 2.1 below). **Example 2.1** (PDE with discrete time additive spatial noise).: _We present an example which will be considered often in the paper. We will consider \(Z_{t}\) to be a vector containing hidden state variables at positions defined on a bounded domain with known boundary conditions. Consider \(\mathsf{Z}=\mathbb{R}^{d}\). Let \(\Phi(z_{s},s,t)\), \(\Phi:\mathsf{Z}\times(\mathbb{R}^{+})^{2}\to\mathsf{Z}\), be the solution of a PDE with initial condition \(z_{s}\) run from time \(s\) to \(t\) with \(0\leq s<t\). Then, an example of our model would be for \(k\in\mathbb{N}\), \(Z_{t}=\Phi(Z_{t_{k-1}},t_{k-1},t)\), with \(t\in[t_{k-1},t_{k})\), and_ \[Z_{t_{k}}=\Phi(Z_{t_{k-1}},t_{k-1},t_{k})+W_{t_{k}}\] _where \(W_{t_{k}}\stackrel{{\mathrm{i.i.d.}}}{{\sim}}\mathcal{N}_{d}(0,R)\) is an i.i.d. sequence of \(d-\)dimensional Gaussian random variables of zero mean and covariance matrix \(R\). In such scenarios, the process in continuous time is a PDE that is perturbed by noise at discrete times defined when the observations arrive and \(f\) is a Gaussian density of mean \(\Phi(z_{t_{k-1}},t_{k-1},t_{k})\) and covariance matrix \(R\). Note that, in practice, one may have to replace \(\Phi\) with a numerical approximation of the solution of the PDE._ #### 2.1.1 The standard Lagrangian observation model We will assume that each observation vector \(Y_{t_{k}}\) depends only on \(Z_{t_{k}}\) and there is a positive conditional likelihood density, i.e. for \(k\in\mathbb{N}\), \(B\subseteq\mathsf{Y}^{N_{d}}\), \[\mathbb{P}\left(Y_{t_{k}}\in B|(Z_{t})_{t\geq 0},(X_{t})_{t\geq 0},(Y_{t})_{t \in\mathsf{T}\setminus\{t_{k}\}}\right)=\int_{B}G(z_{t_{k}},x_{t_{k}},y_{t_{k }})dy_{t_{k}}=\int_{B}g_{k}(z_{t_{k}},y_{t_{k}})dy_{t_{k}}. \tag{1}\] Note in the standard case the observer locations, \(x_{t_{k}}\), are known and part of the data, so in the notation we can use a time inhomogeneous conditional likelihood (and a subscript \(k\)) to denote this dependence. Filtering and SmoothingInference for the hidden state is performed using conditional distributions given the available data. One can either consider the whole path trajectory \[\mathbb{P}(Z_{t_{1}},\ldots,Z_{t_{k}}|(X_{t_{p}})_{p\leq k},(Y_{t_{p}})_{p \leq k})\qquad\text{(smoothing)}\] or just the marginal \[\mathbb{P}(Z_{t_{k}}|(X_{t_{p}})_{p\leq k},(Y_{t_{p}})_{p\leq k})\qquad\text{ (filtering)}.\] Often filtering is referred to as data assimilation and we will use both terms interchangeably. 
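To make Example 2.1 concrete, the following is a minimal, illustrative Python sketch (not the authors' code) of such a signal model: a deterministic solver \(\Phi\) is propagated between observation times and perturbed by additive Gaussian noise, and the corresponding Gaussian transition density \(f_{k}\) is evaluated. The toy solver `phi`, the dimension and the noise covariance are placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                      # state dimension (toy value)
R = 0.05 ** 2 * np.eye(d)  # noise covariance (toy value)


def phi(z, s, t):
    """Placeholder for the deterministic solver Phi(z, s, t).

    Here a simple contraction towards zero; in the paper Phi is the
    numerical solution of a PDE (e.g. the shallow-water equations)."""
    return np.exp(-(t - s)) * z


def sample_signal(z0, obs_times):
    """Propagate Z between observation times and add Gaussian noise W_{t_k}."""
    z, t_prev, path = z0.copy(), 0.0, []
    for t_k in obs_times:
        z = phi(z, t_prev, t_k) + rng.multivariate_normal(np.zeros(d), R)
        path.append(z)
        t_prev = t_k
    return np.array(path)


def log_f(z_prev, z_new, s, t):
    """Log of the Gaussian transition density f_k(z_prev, z_new)."""
    diff = z_new - phi(z_prev, s, t)
    _, logdet = np.linalg.slogdet(2 * np.pi * R)
    return -0.5 * (diff @ np.linalg.solve(R, diff) + logdet)


obs_times = [1.0, 2.0, 3.0]
path = sample_signal(np.ones(d), obs_times)
print(log_f(np.ones(d), path[0], 0.0, 1.0))
```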
For \(k\in\mathbb{N}\) we define the smoothing density \[\Pi_{k}(z_{t_{1}},\ldots,z_{t_{k}}) \propto\prod_{p=1}^{k}f_{p}(z_{t_{p-1}},z_{t_{p}})g_{p}(z_{t_{p}}, y_{t_{p}})\] \[\propto f_{k}(z_{t_{k-1}},z_{t_{k}})g_{k}(z_{t_{k}},y_{t_{k}})\Pi_ {k-1}(z_{t_{1}:t_{k-1}}). \tag{2}\] For \(k\in\mathbb{N}\), we are interested in estimating expectations with respect to (w.r.t.) the filtering distribution (or simply the filter) \[\pi_{k}(\varphi):=\int_{\mathsf{Z}}\varphi(z_{t_{k}})\pi_{k}(z_{t_{k}})dz_{t_{k}}\] where \(\pi_{k}(z_{t_{k}})\) is the marginal in the \(z_{t_{k}}\) co-ordinate of the smoothing distribution and \(\varphi:\mathsf{Z}\rightarrow\mathbb{R}\) is \(\pi_{k}-\)integrable function. It easily follows that the filter density can be obtained recursively \[\pi_{k}(z_{t_{k}})\propto g_{k}(z_{t_{k}},y_{t_{k}})\int_{\mathsf{Z}}f_{k}(z_{t _{k-1}},z_{t_{k}})\pi_{k-1}(z_{t_{k-1}})dz_{t_{k-1}}. \tag{3}\] We note that the filtering problem is discrete time in nature due to the observations arriving at discrete times. One can still obtain \(\mathbb{P}(Z_{t}|(X_{t_{p}})_{p\leq k},(Y_{t_{p}})_{p\leq k})\) for \(t\in(t_{k},t_{k+1})\) by integrating \(\pi_{k-1}\) with the corresponding transition density (and applying Chapman-Kolmogorov equations). ### State Space models with Unknown Observer Locations We proceed to extend the observation model to allow the spatial locations where observations are obtained to depend on the state process. For example, in Lagrangian Data Assimilation in oceanography the unknown ocean velocities will directly affect the drifter's motion. In particular, we now assume that for \(j\in\{1,\ldots,N_{d}\}\) \[dX_{t}^{j}=h(X_{t}^{j},Z_{t})dt, \tag{4}\] for some function \(h:\mathsf{X}\times\mathsf{Z}\rightarrow\mathsf{X}\). We then modify the observation process, as originally considered in (1), for \(k\in\mathbb{N}\), \(B\subseteq\mathsf{Y}^{N_{d}}\) \[\mathbb{P}\left(Y_{t_{k}}\in B|(Z_{t})_{t\geq 0},(Y_{t})_{t\in\mathsf{T} \setminus\{t_{k}\}}\right)=\int_{B}G\left((z_{t_{k}},x_{t_{k}}),y_{t_{k}} \right)dy_{t_{k}},\] where \(G((z,x),\cdot)\) is a probability density on \(\mathsf{Y}^{N_{d}}\). This observation model requires simulation of \(x_{t_{k}}\) in parallel to \(z_{t_{k}}\). Note that \((x_{t_{k}})_{k\geq 1}\) is a deterministic function of \((z_{t})_{t\leq t_{k}}\) and does not contain any additional information, but is required for the purpose of computing \(G\) and needs to be propagated in the recursions for the dynamics. Compared to the model in (1), one has here that \(G\left((z_{t_{k}},x_{t_{k}}),y_{t_{k}}\right)=g_{k}\left((z_{t})_{t_{k-1}\leq t \leq t_{k}},y_{t_{k}}\right)\) instead. Filtering and SmoothingThe filtering and smoothing recursions are analogous to the known observation location case and (2)-(3) in Section 2.1.1 where \(G\) replaces \(g_{k}\). #### 2.2.1 Discussion on the Choice of the Computational Methodology It should be clear from the previous discussion that since \[x_{t_{k}}^{j}=x_{0}^{j}+\int_{0}^{t_{k}}h(x_{s}^{j},z_{s})ds,\quad(j,k)\in\{1,\ldots,N_{d}\}\times\mathbb{N}, \tag{5}\] then via the presence of \(x_{t_{k}}\) the likelihood function \(G((z_{t_{k}},x_{t_{k}}),\cdot)\) will depend upon the complete path realization of the hidden process \((Z_{s})_{0\leq s\leq t_{k}}\) up to time \(t_{k}\). This additional _long-range dependence_ is the barrier to using some of the existing particle filtering methods listed in the introduction. 
To explain this further we will first consider the case where one augments the state of the state space model with \((x_{t_{k}})_{k\geq 1}\) and apply a standard PF, which is a sequential simulation method that propagate samples by combining importance sampling for (2) and resampling steps. Even for \(d=1\) the standard PF, which is generally a reasonable method in that context, would be destined to fail, due to the well-known path degeneracy phenomenon [19]. This is a consequence of successive resampling steps that cause a lack of diversity in the samples representing the complete path \((Z_{s})_{0\leq s\leq t_{k}}\) and approximating the corresponding smoothing density \(\Pi_{k}\). On the other hand PFs can approximate very accurately the filter \(\pi_{k}\) and this relies on the stability properties of the filter itself, which in turn requires reasonable mixing in the hidden state and a non degenerate likelihood function \(G\)[12]. When augmenting hidden state dynamics with static parameters and implementing PFs, the deterministic components in the state vector cannot mix. Even when MCMC kernels are used as artificial dynamics they will depend on the complete path of the hidden state and path degeneracy will result into high Monte Carlo variance [19]. For this model with the deterministic dynamics \(x_{t_{k}}\) an online PF implementation (with fixed computational cost per time) will suffer from both issues described: path degeneracy and lack of stochastic mixing. This is due to \((X_{t})_{t\geq 0}\) being included in the information of \((Z_{t})_{t\geq 0}\) and this justifies a separate treatment and algorithm design for this class of state space models. A more detailed theoretical study of filter properties for this model and its consequence for numerical methods is left for future work. **Remark 2.1**.: _There are some exceptions where the joint deterministic dynamics \((X_{t},Z_{t})_{t\geq 0}\) may display adequate mixing in continuous time as in some hypo-elliptic SDEs that can result in an ideal \(f_{k}\) with sufficient mixing. Note that when time discretization is performed in these models the mixing in \((X_{t})_{t\geq 0}\) will deteriorate or vanish unless recent advanced numerical schemes are used, e.g. as in [24]._ An alternative course of action is to aim for methods that aim to approximate \(\pi_{k}\) without using importance sampling and resampling for \(\Pi_{k}\) and (2). The method in [26], which was designed for high-dimensional filtering, has been our first attempt to perform filtering for this model. The method transports particles from a variant of (2) that only considers the path between \(t_{k-L+1}\) to \(t_{k}\) for a small lag \(L\) and thus bypasses the degeneracy issues by introducing a moderately low bias. However, we found in extensive preliminary simulations, that the (computational) time to run such a methodology was significant and aimed to lower the computational cost. Our investigation focused on efficiently approximating (3) directly via a sequence of MCMC chains initialized from a previously obtained approximation of \(\pi_{k-1}\). 
At time \(t_{1}\) the algorithm targets \(\pi_{1}(z_{t_{1}})\) exactly by running an MCMC kernel with invariant distribution \(\pi_{1}(z_{t_{1}})\) for \(N\) steps, and at later times \(t_{k}\), \(k\geq 2\), it targets an approximation of the filter distribution obtained by replacing \(\pi_{k-1}(z_{t_{k-1}})\) in (2) by an empirical density \[S^{N}_{z,k-1}(z_{t_{k-1}}):=\frac{1}{N}\sum_{i=1}^{N}\mathbbm{1}_{\left\{Z^{( i)}_{t_{k-1}}\right\}}(z_{t_{k-1}}),\] where \(Z^{(1)}_{t_{k-1}},Z^{(2)}_{t_{k-1}},\cdots,Z^{(N)}_{t_{k-1}}\) are the MCMC samples obtained in the preceding time step and the superscripts denote MCMC iteration number. The method proved particularly effective and efficient in high dimensional problems and is trivially parallelizable. ## 3 Sequential MCMC for Lagrangian Data Assimilation ### Standard State-Space Models We begin by detailing the method in [10] in the case of the standard state-space model. This latter setting differs from the one for which the method was originally introduced. The approach is based upon the well-known prediction-updating structure that is given in (3). The idea is to initiate the method with an MCMC algorithm associated to an MCMC kernel of invariant measure \[\pi_{1}(z_{t_{1}})\propto f_{1}(z_{0},z_{t_{1}})g_{1}(z_{t_{1}},y_{t_{1}}). \tag{6}\] There are many such kernels and in this article we focus on the random walk Metropolis (RWM) kernel exclusively. Such a kernel costs \(\mathcal{O}(d)\) per step and requires one to be able to compute \(f\) (or an approximation thereof). The MCMC kernel is run for \(N\) steps producing \(N\) samples and an \(N-\)sample approximation of \(\pi_{1}(\varphi)\) given by \[\widehat{\pi}_{1}^{N}(\varphi):=\frac{1}{N}\sum_{i=1}^{N}\varphi(z_{t_{1}}^{(i )}),\] where \(z_{t_{1}}^{(i)}\) is the \(i^{th}-\)sample of the Markov chain. At a subsequent time point \(k\geq 2\), using (3). If one has access to an \(N-\)sample approximation of \(\pi_{k-1}(z_{t_{k-1}})\), then instead of sampling from \(\pi_{k}(z_{t_{k}})\) directly, which can be impossible, one can consider the approximate target density \[\pi_{k}^{N}(z_{t_{k}})\propto g_{k}(z_{t_{k}},y_{t_{k}})\frac{1}{N}\sum_{i=1}^ {N}f_{k}(z_{t_{k-1}}^{(i)},z_{t_{k}}). \tag{7}\] One can then use any MCMC kernel (e.g. RWM) with invariant measure \(\pi_{k}^{N}\). Running the kernel for \(N\) steps yields an \(N-\)sample approximation of \(\pi_{k}(\varphi)\) as \[\widehat{\pi}_{k}^{N}(\varphi):=\frac{1}{N}\sum_{i=1}^{N}\varphi(z_{t_{k}}^{(i )}).\] The method is summarized in Algorithm 1 (see Appendix A for a detailed pseudocode). We note that in our implementations, we initialize the MCMC chains (at time \(k\geq 2\)) by picking one of the samples from \(\widehat{\pi}_{k-1}^{N}\) uniformly at random. 1. Initialize: For any given initial distribution on \(\mathsf{Z}\) run a MCMC kernel of invariant measure \(\pi_{1}\) for \(N\) steps. Keep in memory \(\widehat{\pi}_{1}^{N}(\varphi)\). Set \(k=2\). 2. Update: For any given initial distribution on \(\mathsf{Z}\) run a MCMC kernel of invariant measure \(\pi_{k}^{N}\) for \(N\) steps. Keep in memory \(\widehat{\pi}_{k}^{N}(\varphi)\). Set \(k=k+1\). If \(k=n+1\) go to step 3. otherwise return to the start of step 2.. 3. Return the approximations \(\{\widehat{\pi}_{k}^{N}(\varphi)\}_{k\in\{1,\ldots,n\}}\). **Algorithm 1** Sequential MCMC (SMCMC) Method for Filtering for \(n\) time steps. The convergence of Algorithm 1 has been first discussed in [23]. 
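Before turning to its convergence properties, here is a minimal sketch of Algorithm 1 for a toy scalar state-space model with Gaussian dynamics and observations. The model, the random-walk step size, the chain length and the function names (`rwm`, `smcmc`) are illustrative assumptions; burn-in is omitted, and the mixture target in (7) is evaluated at the full \(\mathcal{O}(N^{2})\) cost rather than with the cheaper variant discussed below.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar model (illustrative only): Z_k = a Z_{k-1} + sigma_z W_k, Y_k = Z_k + sigma_y V_k.
a, sigma_z, sigma_y = 0.9, 0.5, 0.3


def log_f(z_prev, z):            # log transition density f_k (constants dropped)
    return -0.5 * ((z - a * z_prev) / sigma_z) ** 2


def log_g(z, y):                 # log likelihood g_k (constants dropped)
    return -0.5 * ((y - z) / sigma_y) ** 2


def rwm(log_target, z_init, n_iter, step):
    """Random-walk Metropolis kernel run for n_iter steps."""
    z, lp, out = z_init, log_target(z_init), np.empty(n_iter)
    for i in range(n_iter):
        prop = z + step * rng.standard_normal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) <= lp_prop - lp:     # accept
            z, lp = prop, lp_prop
        out[i] = z
    return out


def smcmc(ys, z0, N=500, step=0.5):
    """Sequential MCMC (Algorithm 1): one RWM chain per observation time."""
    # Time 1: target pi_1(z) proportional to f_1(z0, z) g_1(z, y_1).
    samples = rwm(lambda z: log_f(z0, z) + log_g(z, ys[0]), z0, N, step)
    means = [samples.mean()]
    for y in ys[1:]:
        prev = samples
        # Approximate target pi_k^N(z) prop. to g_k(z, y) * (1/N) sum_i f_k(prev_i, z).
        def log_target(z, prev=prev, y=y):
            mix = np.logaddexp.reduce(log_f(prev, z)) - np.log(len(prev))
            return log_g(z, y) + mix
        init = prev[rng.integers(len(prev))]          # start from a previous sample
        samples = rwm(log_target, init, N, step)
        means.append(samples.mean())
    return np.array(means)


# Generate synthetic data and run the filter.
T, z, zs, ys = 20, 0.0, [], []
for _ in range(T):
    z = a * z + sigma_z * rng.standard_normal()
    zs.append(z)
    ys.append(z + sigma_y * rng.standard_normal())
print(np.c_[zs, smcmc(np.array(ys), z0=0.0)])
```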
[23, Proposition 1] gives \(\mathbb{L}_{p}-\)bounds (\(p\geq 1\)) on the differences of \(\widehat{\pi}_{k}^{N}(\varphi)-\pi_{k}(\varphi)\) of \(\mathcal{O}(N^{-1/2})\), hence almost sure convergence of the estimators \(\widehat{\pi}_{k}^{N}(\varphi)\) holds. It is important that the MCMC kernel possesses effective ergodicity properties and the better the kernel mixes, the better the approximations \(\widehat{\pi}_{k}^{N}\) will be. The method as presented in Algorithm 1 is of cost \(\mathcal{O}(dN^{2})\) per update step. The cost can easily reduced using subsampling of the samples used in (7) and even cut to \(\mathcal{O}(dN)\) using a simple auxiliary variable method on the sample indicator, which is what we do in practice; see Appendix A for details. ### State-Space Models with Unknown Observation Locations The approach detailed in the previous section is not straightforward to extend to the model with unknown observer locations in Section 2.2. The first issue is the integral in (5) is generally an intractable formula to compute. One can replace the time integral by a simple Euler approximation with step size \(\tau_{k}=(t_{k}-t_{k-1})/L\), \(L\geq 2\): \[\tilde{X}_{t_{k}}^{j}=\tilde{X}_{t_{k-1}}^{j}+\sum_{l=0}^{L-1}h(\tilde{X}_{t_{k -1}+l\tau_{k}}^{j},Z_{t_{k-1}+l\tau_{k}})\ \tau_{k} \tag{8}\] which still depends on a discrete path of the unobserved process. The next issue has to do with setting an appropriate target distribution for the MCMC sampler analogous to (7). At time \(t_{k-1}\), we have simulated samples \(\{(z_{t_{k-1}}^{(m)},(x_{t_{k-1}}^{(m)})\}_{m=1}^{N}\) to plug in such an expression but the likelihood \(G\) requires setting a single value for \(x_{t_{k-1}}\) (as we can only have one target distribution for the MCMC chain). So we make one final approximation, and use a predicted value following: \[\overline{x}_{t_{k}}^{j}=\overline{x}_{t_{k-1}}^{j}+\mathbb{E}\Big{[}\sum_{l=0 }^{L-1}h(\tilde{X}_{t_{k-1}+l\tau_{k}}^{j},Z_{t_{k-1}+l\tau_{k}})\tau_{k}\Big{|} (Y_{t_{p}})_{p\leq k-1},Z_{t_{k-1}}=z_{t_{k-1}}\tilde{X}_{t_{k-1}}^{j}= \overline{x}_{t_{k-1}}^{j}\Big{]} \tag{9}\] with the expectation taken w.r.t. \(\big{(}\tilde{X}_{t_{k-1}+(l-1)\tau_{k}}^{j},Z_{t_{k-1}+(l-1)\tau_{k}}\big{)} _{1<l\leq L}\) and can be approximated using plain Monte Carlo and propagating the dynamics of \(Z_{t}\) jointly with (8). This will provide an approximation of the spatial locations in this manner for the next step without the need of elaborate simulation methods (like MCMC) that consider all of these discretization points. The main requirement is as before that the dynamics of \(Z_{t}\) can be sampled, at least up-to an approximation. To illustrate this we present the procedure in Example 3.1 below for the case of Example 2.1. We now proceed to present the analog of (7) appealing to the prediction-updating structure in (3). The target distribution for the \(k\)-th MCMC procedure will be \[\overline{\pi}_{k}^{N}(z_{t_{k}})\propto G\left((z_{t_{k}},\overline{x}_{t_{k }}),y_{t_{k}}\right)\frac{1}{N}\sum_{i=1}^{N}f_{k}(z_{t_{k-1}}^{(i)},z_{t_{k}}).\] Running the MCMC kernel for \(N\) steps yields an \(N-\)sample approximation of \(\overline{\pi}_{k}(\varphi)=\int_{\mathsf{Z}}\varphi(z_{t_{k}})\overline{\pi} _{k}(z_{t_{k}})dz_{t_{k}}\) as \[\widehat{\overline{\pi}}_{k}^{N}(\varphi):=\frac{1}{N}\sum_{i=1}^{N}\varphi(z_ {t_{k}}^{(i)}).\] The procedure is summarized in Algorithm 2. 
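To illustrate the extra step needed by Algorithm 2, here is a minimal sketch of the drifter-location prediction in (8)-(9): an ensemble of signal trajectories is pushed through the inner Euler steps, each copy of the drifters is advected with the corresponding sampled velocities, and the copies are averaged to give \(\overline{x}_{t_{k}}\). The toy dynamics `propagate_signal`, the velocity read-out `velocity` and all constants are illustrative placeholders (in the paper the velocities come from the shallow-water state).

```python
import numpy as np

rng = np.random.default_rng(2)
L, tau = 10, 0.1            # inner Euler steps and step size (toy values)
N, n_drifters = 200, 3      # Monte Carlo samples and number of drifters


def propagate_signal(z, tau):
    """Placeholder for one deterministic step of the signal dynamics Phi."""
    return z + tau * np.sin(z)


def velocity(x, z):
    """Placeholder for h(x, Z): the velocity felt by a drifter at position x."""
    return np.stack([np.cos(x[..., 0] + z[0]), np.sin(x[..., 1] + z[1])], axis=-1)


def predict_locations(z_samples, x_bar):
    """Mean predicted drifter positions x_bar_{t_k}, eq. (9), by plain Monte Carlo.

    z_samples : (N, d) samples approximating the filter at time t_{k-1}.
    x_bar     : (n_drifters, 2) previous mean locations x_bar_{t_{k-1}}.
    """
    x_ens = np.repeat(x_bar[None, :, :], len(z_samples), axis=0)   # (N, n_d, 2)
    z_ens = z_samples.copy()
    for _ in range(L):                        # inner Euler recursion, eq. (8)
        for r in range(len(z_samples)):
            x_ens[r] += tau * velocity(x_ens[r], z_ens[r])
        z_ens = propagate_signal(z_ens, tau)  # deterministic signal step within the interval
    return x_ens.mean(axis=0)                 # average over the N copies, eq. (9)


z_samples = rng.standard_normal((N, 4))       # stand-in for MCMC samples of Z_{t_{k-1}}
x_prev = rng.uniform(0.0, 1.0, size=(n_drifters, 2))
print(predict_locations(z_samples, x_prev))
```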
We remark that Algorithm 2 will have an intrinsic bias as it will (asymptotically) in \(N\) approximate \(\overline{\pi}_{k}\) and not the true filter. When the cost of the MCMC kernel is \(\mathcal{O}(d)\) per iteration, then the cost of Algorithm 2 is \(\mathcal{O}(dN^{2}+LdN)\) per time step and the quadratic cost in \(N\) can be removed as discussed previously to obtain a cost of \(\mathcal{O}((L+1)dN)\). **Example 3.1** (Example 2.1 continued).: _To compute \(\overline{x}_{t_{1}}^{j}\), this simply comprises of running the dynamics for \(l\in\{0,\ldots,L-2\}\)_ \[Z_{(l+1)\tau_{1}}^{(r)}=\Phi(Z_{l\tau_{1}}^{(r)},l\tau_{1},(l+1)\tau_{1}), \qquad r\in\{1,\cdots,N\} \tag{10}\] and then (in parallel if possible) computing \(\overline{x}_{t_{1}}^{j}\) using the recursion for \((l,j)\in\{0,\ldots,L-1\}\times\{1\cdots,N_{d}\}\)_ \[x_{(l+1)\tau_{1}}^{j,(r)}=x_{t_{\tau_{1}}}^{j,(r)}+h\left(x_{l\tau_{1}}^{j,(r)}, Z_{l\tau_{1}}^{(r)}\right)\ \tau_{1}, \tag{11}\] _with initial condition \(x_{t_{0}}^{j,(r)}=\overline{x}_{t_{0}}^{j}=x_{0}^{j}\) and setting_ \[\overline{x}_{t_{1}}^{j}=\frac{1}{N}\sum_{r=1}^{N}x_{t_{1}}^{j,(r)}. \tag{12}\] _Then one runs a MCMC kernel with invariant measure:_ \[\overline{\pi}_{1}(z_{t_{1}})\propto f_{1}(z_{0},z_{t_{1}})G\left((z_{t_{1}}, \overline{x}_{t_{1}}),y_{t_{1}}\right)\] _The MCMC kernel is run for \(N\) steps and an \(N-\)sample proxy of \(\overline{\pi}_{1}(\varphi)=\int_{\mathbb{Z}}\varphi(z_{t_{1}})\overline{\pi} _{1}(z_{t_{1}})dz_{t_{1}}\), assuming it is well-defined is_ \[\widehat{\overline{\pi}}_{1}^{N}(\varphi):=\frac{1}{N}\sum_{i=1}^{N}\varphi( z_{t_{1}}^{(i)})\] _where \(z_{t_{1}}^{(i)}\) is the \(i^{th}-\)sample of the MCMC chain. At a subsequent time point \(k\geq 2\), we now need to approximate the recursion (9). This can be achieved, by running for \((l,r)\in\{0,\ldots,L-2\}\times\{1,\ldots,N\}\)_ \[Z_{t_{k-1}+(l+1)\tau_{k}}^{(r)}=\Phi(Z_{t_{k-1}+l\tau_{k}}^{(r)},t_{k-1}+l \tau_{k},t_{k-1}+(l+1)\tau_{k}) \tag{13}\] _and then for \((l,j,r)\in\{0,\ldots,L-1\}\times\{1\cdots,N_{d}\}\times\{1,\ldots,N\}\)_ \[x_{t_{k-1}+(l+1)\tau_{k}}^{j,(r)}=x_{t_{k-1}+l\tau_{k}}^{j,(r)}+h(x_{t_{k-1}+ l\tau_{k}}^{j,(r)},Z_{t_{k-1}+l\tau_{k}}^{(r)})\ \tau_{k}, \tag{14}\] _with initial condition \(x_{t_{k-1}}^{j,(r)}=\overline{x}_{t_{k-1}}^{j}\), before finally obtaining_ \[\overline{x}_{t_{k}}^{j}=\frac{1}{N}\sum_{r=1}^{N}x_{t_{k}}^{j,(r)}. \tag{15}\] Numerical Simulations We now illustrate the performance of Algorithms 1 and 2 in three different cases: 1. A linear Gaussian model. The implementation of Algorithm 1 has not been investigated for high-dimensional filtering problems and we will show both efficiency and accuracy compared to competing ensemble methodologies. In contrast to how practitioners use the latter, we use moderately high number of ensembles (500-\(10^{4}\)) noting that for this model these methods are consistent and increasing number of ensembles improves \(accuracy\). 2. A Rotating Shallow-Water Model Observed at Known Locations. We use NOAA data to set the initial conditions and boundaries and then simulate \(Z_{t},x_{t}\) to provide observations. The point is to assess Algorithm 1 using synthetic driter locations and observations, but we note the simulation scenario is set using real data from NOAA to make the case study as realistic as possible. 3. A Rotating Shallow-Water Model Observed with Unknown Locations. 
We use real data for observer positions and velocities and show that Algorithm 2 is effective at estimating both the unknown velocity fields and observer locations. It is worth noting that in Case 1. the true filter is known and is obtained through the Kalman filter (KF), however, in Cases 2. and 3. the true filter is unknown, and therefore, in these two cases we compare our results to a reference that is chosen to be the hidden signal used to generate the observations in Case 2. and the prior distribution in Case 3. estimated using 50 different simulations of the shallow-water dynamics with noise as will be described below. ### A Linear-Gaussian Model We begin with Algorithm 1 for a Linear and Gaussian model in discrete time (recall that this is related to the model described in Section 2.1). Consider the model \[Z_{n+1} =AZ_{n}+\sigma_{z}W_{n+1},\quad W_{n+1}\stackrel{{ \text{i.i.d.}}}{{\sim}}\mathcal{N}_{d}(0,I_{d}),\quad n\in\{0,\cdots,T\}\] \[Y_{m} =CZ_{mL}+\sigma_{y}V_{m},\quad V_{m}\stackrel{{ \text{i.i.d.}}}{{\sim}}\mathcal{N}_{d}(0,I_{d}),\quad m\in\{1,\cdots,T/L\}\] where \(T\in\mathbb{N}\), \(L\in\mathbb{N}\) is the time frequency at which the system is observed, i.e. \(t_{j}=jL\), \(\mathsf{Z}:=\mathbb{R}^{d}\), \(\mathsf{Y}=\mathbb{R}\), \(d_{y}=d\), \(A\in\mathbb{R}^{d\times d}\) is a square matrix with the maximum eigenvalue is less than or equal to one, \(C\in\mathbb{R}^{d_{y}\times d}\) is defined as \(C=[C_{i,j}]\) with \[C_{i,j}=\left\{\begin{array}{ll}1&\text{if }j=i\,\hat{r}\\ 0&\text{otherwise}\end{array}\right.,\] where \(\hat{r}\) is the spatial frequency at which the coordinates of \(Z_{mL}\) are observed (e.g. if \(\hat{r}=3\), we only observe the \(3^{\text{rd}}\), \(6^{\text{th}}\), \(\cdots\) coordinates of \(Z_{mL}\).) We will compare Algorithm 1 with the mean of the KF, EnKF, ETKF and ESTKF. See [27] for the pseudo-codes. For the EnKF we use the matrix inversion lemma (Sherman-Morrison-Woodbury formula) when \(d_{y}\) is larger than the number of ensembles (see [22] for more details.) In the implementation of the KF, the computation of the Kalman gain involves inverting a matrix that is known as the pre-fit residual. It is important to note that for all the ensemble methods mentioned in this article, we do not directly compute the inverse of a matrix if it is being multiplied from left or right by another matrix or vector. Rather we reformulate the problem as finding the solution(s) of a linear system(s). For example, if we have \(X=A^{-1}B\) (or \(X=BA^{-1}\)), we solve for \(X\) in \(AX=B\) (or in \(A^{T}X^{T}=B^{T}\).) #### 4.1.1 Simulation Settings For the simulations we set \(T=500\), \(L=1\), \(\hat{r}=1\) (the system is fully observed), \(A=0.2I_{d}\), \(Z_{0}^{j}\sim-0.45\times\mathcal{U}_{[0,1]}\) (uniform distribution on \([0,1]\)), for \(j\in\{1,\cdots,d\}\), and \(\sigma_{z}=\sigma_{y}=0.05\), then compare the algorithms for different values of the state dimension \(d\). The absolute errors are defined as the absolute difference of the mean of the KF and the means of the other filters at every time step \(n\) and every state variable. We record the machine time once we have \(70\%\) of the errors below \(\sigma_{y}/2\). We note that there are other possible metrics that could be used to evaluate the performance of the methods, but the chosen metric here is to log the machine time when \(70\%\) of the absolute errors fall below a threshold value that we take it to be \(\sigma_{y}/2\) (other threshold values are possible). 
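As a point of reference for this benchmark, the following is a minimal sketch of the model construction and of the exact Kalman filter whose mean defines the absolute errors. The dimensions are reduced here purely for illustration, and none of the competing ensemble or SMCMC runs are reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
d, T, L, r_hat = 50, 100, 1, 1            # reduced sizes, illustrative only
sigma_z, sigma_y = 0.05, 0.05

A = 0.2 * np.eye(d)
obs_idx = np.arange(r_hat - 1, d, r_hat)  # observed coordinates r_hat, 2*r_hat, ...
C = np.eye(d)[obs_idx]
d_y = C.shape[0]

# Simulate the hidden signal and the observations.
z0 = -0.45 * rng.uniform(size=d)
z, zs, ys = z0.copy(), [], []
for n in range(1, T + 1):
    z = A @ z + sigma_z * rng.standard_normal(d)
    zs.append(z)
    if n % L == 0:
        ys.append(C @ z + sigma_y * rng.standard_normal(d_y))

# Exact Kalman filter; in the paper its mean is the reference against which
# the absolute errors of the other filters are measured.
m, P, kf_means = z0.copy(), np.zeros((d, d)), []
for y in ys:
    for _ in range(L):                                # predict over L signal steps
        m = A @ m
        P = A @ P @ A.T + sigma_z ** 2 * np.eye(d)
    S = C @ P @ C.T + sigma_y ** 2 * np.eye(d_y)      # pre-fit residual covariance
    K = np.linalg.solve(S, C @ P).T                   # Kalman gain, no explicit inverse
    m = m + K @ (y - C @ m)
    P = (np.eye(d) - K @ C) @ P
    kf_means.append(m)

# Quick sanity check against the simulated signal (the paper instead compares
# the KF mean with the means of the other filters).
err = np.abs(np.array(kf_means) - np.array(zs)[L - 1::L])
print("fraction of errors below sigma_y/2:", (err < sigma_y / 2).mean())
```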
#### 4.1.2 Results Table 1 displays the numerical results of our implementation. The table shows the state dimensions, the percentage of the absolute errors below \(\sigma_{y}/2\), the number of ensembles/particles (plus \(N_{burn}\) for SMCMC (Algorithm 1)), the number of independent simulations that were run, the machine time and the ratio of the machine time of ensemble methods w.r.t. SMCMC. In Figure 1, we plot the machine times for each method. In Figure 2, we fix the number of ensembles/particles to \(N=1000\). We choose \(N\) this way such that \(50\%\) of the absolute errors are less than \(\sigma_{y}/2\). The plot shows that even when the number of ensembles is low and fixed the SMCMC method is still dominating especially as the state dimension grows to high values. The results indicate that Algorithm 1 is superior to several established data assimilation methods, particularly in high-dimensional scenarios. For example, when \(d=16000\), achieving a success rate of \(70\%\) in reducing errors below \(\sigma_{y}/2\) using the EnKF is almost three times as expensive as using the SMCMC method. Additionally, the cost for achieving the same level of performance is four and a half times greater for the ETKF and five times greater for the ESTKF. These findings provide clear evidence for the superior performance of Algorithm 1 in high-dimensional data assimilation problems. **Remark 4.1**.: _Simulations were run on a Precision 7920 Tower Workstation with 52 cores and 512GB of memory. We note that the ensemble methods pseudocodes have several matrix multiplications and matrix inversions while in our method matrix multiplication is very limited and there is no matrix inversion. Each run of any of the ensemble methods is allowed to use all 52 cores for matrix multiplication and matrix inversion through multi-threading, while each run of our method only uses one core, however, we run 26 independent simulations of SMCMC in parallel and then average the results._ Figure 1: Comparison of machine times of SMCMC versus ensemble methods for different state dimensions \(d\) at fixed accuracy. This is the total time spent on all simulations when the fraction of absolute errors smaller than \(\sigma_{y}/2\) is approximately 70%. 
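A sketch of the parallelization pattern described in Remark 4.1, in which independent single-core repeats of a filter are run and their estimates averaged; `run_one_filter` is a hypothetical stand-in for one full run of any of the filters above, and the array shapes are illustrative.

```python
import numpy as np
from multiprocessing import Pool


def run_one_filter(seed, T=100, d=10):
    """Stand-in for one independent run of a filter (e.g. Algorithm 1).

    Returns a (T, d) array of filter means; here just noisy placeholders."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(T, d))


if __name__ == "__main__":
    M = 26                               # number of independent repeats
    with Pool() as pool:
        runs = pool.map(run_one_filter, range(M))
    averaged = np.mean(runs, axis=0)     # final estimate: average over the repeats
    print(averaged.shape)
```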
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \hline Methods & **KP** & **EnKF** & **ETKF** & **ESTKF** & **SMCMC** & **KF** & **EnKF** & **ETKF** & **ESTKF** & **SMCMC** & **KF** & **EnKF** & **ETKF** & **ESTKF** & **SMCMC** \\ \hline \hline \(d\) & \multicolumn{10}{c|}{**625**} & \multicolumn{10}{c|}{**1250**} & \multicolumn{10}{c|}{**4000**} \\ \hline \(\%\) of Errors \(\leq 0.5\sigma_{y}\) & \(0.720\) & \(0.729\) & \(0.729\) & \(0.729\) & \(0.729\) & \(0.720\) & \(0.721\) & \(0.719\) & \(0.716\) & & \(0.720\) & \(0.720\) & \(0.721\) & \(0.720\) \\ \hline \(\#\) of Ensembles, Particles & \(500\) & \(500\) & \(500\) & \(500\) & \(500+280\) & \(960\) & \(960\) & \(960\) & \(960\) & \(500+1400\) & & \(3000\) & \(3000\) & \(3000\) & \(1000+6000\) \\ \hline \(\#\) of Simsistencies & \(1\) & \(1\) & \(1\) & \(26\) & \(1\) & \(1\) & \(1\) & \(26\) & & \(1\) & \(1\) & \(1\) & \(1\) & \(26\) \\ \hline Simulation Time (sec) & \(18.6\) & \(102.5\) & \(44.1\) & \(39.4\) & \(26.4\) & \(61.7\) & \(268.7\) & \(160.3\) & \(144.4\) & \(90.1\) & \(1208.3\) & \(1628.8\) & \(1503.0\) & \(2048.8\) & \(806.3\) \\ \hline Time series & \(0.70\) & \(3.89\) & \(1.67\) & \(1.20\) & & \(0.68\) & \(2.98\) & \(1.78\) & \(1.60\) & & \(1.50\) & \(2.02\) & \(2.39\) & \(2.54\) & \\ \hline \hline \(\mathbf{Methods}\) & **KP** & **EnKF** & **ETKF** & **ESTKF** & **SMCMC** & **KF** & **EnKF** & **ETKF** & **ESTKF** & **SMCMC** & **KF** & **EnKF** & **ETKF** & **ESTKF** & **SMCMC** \\ \hline \hline \(d\) & \multicolumn{10}{c|}{**6250**} & \multicolumn{10}{c|}{**900**} \\ \hline \(\%\) of Errors \(\leq 0.5\sigma_{y}\) & \(0.720\) & \(0.703\) & \(0.701\) & \(0.71\) & & \(0.700\) & \(0.700\) & \(0.708\) & \(0.706\) & & \(0.722\) & \(0.731\) & \(0.731\) & \(0.728\) \\ \hline \(\#\) of Ensembles, Particles & \(4400\) & \(4400\) & \(4400\) & \(1000+10000\) & \(6500\) & \(6500\) & \(6500\) & \(1000+15500\) & & \(10000\) & \(1000\) & \(10000\) & \(10000\) & \(10000\) & \(10000\) & \(10000\) \\ \hline \(\#\) of Simulations & \(1\) & \(1\) & \(1\) & \(26\) & \(1\) & \(1\) & \(1\) & \(1\) & \(26\) & & \(1\) & \(1\) & \(1\) & \(1\) & \(26\) \\ \hline Simulation Time (sec) & \(3487.7\) & \(3002.0\) & \(457.4\) & \(4992.7\) & \(1905.5\) & \(8797.8\) & \(3968.5\) & \(11566.3\) & \(12114.2\) & \(4305.9\) & \(30774.2\) & \(26032.3\) & \(30133.7\) & \(38533.4\) & \(3985.7\) \\ \hline Time series & \(1.78\) & \(2.00\) & \(2.33\) & \(2.55\) & & \(2.02\) & \(2.15\) & \(2.64\) & \(2.77\) & & \(2.11\) & \(2.53\) & \(3.67\) & \(3.91\) \\ \hline \hline \(\mathbf{Methods}\) & **KP** & **EnKF** & **ETKF** & **ESTKF** & **SMCMC** & & & & & & & & & & \\ \hline \(\#\) of Ensembles, Particles & \(16000\) & \(16000\) & \(16200\) & \(2000+28000\) & \(\blacksquare\) & & & & & & & & & & \\ \hline \(\#\) of Simulations & \(1\) & \(1\) & \(1\) & \(1\) & \(26\) & & & & & & & & & & \\ \hline Simulation Time (sec) & \(43282.8\) & \(42053.8\) & \(60796.3\) & \(73985.9\) & \(14968.9\) & & & & & & & & & \\ \hline Time series & \(2.89\) & \(2.96\) & \(4.66\) & \(4.88\) & & & & & & & & & & \\ \hline \end{tabular} \end{table} Table 1: Comparison of SMCMC and ensemble methods for different state dimensions \(d\). ### Rotating Shallow-Water Model Observed at Known Locations We consider a signal of the type in Example 2.1, where the PDE is associated to the shallow water equations (SWEs). 
The SWEs we consider are as follows
\[\frac{\partial\zeta}{\partial t}+\frac{\partial(\eta u)}{\partial x}+\frac{\partial(\eta v)}{\partial y}=0,\]
\[\frac{\partial(\eta u)}{\partial t}+\frac{\partial}{\partial x}(\eta u^{2}+\tfrac{1}{2}g\eta^{2})+\frac{\partial(\eta uv)}{\partial y}=g\eta\frac{\partial H}{\partial x}+f_{1}\eta v,\]
\[\frac{\partial(\eta v)}{\partial t}+\frac{\partial(\eta uv)}{\partial x}+\frac{\partial}{\partial y}(\eta v^{2}+\tfrac{1}{2}g\eta^{2})=g\eta\frac{\partial H}{\partial y}-f_{1}\eta u,\]
where \((x,y)\in[\underline{L}_{x},\bar{L}_{x}]\times[\underline{L}_{y},\bar{L}_{y}]\), \(g\) is the gravitational acceleration, and \(f_{1}\) is the Coriolis parameter, assumed to vary linearly with \(y\) such that \(f_{1}=f_{0}+\beta(y-y_{0})\), where \(f_{0}=2\Omega\sin\psi_{0}\), \(\Omega=7.29\times 10^{-5}\,sec^{-1}\) is the rate of the earth's rotation, \(\psi_{0}\) is the central latitude of the region under study, \(y_{0}\) is the \(y\)-value at \(\psi_{0}\), and \(\beta\) is the meridional gradient of the Coriolis force at \(\psi_{0}\). Here \(\eta\) represents the depth of the water (sea free surface to sea bottom), \(H\) is the bathymetry, defined as the distance from the geoid to the sea bottom (positive downwards), and \(\zeta\) is the elevation of the free surface measured from the geoid (positive upwards); therefore \(\eta=\zeta+H\). \(u\) and \(v\) are the horizontal velocities in the \(x\) and \(y\) directions, respectively. The boundary conditions will be provided by the actual oceanographic data (based on a separate data assimilation procedure) and will be time varying. We use the finite-volume (FV) solution of the SWE [20, 28], which comprises a 2-stage Runge-Kutta method combined with a local Lax-Friedrichs FV scheme, with time step \(\tau_{k}=(t_{k}-t_{k-1})/L\), hence \(t_{k}=kL\), with \(t_{0}=0\); its output will consist of \((Z_{t})_{0\leq t\leq T}\), for some time \(T>0\). The details are presented in Appendix B.

Figure 2: Comparison of machine times of SMCMC versus ensemble methods for different state dimensions \(d\) when the number of ensembles/particles and accuracy are fixed. We set \(N=1000\) such that \(50\%\) of the absolute errors are less than \(\sigma_{y}/2\).

Let \(N_{x},N_{y}\in\mathbb{N}\) be the number of cells in the grid in the \(x\) and \(y\) directions, respectively, with \(\Delta_{x}\), \(\Delta_{y}\) the corresponding step sizes. The hidden signal at time \(t=t_{k-1}+l\tau_{k}\in[t_{k-1},t_{k})\) is the vector given by
\[Z_{t}=[(\eta_{i}^{t})_{1\leq i\leq N_{x}N_{y}},(u_{i}^{t})_{1\leq i\leq N_{x}N_{y}},(v_{i}^{t})_{1\leq i\leq N_{x}N_{y}}]^{\top}\in\mathbb{R}^{3N_{x}N_{y}},\]
where \(Z_{0}\) is known and \((\eta_{i}^{t})_{1\leq i\leq N_{x}N_{y}},(u_{i}^{t})_{1\leq i\leq N_{x}N_{y}},(v_{i}^{t})_{1\leq i\leq N_{x}N_{y}}\) are row vectors obtained from the approximate solver detailed in Appendix B, and \(\mathsf{Z}:=\mathbb{R}^{3N_{x}N_{y}}\). At prescribed times \((t_{k})_{k\geq 1}\) we add Gaussian noise, \((W_{t_{k}})_{k\geq 1}\), to the output of the numerical solution of the PDE. To preserve the boundary conditions the noise is constructed such that it is zero at the boundary.
In particular, for \(\eta^{t_{k}}\) and \(k\in\mathbb{N}\), we use \[[\Xi_{t_{k}}^{\eta}]_{l=1,s=1}^{N_{y},N_{x}}=\sum_{i=0}^{J-1}\sum_{j=0}^{J-1} \epsilon_{t_{k}}^{\eta,(i,j)}\sin\left(\frac{2\pi jy_{l}}{\bar{L}_{y}-\underline {L}_{y}}\right)\sin\left(\frac{2\pi ix_{s}}{\bar{L}_{x}-\underline{L}_{x}}\right) \tag{16}\] and similarly for \(u^{t_{k}},v^{t_{k}}\) design \(\Xi_{t_{k}}^{u},\Xi_{t_{k}}^{v}\) and then vectorize to get \[W_{t_{k}}=[\operatorname{Vec}(\Xi_{t_{k}}^{\eta})^{T},\operatorname{Vec}(\Xi_ {t_{k}}^{u})^{T},\operatorname{Vec}(\Xi_{t_{k}}^{v})^{T}]^{T}\] Here \(J\in\mathbb{N}\) is a user chosen number of Fourier modes, \(\epsilon_{t_{k}}^{\cdot(i,j)}\sim\mathcal{N}(0,\sigma^{2}/(i\lor j+1))\), for \(i,j\in\{0,\cdots,J-1\}\), where \(i\lor j\) here means the maximum of \(\{i,j\}\), and \(\sigma>0\), see Appendix C for the specific implementation approach. The observations in this model are obtained from a set of \(N_{d}\) drifters in the region of study that are assumed to be moving according to the velocity components of the solution of the SWE above. The observations (and their locations) are generated from the signal before running the filtering algorithm. The observation model is taken as \[Y_{t_{k}}=\mathscr{O}_{t_{k}}(Z_{t_{k}})+V_{t_{k}},\quad V_{t_{k}}\stackrel{{ \mathrm{i.i.d.}}}{{\sim}}\mathcal{N}_{d_{y}}(0,\sigma_{y}^{2}I_{d_{y}}), \quad t_{k}\in\mathsf{T},\] where \(\mathscr{O}_{t_{k}}:\mathsf{Z}\to\mathsf{Y}^{N_{d}}\) is an \(\mathbb{R}^{d_{y}}\)-vector valued function containing measurements of \((u_{i}^{t_{k}})\) and \((v_{i}^{t_{k}})\) from the signal \(Z_{t_{k}}\) at time \(t_{k}\) (we do not measure \(\eta^{t_{k}}\)). These are collected from drifters whose positions move according to a kinematic model, where in (8) we use the values from the velocity fields at each location and therefore set \(h(x_{t_{k}}^{j},Z_{t_{k}})=[u^{t_{k}}(x_{t_{k}}^{j}),v^{t_{k}}(x_{t_{k}}^{j})]\), \(j\in\{1,\cdots,N_{d}\}\). Due to the space discretization involved in the PDE, in practice the observation location is chosen as the closest point on the grid at each time. In Figure 3 we present the basic idea of locating the closest point on the grid via an illustration. First we locate the four grid points surrounding each drifter based on its location (according to (8) in red). Then we pick the closest grid point to each drifter. The set of picked grid nodes will correspond to the approximate spatial locations of the observations (see Figure 3) and similarly for the set of observations are the values of \((u_{i}^{t_{k}})\) and \((v_{i}^{t_{k}})\). Compared to the ideal physical quantities this observation model contains a discretization error that approaches zero as \(\Delta_{x},\Delta_{y}\to 0\). #### 4.2.1 Simulation Settings The region of simulation is a domain of the Atlantic Ocean restricted to the longitude and latitude intervals \([-51^{\circ},-41^{\circ}]\), \([17^{\circ},27^{\circ}]\), respectively. We use Copernicus Marine Services [11] to obtain the bathemetry \(H\), the sea surface height above geoid \(\overline{\eta}\), and the horizontal velocities \(\overline{u}\) & \(\overline{v}\), with a \(1/12\) degree horizontal resolution, at an hourly rate from 2020-03-01 to 2020-03-05. The values of \(\overline{\eta}\), \(\overline{u}\) & \(\overline{v}\) at time 00:00 2020-03-01 is used as an initial state \(z_{0}\). As for the boundary conditions, we also use the values of \(\overline{\eta}\), \(\overline{u}\) & \(\overline{v}\) (at the boundaries) interpolated in time. 
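The nearest-grid-point selection illustrated in Figure 3 can be sketched as follows; this is only an illustration with a toy grid and random velocity fields, not the authors' implementation, and the function name `observe` is hypothetical: each drifter position is snapped to the closest node of the regular grid, and the velocity components stored there form the observation vector.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regular grid and velocity fields (illustrative sizes only).
Nx, Ny = 121, 121
Lx0, Lx1, Ly0, Ly1 = 0.0, 1.0e6, 0.0, 1.0e6           # domain in metres
dx, dy = (Lx1 - Lx0) / (Nx - 1), (Ly1 - Ly0) / (Ny - 1)
u = rng.standard_normal((Ny, Nx))                      # u[j, i] at node (x_i, y_j)
v = rng.standard_normal((Ny, Nx))


def observe(drifter_xy):
    """Read (u, v) at the grid node closest to each drifter, cf. Figure 3.

    drifter_xy : (N_d, 2) array of drifter positions (x, y)."""
    i = np.clip(np.rint((drifter_xy[:, 0] - Lx0) / dx).astype(int), 0, Nx - 1)
    j = np.clip(np.rint((drifter_xy[:, 1] - Ly0) / dy).astype(int), 0, Ny - 1)
    return np.column_stack([u[j, i], v[j, i]])          # shape (N_d, 2)


drifters = rng.uniform([Lx0, Ly0], [Lx1, Ly1], size=(12, 2))
print(observe(drifters))        # stacked into the d_y = 2 * N_d observation vector
```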
For the observations, we assume that there are \(N_{d}=12\) drifters in the region during the period of simulation that observe \(u\) and \(v\). We obtain the drifters' initial locations from the set of data available in [15, 14]. Drifters stay at the surface of the water and move with the surface currents; their positions are tracked every 10 minutes to 1 hour (depending on the programme). They can move at up to \(2m/s\), i.e. up to \(100km\) a day on e.g. the Gulf Stream, but typically much less (\(<10km\)), especially away from the boundary currents. They provide Lagrangian horizontal velocity data near the surface (thus \(d_{y}=2N_{d}=24\)) at roughly hourly resolution. Then, 26 independent simulations of Algorithm 1 were run in parallel with the following parameters: \(N_{x}=121\), \(N_{y}=121\), \(d=3N_{x}N_{y}=4.3923\times 10^{4}\), \(\Delta_{x}=8.602\times 10^{3}\) meters, \(\Delta_{y}=9.258\times 10^{3}\) meters, \(\tau_{k}=60\) seconds for all \(k\in\mathbb{N}\), \(T=7.2\times 10^{6}\) seconds (i.e. for 33.3 hours), \(L=10\), \(J=8\), \(N=1200\), \(N_{burn}=200\), \(\sigma=2\times 10^{-4}\), \(\sigma_{y}=1.45\times 10^{-2}\).

#### 4.2.2 Results with synthetic drifter locations

In Figure 4 we show a histogram of the absolute errors, defined as the absolute difference between the values of the filter and the _hidden signal_ (from which the data is generated) at all state variables and all times. The histogram shows that 84% of the filter values are within \(\sigma_{y}/2\) (half the noise standard deviation in the observations) of the hidden signal values. Furthermore, in Figures 6-9 we present snapshots of the hidden signal, the filter and their difference at times 8hr, 20hr, 26hr and 32hr. Also shown in the snapshots are the tracks of the drifters, which were computed before implementing the filtering algorithm. The ratio of the number of observations to the number of state variables is \(24/43923=0.055\%\). Even with a small number of observations, the results show that the filtering technique presented here is quite effective. Finally, we note that these simulations took about 9.9 hours to run 26 independent repeats on 52 cores.

### Rotating Shallow-Water Model Observed at Unknown Locations

Here we consider the same model as in the previous section except that it is assumed that the spatial locations of the observational data are unknown, and we use real drifter data from NOAA. For \(t_{k}\in\mathsf{T}\), the set of observations is the measurements of \(u\) and \(v\) obtained by the drifters in the region of simulation at time \(t_{k}\). To evaluate the function \(G\left((z_{t_{k}},\overline{x}_{t_{k}}),y_{t_{k}}\right)\), which is a Gaussian density with mean \(\mathscr{O}_{t_{k}}(z_{t_{k}})\) and covariance matrix \(\sigma_{y}^{2}I_{d_{y}}\), at the given observations at time \(t_{k}\), we need to determine the mapping \(\mathscr{O}_{t_{k}}:\mathsf{Z}\to\mathsf{Y}^{N_{d}}\). This is done the same way as in the previous example, except that now we use the estimate of the mean spatial locations of the drifters as in Example 3.1.

Figure 3: The illustration depicts the process of selecting which state variables are observed based on the drifter's location. The red circles represent a drifter at various points in time, and the red curve indicates its track. The blue circles correspond to the nearest surrounding grid points at the times of observation.

#### 4.3.1 Simulation Settings

The simulation region is the same as it was in the preceding example.
[15, 14] provided the data used in this analysis, which showed the presence of 12 drifters in that region during the simulation period, along with hourly measurements of \(u\) and \(v\) that we also extrapolated over time. The mean error of the measurements during the simulation period, computed over all times and all drifters, is denoted by \(\overline{\sigma}_{y}=0.0145\), whereas the minimum error value is \(0.0012\) and the maximum error value is \(0.0375\). Here, we should point out two ways in which the data in this example differs from that in the preceding example: i) the data in this case is considered to have been obtained at unknown locations, whereas the data in the previous example was taken at known locations; ii) most importantly, the data in this example consists of real measurements, whereas the data in the other example is synthetic. Then, using the same parameters as before, we run 26 independent simulations of Algorithm 2 in parallel.

Figure 4: (Known locations example) Histogram of absolute errors: \(|\mathrm{Filter}-\mathrm{Signal}|\) at all state variables and at all times. The percentage of occurrence here is defined as the number of elements in the bin divided by the total number of elements \(d\times(T+1)\).

#### 4.3.2 Results with real data

In Figure 5 we show a histogram of the absolute errors, defined as before. The histogram shows that \(93.6\%\) of the filter values are within \(\overline{\sigma}_{y}/2\) of the reference signal values. All comparisons are with a **reference** signal that approximates the mean of the prior distribution; in this example it is taken as the mean of \(50\) independent runs of the SW dynamics with noise, using the same initial value \(Z_{0}\) and the same boundary conditions. Furthermore, in Figures 10-13 we present snapshots of the hidden signal, the filter and their difference at times \(8\)hr, \(20\)hr, \(26\)hr and \(32\)hr. Also shown in the snapshots are the tracks of the drifters in red and blue. Red tracks refer to the mean spatial locations of the drifters computed according to the reference signal, whereas the blue tracks refer to the spatial locations obtained from the data set in [15]. The latter are not used by the algorithm, which manages to estimate them on the fly. A comparison of the errors (third row) between Figures 10-13 and the known observer trajectory case cannot be made directly due to the differences in the reference and the hidden signal. In Figures 10-13 it is notable that, despite the lack of observer position trajectories, the difference between the posterior and prior means suggests informative likelihoods for \(u,v\), and that the posterior does manage to gain significant information for these variables. In contrast, for \(\eta\) these differences are smaller and learning occurs via the dependence of \(\eta\) on \(u,v\) in the dynamics. The results for this example again show that the filtering technique presented here is quite effective even when the spatial locations of the observations are assumed unknown. Finally, we note that these simulations took around 13.4 hours to run 26 independent repeats on 52 cores.

Figure 5: (Unknown locations example) Histogram of absolute errors: \(|\mathrm{Filter-Prior}|\) at all state variables and at all times. The percentage of occurrence here is defined as the number of elements in the bin divided by the total number of elements \(d\times(T+1)\).

### Acknowledgements

HR & AJ were supported by KAUST baseline funding.
The work of DC has been partially supported by European Research Council (ERC) Synergy grant STUOD-DLV-856408. NK was partially supported by a JP Morgan Chase AI Faculty award. ## Appendix A Pseudocode for Algorithm 1 with \(\mathcal{O}(dN)\) Cost Consider the model in Example 2.1 in addition to the observational model given by \[Y_{t_{k}}=\mathscr{O}_{t_{k}}(Z_{t_{k}})+V_{t_{k}},\quad V_{t_{k}}\stackrel{{ \mathrm{i.i.d.}}}{{\sim}}\mathcal{N}_{d_{y}}(0,\sigma_{y}^{2}I_{d_{y}}),\quad t _{k}\in\mathsf{T},\] where \(\mathscr{O}_{t_{k}}:\mathsf{Z}\to\mathsf{Y}^{N_{d}}\) is an \(\mathbb{R}^{d_{y}}\)-vector valued function. We give below the pseudocode for Algorithm 1 with a cost of \(\mathcal{O}(dN)\). In practice, one would run \(M\) independent runs of this algorithm in parallel then take the mean. 1. **Input:** Given the initial state \(Z_{0}=z_{0}\), the observations \(\{Y_{t_{k}}=y_{t_{k}}\}\), \(k\geq 1\), and the time frequency \(L\). 2. Initialize: If \(L\geq 2\), then for \(l=0,\cdots,L-2\) return \(Z_{(l+1)\tau_{1}}=\Phi(Z_{l\tau_{1}},l\tau_{1},(l+1)\tau_{1})\). Return \(\tilde{Z}_{t_{1}}:=\Phi(Z_{t_{0}},t_{0},t_{1})+W_{t_{1}}\), where \(W_{t_{1}}\sim\mathcal{N}_{d}(0,R)\). Then run a random-walk MCMC as in Algorithm 4 to sample \(N\) particles \(\{Z^{i}_{t_{1}}\}_{i=1}^{N}\) from \(\pi_{1}\) in (6), where the Markov chain is initialized by \(Z^{\prime}=\tilde{Z}_{t_{1}}\). Then, set \(\widehat{\pi}_{1}^{N}(\varphi)\leftarrow\frac{1}{N}\sum_{i=1}^{N}\varphi(z^{i }_{t_{1}})\) and set \(k=2\). 3. Update: If \(L\geq 2\), then for \(l=0,\cdots,L-2\) and \(i=1,\cdots,N\), return \(Z^{i}_{(l+1)\tau_{k}}=\Phi(Z^{i}_{l\tau_{k}},l\tau_{k},(l+1)\tau_{k})\) (in parallel if possible). Sample a uniform random integer \(j\) from \(\{1,\cdots,N\}\). Return \(\tilde{Z}^{j}_{t_{k}}:=\Phi(Z^{j}_{t_{k-1}},t_{k-1},t_{k})+W^{j}_{t_{k}}\), where \(W^{j}_{t_{k}}\sim\mathcal{N}_{d}(0,R)\). Then run a random-walk MCMC as in Algorithm 4 to sample \(N\) particles \(\{Z^{i}_{t_{k}}\}_{i=1}^{N}\) from \[\pi_{k}^{N}(z_{t_{k}})\propto g_{k}(z_{t_{k}},\mathbf{y}_{t_{k}})f_{k}(z^{j}_{ t_{k-1}},z_{t_{k}}),\] where the Markov chain is initialized with \(Z^{\prime}=\tilde{Z}^{j}_{t_{k}}\). Set \(\widehat{\pi}_{k}^{N}(\varphi)\leftarrow\frac{1}{N}\sum_{i=1}^{N}\varphi(Z^{ i}_{t_{k}})\). Set \(k\longleftarrow k+1\). If \(k=n+1\) go to the next step otherwise return to the start of step \(2\).. 4. **Output:** Return \(\{\widehat{\pi}_{k}^{N}(\varphi)\}_{k\in\{1,\cdots,n\}}\). **Algorithm 3** Pseudocode for Sequential MCMC Method for Filtering for \(n\) time steps. We note that when considering a SW model, for example, one must pay close attention to the selection of \(W^{\prime}\) in the MCMC step above. One way is to construct a noise similar to the one described in Section 4.2. In our simulation we set \(W^{\prime}\) as \(W_{t_{k}}\) in Section 4.2 except that the random Gaussians \(\epsilon^{\cdot,(i,j)}_{t_{k}}\) are sampled from \(\mathcal{N}(0,\sigma^{\prime 2}/(i\lor j+1))\) where \(\sigma^{\prime}=3\sigma\). This choice ensures that the acceptance rate for MCMC is in the range of 0.2-0.3. 1. Input: Initial point \(Z^{\prime}\) and target distribution \(\pi\). 2. First compute \(\pi_{\text{old}}\longleftarrow\pi(Z^{\prime})\). Put \(\mathsf{P}=\emptyset\), the empty set, and set \(m=1\). 3. Set \(Z^{p}\longleftarrow Z^{\prime}+W^{\prime}\), where \(W^{\prime}\sim\mathcal{N}_{d}(0,R^{\prime})\) for some covareiance matrix \(R^{\prime}\). 4. Compute \(\pi_{\text{new}}\longleftarrow\pi(Z^{p})\). 5. 
5. Sample \(u\sim\mathcal{U}[0,1]\). Compute \(\alpha=\min\{1,\pi_{\text{new}}/\pi_{\text{old}}\}\). If \(u\leq\alpha\), set \(Z^{\prime}\longleftarrow Z^{p}\) and \(\pi_{\text{old}}\longleftarrow\pi_{\text{new}}\). If \(m>N_{burn}\), add \(Z^{\prime}\) to the set of particles \(\mathsf{P}\). Set \(m\longleftarrow m+1\).
6. If \(m=N_{burn}+N\), return the set of \(N\) particles \(\mathsf{P}\); otherwise repeat Steps (3)-(5).

**Algorithm 4** Pseudocode for MCMC steps
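To make Algorithms 3 and 4 more concrete, the following is a minimal Python sketch of the random-walk Metropolis kernel and of a single filtering update built on top of it (shown for the case \(L=1\)). All function and variable names (`log_target`, `phi`, `sample_W`, `log_g`, `log_f`, etc.) are hypothetical placeholders for the quantities \(\pi_{k}\), \(\Phi\), \(W_{t_{k}}\), \(g_{k}\) and \(f_{k}\) defined above; the sketch works with log-densities for numerical stability and is an illustration under these assumptions, not the authors' implementation.

```python
import numpy as np

def random_walk_mcmc(log_target, z_init, sample_W_prime, n_particles, n_burn, rng):
    """Algorithm 4: random-walk Metropolis targeting exp(log_target)."""
    z = np.array(z_init, dtype=float)
    log_pi_old = log_target(z)
    particles, m = [], 1
    while len(particles) < n_particles:
        z_prop = z + sample_W_prime(rng)                      # step 3: propose
        log_pi_new = log_target(z_prop)                       # step 4
        if np.log(rng.uniform()) <= log_pi_new - log_pi_old:  # step 5: accept w.p. min(1, ratio)
            z, log_pi_old = z_prop, log_pi_new
        if m > n_burn:                                        # collect after burn-in
            particles.append(z.copy())
        m += 1
    return np.asarray(particles)

def filtering_update(particles_prev, y_k, phi, sample_W, log_g, log_f, n_burn, rng):
    """One Update step of Algorithm 3 at observation time t_k (L = 1 case)."""
    N = len(particles_prev)
    j = rng.integers(N)                          # uniform ancestor index
    mean_j = phi(particles_prev[j])              # Phi(Z^j_{t_{k-1}}, t_{k-1}, t_k)
    z_tilde = mean_j + sample_W(rng)             # chain initialisation, plays the role of tilde Z^j_{t_k}
    # target: pi_k^N(z) proportional to g_k(z, y_k) * f_k(z^j_{t_{k-1}}, z),
    # where log_f(mean_j, z) is the Gaussian transition log-density around the propagated ancestor
    log_target = lambda z: log_g(z, y_k) + log_f(mean_j, z)
    return random_walk_mcmc(log_target, z_tilde, sample_W, N, n_burn, rng)
```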
## Appendix B Numerical solution of the SWE

To write the SW equations in a compact form, we introduce the following vectors \[U=[\eta,\eta u,\eta v]^{\top},\quad A(U)=[\eta u,\eta u^{2}+\tfrac{1}{2}g\eta^{2},\eta uv]^{\top},\quad B(U)=[\eta v,\eta uv,\eta v^{2}+\tfrac{1}{2}g\eta^{2}]^{\top},\] \[C(U)=\Big{[}0,g\eta\frac{\partial H}{\partial x},g\eta\frac{\partial H}{\partial y}\Big{]}^{\top},\quad D(U)=[0,f\eta v,-f\eta u]^{\top}.\] As a result, we can write the SWEs as \[U_{t}+A(U)_{x}+B(U)_{y}=C(U)+D(U).\] Here \(A\) and \(B\) are the physical fluxes in the \(x\) and \(y\) directions, respectively. The spatial resolutions in the \(x\) and \(y\) directions are obtained via \(\Delta_{x}=(\bar{L}_{x}-\underline{L}_{x})/N_{x}\) and \(\Delta_{y}=(\bar{L}_{y}-\underline{L}_{y})/N_{y}\). We refer to the grid \[\big{\{}(x_{i},y_{j})\in[\underline{L}_{x},\bar{L}_{x}]\times[\underline{L}_{y},\bar{L}_{y}]:\,x_{i}=(i-1)\Delta_{x},\,y_{j}=(j-1)\Delta_{y},\,\,\,i\in\{1,\ldots,N_{x}\},\,\,j\in\{1,\ldots,N_{y}\}\big{\}}\] as the physical grid. Consider the uniform grid with finite volume cells \(I_{i,j}=[x_{i-1/2},x_{i+1/2}]\times[y_{j-1/2},y_{j+1/2}]\) centered at \((x_{i},y_{j})=(\frac{x_{i-1/2}+x_{i+1/2}}{2},\frac{y_{j-1/2}+y_{j+1/2}}{2})\), for all \((i,j)\in\{0,\ldots,N_{x}+1\}\times\{0,\ldots,N_{y}+1\}\), with grid size of \((N_{x}+2)\times(N_{y}+2)\). Then, the discretized solution of the SWEs on \([t_{k-1},t_{k})\) is as follows. For \(l\in\{0,\cdots,L-2\}\), we compute \[U_{i,j}^{t_{k-1}+(l+1)\tau_{k}}=U_{i,j}^{t_{k-1}+l\tau_{k}}-\frac{\tau_{k}}{\Delta_{x}}(A_{i+\frac{1}{2},j}^{*}-A_{i-\frac{1}{2},j}^{*})-\frac{\tau_{k}}{\Delta_{y}}(B_{i,j+\frac{1}{2}}^{*}-B_{i,j-\frac{1}{2}}^{*})+\tau_{k}C_{i,j}^{t_{k-1}+l\tau_{k}}+\tau_{k}D_{i,j}^{t_{k-1}+l\tau_{k}},\] where \(A^{*}\) and \(B^{*}\) are the numerical Lax-Friedrichs fluxes given by \[A_{i+\frac{1}{2},j}^{*} =\tfrac{1}{2}[A(U_{i,j}^{t_{k-1}+l\tau_{k}})+A(U_{i+1,j}^{t_{k-1}+l\tau_{k}})]-\tfrac{1}{2}\lambda_{i+\frac{1}{2},j,\max}^{x}[U_{i+1,j}^{t_{k-1}+l\tau_{k}}-U_{i,j}^{t_{k-1}+l\tau_{k}}]\] \[A_{i-\frac{1}{2},j}^{*} =\tfrac{1}{2}[A(U_{i,j}^{t_{k-1}+l\tau_{k}})+A(U_{i-1,j}^{t_{k-1}+l\tau_{k}})]-\tfrac{1}{2}\lambda_{i-\frac{1}{2},j,\max}^{x}[U_{i,j}^{t_{k-1}+l\tau_{k}}-U_{i-1,j}^{t_{k-1}+l\tau_{k}}]\] \[B_{i,j+\frac{1}{2}}^{*} =\tfrac{1}{2}[B(U_{i,j}^{t_{k-1}+l\tau_{k}})+B(U_{i,j+1}^{t_{k-1}+l\tau_{k}})]-\tfrac{1}{2}\lambda_{i,j+\frac{1}{2},\max}^{y}[U_{i,j+1}^{t_{k-1}+l\tau_{k}}-U_{i,j}^{t_{k-1}+l\tau_{k}}]\] \[B_{i,j-\frac{1}{2}}^{*} =\tfrac{1}{2}[B(U_{i,j}^{t_{k-1}+l\tau_{k}})+B(U_{i,j-1}^{t_{k-1}+l\tau_{k}})]-\tfrac{1}{2}\lambda_{i,j-\frac{1}{2},\max}^{y}[U_{i,j}^{t_{k-1}+l\tau_{k}}-U_{i,j-1}^{t_{k-1}+l\tau_{k}}],\] therefore, \[A^{*}_{i+\frac{1}{2},j}-A^{*}_{i-\frac{1}{2},j}=\tfrac{1}{2}[A(U^{t_{k-1}+l\tau_{k}}_{i+1,j})-A(U^{t_{k-1}+l\tau_{k}}_{i-1,j})]\\ -\tfrac{1}{2}\Big{(}\lambda^{x}_{i+\frac{1}{2},j,\max}[U^{t_{k-1}+l\tau_{k}}_{i+1,j}-U^{t_{k-1}+l\tau_{k}}_{i,j}]-\lambda^{x}_{i-\frac{1}{2},j,\max}[U^{t_{k-1}+l\tau_{k}}_{i,j}-U^{t_{k-1}+l\tau_{k}}_{i-1,j}]\Big{)}\\ B^{*}_{i,j+\frac{1}{2}}-B^{*}_{i,j-\frac{1}{2}}=\tfrac{1}{2}[B(U^{t_{k-1}+l\tau_{k}}_{i,j+1})-B(U^{t_{k-1}+l\tau_{k}}_{i,j-1})]\\ -\tfrac{1}{2}\Big{(}\lambda^{y}_{i,j+\frac{1}{2},\max}[U^{t_{k-1}+l\tau_{k}}_{i,j+1}-U^{t_{k-1}+l\tau_{k}}_{i,j}]-\lambda^{y}_{i,j-\frac{1}{2},\max}[U^{t_{k-1}+l\tau_{k}}_{i,j}-U^{t_{k-1}+l\tau_{k}}_{i,j-1}]\Big{)}\] where \(\lambda^{x}_{i^{*},j^{*},\max}\) is the maximum eigenvalue of the Jacobian matrix \(\partial A(U)/\partial U\) evaluated at \(U^{t_{k-1}+l\tau_{k}}_{i^{*},j^{*}}\). The eigenvalues are \(\{u_{i^{*},j^{*}}\pm\sqrt{g\eta_{i^{*},j^{*}}},u_{i^{*},j^{*}}\}\). We set \(\lambda^{x}_{i^{*},j^{*},\max}=|u_{i^{*},j^{*}}|+\sqrt{g\eta_{i^{*},j^{*}}}\). Similarly, \(\lambda^{y}_{i^{*},j^{*},\max}\) is the maximum eigenvalue of the Jacobian matrix \(\partial B(U)/\partial U\) evaluated at \(U^{t_{k-1}+l\tau_{k}}_{i^{*},j^{*}}\) and we take it to be \(|v_{i^{*},j^{*}}|+\sqrt{g\eta_{i^{*},j^{*}}}\). Then the hidden signal at time \(t=t_{k-1}+l\tau_{k}\in[t_{k-1},t_{k})\) is the vector given by \[Z_{t}=[(\eta^{t}_{i})_{1\leq i\leq N_{x}N_{y}},(u^{t}_{i})_{1\leq i\leq N_{x}N_{y}},(v^{t}_{i})_{1\leq i\leq N_{x}N_{y}}]^{\top}\in\mathbb{R}^{3N_{x}N_{y}},\] where \(Z_{0}\) is known. Here the vectors \((\eta^{t}_{i})_{1\leq i\leq N_{x}N_{y}},(u^{t}_{i})_{1\leq i\leq N_{x}N_{y}},(v^{t}_{i})_{1\leq i\leq N_{x}N_{y}}\) are obtained from the approximate solution \(U^{t}_{i,j}\), \((i,j)\in\{0,\cdots,N_{x}+1\}\times\{0,\cdots,N_{y}+1\}\).
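As an illustration of how one sub-step of this scheme could be coded, the NumPy sketch below applies the Lax-Friedrichs update in the \(x\) direction for the interior cells; the \(y\) direction is analogous. It is a simplified sketch only: the source terms \(C\) and \(D\), the boundary/ghost cells, and the interface value of \(\lambda\) (here taken as the larger of the two adjacent cell speeds) are handled schematically, and all names are assumptions rather than the authors' code.

```python
import numpy as np

g = 9.81  # gravitational acceleration

def flux_x(U):
    """Physical flux A(U) for U stored as an array of shape (3, Ny, Nx) holding (eta, eta*u, eta*v)."""
    eta, qx, qy = U
    u = qx / eta
    return np.stack([qx, qx * u + 0.5 * g * eta**2, qy * u])

def lax_friedrichs_x(U, tau, dx):
    """x-direction contribution -tau/dx * (A*_{i+1/2} - A*_{i-1/2}) for the interior cells."""
    eta, qx, _ = U
    lam = np.abs(qx / eta) + np.sqrt(g * eta)          # |u| + sqrt(g*eta), per cell
    A = flux_x(U)
    # lambda at an interface approximated by the larger of the two neighbouring cell values
    lam_face = np.maximum(lam[:, :-1], lam[:, 1:])     # (Ny, Nx-1) interfaces
    A_star = 0.5 * (A[:, :, :-1] + A[:, :, 1:]) - 0.5 * lam_face * (U[:, :, 1:] - U[:, :, :-1])
    dU = np.zeros_like(U)
    dU[:, :, 1:-1] = -(tau / dx) * (A_star[:, :, 1:] - A_star[:, :, :-1])
    return dU

# One sub-step would then combine the analogous y-direction term and the sources:
# U_next = U + lax_friedrichs_x(U, tau, dx) + lax_friedrichs_y(U, tau, dy) + tau * (C + D)
```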
## Appendix C Implementation of the noise

We show how the noise is computed for each of the \(\eta,u,v\) variables. As the procedure is the same in each case, we drop the \(\eta,u,v\) superscripts and define \[S_{1}=\Big{[}\sin\Big{(}\tfrac{2\pi jy_{l}}{\bar{L}_{y}-\underline{L}_{y}}\Big{)}\Big{]}_{l=1,j=0}^{N_{y},J-1}\in\mathbb{R}^{N_{y}\times J},\qquad S_{2}=\Big{[}\sin\Big{(}\tfrac{2\pi ix_{s}}{\bar{L}_{x}-\underline{L}_{x}}\Big{)}\Big{]}_{s=1,i=0}^{N_{x},J-1}\in\mathbb{R}^{N_{x}\times J}\] and let \(\epsilon_{t_{k}}\) be a random \(J\times J\) matrix with independent entries, \(\epsilon^{ij}_{t_{k}}\sim\mathcal{N}(0,\sigma^{2}/(i\lor j+1))\), for \(i,j\in\{0,\cdots,J-1\}\), with \(\sigma>0\). Then, \(\Xi_{t_{k}}=S_{1}\epsilon_{t_{k}}S_{2}^{T}\in\mathbb{R}^{N_{y}\times N_{x}}\), so one has \(\mathbb{E}[\mathrm{Vec}(\Xi_{t_{k}})]=0\) (the expectation is taken with respect to the random matrix \(\epsilon_{t_{k}}\)) and \[\mathbb{E}[\mathrm{Vec}(\Xi_{t_{k}})\mathrm{Vec}(\Xi_{t_{k}})^{T}]=\mathrm{diag}(\underbrace{Q,\cdots,Q}_{N_{x}-\mathrm{times}}),\] where \(Q=[Q^{rs}]_{r,s=1}^{N_{y}}\) with \[Q^{rs}=\sigma^{2}\sum_{i,j=0}^{J-1}S_{1}^{ri}S_{1}^{si}(S_{2}^{T}S_{2})^{jj}\frac{1}{i\lor j+1}.\] As a result, the covariance matrix of \(W_{t_{k}}\) is given by \(R=\mathrm{diag}(\underbrace{Q,\cdots,Q}_{3N_{x}-\mathrm{times}})\) with \(Q\) the same as above.
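A direct NumPy transcription of this construction for a single variable might look as follows; the grid convention (\(x_{s}=(s-1)\Delta_{x}\), \(y_{l}=(l-1)\Delta_{y}\)) and the function and argument names are assumptions made for the sketch.

```python
import numpy as np

def sample_noise_field(Nx, Ny, J, Lx, Ly, sigma, rng):
    """One draw of the spatially correlated noise Xi = S1 @ eps @ S2.T (shape Ny x Nx)
    for a single variable (eta, u or v).

    Lx, Ly : (lower, upper) domain limits in the x and y directions
    """
    dx = (Lx[1] - Lx[0]) / Nx
    dy = (Ly[1] - Ly[0]) / Ny
    x = np.arange(Nx) * dx                     # x_s = (s - 1) * dx
    y = np.arange(Ny) * dy                     # y_l = (l - 1) * dy
    j = np.arange(J)
    S1 = np.sin(2.0 * np.pi * np.outer(y, j) / (Ly[1] - Ly[0]))   # (Ny, J)
    S2 = np.sin(2.0 * np.pi * np.outer(x, j) / (Lx[1] - Lx[0]))   # (Nx, J)
    # eps[i, j] ~ N(0, sigma^2 / (max(i, j) + 1)), independent entries
    std = sigma / np.sqrt(np.maximum.outer(j, j) + 1.0)
    eps = rng.standard_normal((J, J)) * std
    return S1 @ eps @ S2.T
```

The model noise \(W_{t_{k}}\) used in the experiments would then stack three such independent fields, one per variable, into a single vector of length \(3N_{x}N_{y}\).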
2309.16133
Mask4Former: Mask Transformer for 4D Panoptic Segmentation
Accurately perceiving and tracking instances over time is essential for the decision-making processes of autonomous agents interacting safely in dynamic environments. With this intention, we propose Mask4Former for the challenging task of 4D panoptic segmentation of LiDAR point clouds. Mask4Former is the first transformer-based approach unifying semantic instance segmentation and tracking of sparse and irregular sequences of 3D point clouds into a single joint model. Our model directly predicts semantic instances and their temporal associations without relying on hand-crafted non-learned association strategies such as probabilistic clustering or voting-based center prediction. Instead, Mask4Former introduces spatio-temporal instance queries that encode the semantic and geometric properties of each semantic tracklet in the sequence. In an in-depth study, we find that promoting spatially compact instance predictions is critical as spatio-temporal instance queries tend to merge multiple semantically similar instances, even if they are spatially distant. To this end, we regress 6-DOF bounding box parameters from spatio-temporal instance queries, which are used as an auxiliary task to foster spatially compact predictions. Mask4Former achieves a new state-of-the-art on the SemanticKITTI test set with a score of 68.4 LSTQ.
Kadir Yilmaz, Jonas Schult, Alexey Nekrasov, Bastian Leibe
2023-09-28T03:30:50Z
http://arxiv.org/abs/2309.16133v2
# MASK4D: Mask Transformer for 4D Panoptic Segmentation ###### Abstract Accurately perceiving and tracking instances over time is essential for the decision-making processes of autonomous agents interacting safely in dynamic environments. With this intention, we propose Mask4D for the challenging task of 4D panoptic segmentation of LiDAR point clouds. Mask4D is the first transformer-based approach unifying semantic instance segmentation and tracking of sparse and irregular sequences of 3D point clouds into a single joint model. Our model directly predicts semantic instances and their temporal associations without relying on any hand-crafted non-learned association strategies such as probabilistic clustering or voting-based center prediction. Instead, Mask4D introduces spatio-temporal instance queries which encode the semantic and geometric properties of each semantic tracklet in the sequence. In an in-depth study, we find that it is critical to promote spatially compact instance predictions as spatio-temporal instance queries tend to merge multiple semantically similar instances, even if they are spatially distant. To this end, we regress 6-DOF bounding box parameters from spatio-temporal instance queries, which is used as an auxiliary task to foster spatially compact predictions. Mask4D achieves a new state-of-the-art on the SemanticKITTI test set with a score of 68.4 LSTQ, improving upon published top-performing methods by at least +4.5%. Project page: [https://vision.rwth-aachen.de/mask4d](https://vision.rwth-aachen.de/mask4d) ## I Introduction LiDAR is a popular sensor modality in the robotics community due to its ability to provide accurate 3D spatial information. It allows precise scene understanding of the 3D environment over time, which is essential for agents to safely navigate in ever-changing environments by predicting traffic movements and identifying potential hazards. In this work, we address the task of 4D panoptic segmentation on sequences of 3D point clouds. That is, given a sequence of LiDAR scans, the goal is to predict the semantic class of each point while consistently tracking object instances. The research community has made remarkable progress in advancing 3D vision tasks, fueled by the rapid advancement of deep learning methods [25, 32, 40] and the availability of large-scale benchmark datasets [4, 13, 37, 14]. Powerful feature extractors [40, 39, 49, 10] that exploit the rich information offered by LiDAR sensors have been proposed, leading to remarkable improvements in object detection [22, 34, 44], segmentation [49, 27, 40], and tracking [43, 46]. To accomplish holistic 3D scene understanding, 4D panoptic segmentation [1] has recently attracted attention. Traditionally, approaches follow the tracking-by-detection paradigm [30] which decouples 4D panoptic segmentation in the subtasks of semantic segmentation [40, 27], object detection [22] and tracking [43, 29]. While this separation of segmentation, detection, and tracking allows for independent improvements in each component, it tends to neglect joint learning of temporal relationships with semantic information. Significant advances in 4D panoptic segmentation methods address this problem by introducing model architectures that approach the task as a whole and predict semantic class labels for each point and temporally consistent instances. Recent methods generate instance predictions by grouping proposals in the 4D spatio-temporal volume [16, 20, 1] or learned embedding space [26]. 
However, current 4D panoptic segmentation methods still fundamentally rely on non-learned clustering methods to aggregate tracklets. At the same time, we observe a noticeable shift towards unifying tasks [41, 45, 19] and model architectures [6, 9] for holistic scene understanding. Central to this trend are mask transformers [32, 7, 3] that directly predict foreground masks and their associated semantic labels, eliminating the need for non-learned clustering strategies. Typically, models consist of two main components: a convolutional feature extractor and a transformer decoder. The convolutional feature extractor processes the point cloud and generates multi-scale features. The transformer decoder leverages these extracted features and iteratively refines queries that encode the spatial and semantic features. Over the course of multiple transformer decoder layers, the queries are refined in a sequential manner. Ultimately, these refined queries yield the final semantic class and mask predictions, allowing mask transformers to avoid hand-crafted grouping of votes or embeddings.

Fig. 1: **Spatially non-compact instances.** Naively adapted for 4D panoptic segmentation, mask transformer approaches reveal a crucial shortcoming: instance predictions tend to be spatially non-compact. As a result, the baseline model predicts two cars as a single object _(left)_. To overcome this limitation, we introduce Mask4D, which additionally regresses 6-DOF bounding box parameters for the instance trajectory. We find that optimizing these bounding box parameters provides a valuable loss signal that promotes spatially compact instances _(right)_.

Despite the remarkable performance of mask transformer architectures across diverse tasks, such as image segmentation [8, 9], video segmentation [7], and 3D scene segmentation [25, 32, 36], it remains open whether such a paradigm generalizes to the unique challenges of sparse and irregular 4D panoptic segmentation of point cloud sequences. To answer this question, our aim in this paper is to extend mask transformers to 4D panoptic segmentation of point clouds. Unlike prevailing top-performing approaches for 4D panoptic segmentation [1, 20, 48, 26], we directly predict foreground masks for _thing_ instances and _stuff_ regions and their associated semantic labels, bypassing the need for post-processing clustering which requires hand-engineered methods and fine-tuned hyperparameters. Therefore, in an initial study, we adapt Mask3D [32] for 4D panoptic segmentation. We follow recent approaches [1, 20, 17] by superimposing consecutive LiDAR scans into spatio-temporal point clouds that are processed by a sparse convolutional feature backbone [10]. Furthermore, we introduce point-wise spatio-temporal positional encoding in the transformer decoder [7]. Our findings indicate that these modifications are already competitive with specialized 4D panoptic segmentation methods [20]. Yet, a deeper examination reveals a significant flaw in mask transformer approaches for 3D point clouds: instances are not always spatially compact [32, 33]. Specifically, an instance query may connect multiple instances in the spatio-temporal point cloud, even if they are spatially distant but share semantic similarities (Fig. 1, _left_). Based on these findings, we introduce our novel approach called Mask4D, which is tailored to ensure spatially compact instances, thus unleashing the full potential of mask transformer architectures for 4D panoptic segmentation.
We achieve this by regressing 6-DOF bounding box parameters from the spatio-temporal queries, providing a loss signal to foster spatially compact instance predictions (Fig. 1, _right_). We evaluate our Mask4D model on the challenging SemanticKITTI 4D panoptic segmentation benchmark, achieving state-of-the-art by a significant margin of +4.5% on the test set among published methods. In summary, our contributions are fourfold: **(1)** We extend the state-of-the-art instance segmentation method Mask3D [32] to the 4D panoptic segmentation task. **(2)** In experiments, we discover a crucial shortcoming of this straightforward adaptation, namely, the tendency for spatio-temporal instance predictions to lack spatial compactness. **(3)** We propose Mask4D which effectively addresses the aforementioned limitation by introducing a box regression branch that promotes spatially compact instance predictions in an end-to-end trainable fashion, rather than relying on a geometric grouping mechanism with hand-tuned hyperparameters. **(4)** Mask4D achieves state-of-the-art performance on both the SemanticKITTI validation and test sets. ## II Related Work **Mask Transformers.** MaskFormer [9] proposes mask classification as a novel segmentation technique, showcasing its advantages over conventional pixel-based methods. Inspired by DETR [6], it combines CNNs and transformer networks in a universal segmentation architecture, eliminating the need for task-specific architectures, and streamlining development processes. Subsequently, Mask2Former [8] introduces masked attention in the transformer decoder, directing the attention only to relevant parts of the image, and incorporates high-resolution multi-scale features for segmenting smaller objects. This improves convergence and performance, achieving state-of-the-art results in 2D segmentation tasks [19, 47, 23]. The paradigm extends to the video instance segmentation [7] task, where Mask2Former effectively addresses temporal consistency, showcasing its universal applicability. Inspired by its success in 2D, Mask3D [32] applies the mask transformer architecture to the 3D domain by leveraging a sparse convolutional backbone [10], and eliminates the need for the predominantly used center-voting and clustering algorithms [12, 18, 42]. For LiDAR panoptic segmentation, MaskPLS [25] compares mask transformer architectures with adapted semantic segmentation approaches [10, 11, 16, 39, 5, 49], demonstrating the superiority of the mask transformer architecture. **4D panoptic segmentation.** 4D-PLS [1] introduces the 4D panoptic segmentation task, associated evaluation metrics, and their method for solving the task. It superimposes consecutive LiDAR scans to form a spatio-temporal point cloud, performs semantic segmentation, and follows a probabilistic approach for clustering instances based on their predicted centers. Along the same lines, 4D-DS-Net [17] and 4D-StOP [20] propose to cluster instances based on spatio-temporal proximity. 4D-DS-Net [17] extends DS-Net [16] to the 4D domain by applying a dynamic shifting module to spatio-temporal point clouds which iteratively refines the estimated instance centers and clusters the points in the spatio-temporal volume. 4D-StOP [20], on the other hand, replaces the probabilistic clustering with an instance-centric voting approach. Here, initial instance proposals are generated using center votes and then aggregated using learned geometric features. 
Building on the success of 4D-StOP, the concurrent work Eq-4D-StOP [48] predicts equivariant fields and incorporates the necessary layers into the models. This reinforcement of rotation equivariance ensures that the models account for rotational symmetries in the data, resulting in a more robust feature learning. Contrastingly, CA-Net [26] clusters instances in the feature space. It leverages an off-the-shelf 3D panoptic segmentation network [16] and uses extracted point features in a contrastive learning framework [15] to generate instance-wise consistent features, resulting in robust instance associations over time. Unlike previous approaches, Mask4D unifies segmentation and tracking by directly predicting the spatio-temporal instance masks and their corresponding semantic labels, bypassing the need for non-learned clustering approaches. ## III Method Inspired by the success of mask transformer approaches for 3D instance segmentation [25, 32, 36] and 2D video instance segmentation [7], we propose Mask4D - the first mask transformer-based approach for 4D panoptic segmentation. Building on Mask3D [32] for 3D instance segmentation, we introduce technical components that are key to enabling 4D panoptic segmentation of point clouds, _i.e._, predicting the semantic class of each point and consistently tracking instances over time. **Overview.** (Fig. 2) As the input to our model, we use a single voxelized point cloud consisting of superimposed consecutive LiDAR scans. We process the point cloud with a sparse convolutional _feature extractor_ (Fig.2, \(\boxempty\)), which generates a multi-resolution voxel representation for the _transformer decoder_\(\boxempty\). At the core of the model are spatio-temporal (ST) queries that encode geometric and semantic attributes of all instances in a sequence. To learn ST query features, we use a transformer decoder \(\boxempty\) that encompasses consecutive query refinement and mask modules. A _mask module_\(\boxempty\) takes the ST queries and predicts instance heatmaps, semantic class probabilities, and also regresses a bounding box for each instance trajectory. A _query refinement module_\(\boxempty\) updates the ST queries by cross-attending to multi-scale voxel representations. In the following, we provide a detailed description of each component involved. **Input Spatio-Temporal Point Cloud.** We represent a temporal sequence of point clouds as a single superimposed and voxelized point cloud. Similar to other approaches [1, 20], we use pose estimates of the ego vehicle [2, 3] to create a single scene containing points from multiple LiDAR scans in a global coordinate frame. Subsequently, this superimposed point cloud represents a spatio-temporal volume, denoted as \(\mathcal{P}\in\mathbb{R}^{M\times 3}\), which captures the temporal evolution of the scene. We partition this point cloud into equally sized cubic voxels, thus yielding the representation \(\mathcal{V}\in\mathbb{Z}^{K_{0}\times 3}\). This voxelization process not only keeps memory constraints in bounds but also allows for efficient processing of the resulting point cloud by sparse convolutional feature extractors [10]. **Feature Backbone.** (Fig. 2, \(\boxempty\)) The sparse convolutional feature extractor processes the voxelized point cloud \(\mathcal{V}\in\mathbb{Z}^{K_{0}\times 3}\) and extracts multi-scale features \(F_{r}\in\mathbb{R}^{K_{r}\times D_{r}}\) at various resolutions \(r\). 
This design allows the network to capture both local geometry and global context while ensuring the preservation of fine-grained spatial details.

**Mask Module.** (Fig. 2) Each of the \(N_{q}\) ST queries \(\mathbf{X}\in\mathbb{R}^{N_{q}\times D}\) represents a distinct instance over a time period. The mask module predicts the foreground mask of an instance throughout the sequence and the semantic class of the mask, as well as estimating the 6-DOF bounding box parameters of a trajectory. To generate this binary foreground mask, ST queries are processed by an MLP, and the queries are aligned with the feature space of the backbone's output. To obtain spatio-temporal masks at the finest resolution, we compute the dot product with the finest backbone features \(\mathbf{F}_{0}\), which - after sigmoid activation and thresholding - yields the final binary ST mask. In addition to these masks, we predict semantic class probabilities for each ST query via a linear projection layer to \(C+1\) dimensions, followed by a softmax normalization. A critical element for consistent tracking of instances over time is the bounding box regression branch. We feed the ST queries to an MLP followed by sigmoid activation to map the features to a 6-dimensional bounding box parameter space that encodes the normalized bounding box center coordinates \((x,y,z)\) as well as the box dimensions \((w,h,d)\).

Fig. 2: **Illustration of the Mask4D model.** We superimpose a sequence of \(T\) point clouds into a spatio-temporal representation which is subsequently processed by a sparse convolutional feature backbone. Given a multi-scale feature representation extracted from the feature backbone, the transformer decoder iteratively refines spatio-temporal (ST) instance queries. A mask module consumes ST queries and point features at various scales and predicts semantic class probabilities, instance heatmaps, and a 6-DOF bounding box for each ST query.

**Query Refinement Module.** (Fig. 2) Following Cheng _et al._[8], the query refinement blocks refine the ST queries \(\mathbf{X}\) by using the voxel features \(\mathbf{F}_{r}\) at various resolutions \(r\). First, a masked cross-attention layer [8] transforms voxel features \(\mathbf{F}_{r}\) into keys \(K\) and values \(V\), while ST queries are mapped to queries \(Q\). Here, ST queries attend only to the foreground voxels predicted by the previous mask module. We then apply self-attention between queries to ensure that multiple queries do not converge on a single instance. We use spatio-temporal Fourier positional encodings [38] to incorporate both spatial and temporal information into our transformer blocks. To do this, we sum spatial positional encodings based on the voxel positions and temporal positional encodings based on the LiDAR scan time frame [7].

**Hungarian Matching.** (Fig. 2) In a single forward pass, Mask4D determines \(N_{q}\) foreground masks along with their associated semantic class labels. Since both these predictions and the ground truth targets are not in any particular order, it is necessary to establish optimal one-to-one correspondences between them for model optimization. Typically, mask transformer methods [6, 8, 9] rely on the Hungarian Algorithm [21] for this purpose.
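Concretely, once a cost has been computed for every (prediction, target) pair using the criterion defined next, the optimal one-to-one assignment can be obtained with an off-the-shelf solver. The snippet below is a generic illustration using SciPy's `linear_sum_assignment`; it is not the authors' code, and the cost construction is only sketched in the trailing comment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(cost_matrix):
    """Optimal one-to-one matching between N_q predicted masks (rows) and the
    ground-truth masks of a sequence (columns), given a precomputed cost matrix.

    Predictions left unmatched are assigned to the "no-object" class during training.
    """
    pred_idx, tgt_idx = linear_sum_assignment(cost_matrix)
    return pred_idx, tgt_idx

# Sketch of usage, with the cost built from the mask and semantic terms of Eq. (1):
# cost = lambda_dice * dice_cost + lambda_bce * bce_cost + lambda_ce * ce_cost   # shape (N_q, n_targets)
# pred_idx, tgt_idx = hungarian_match(cost)
```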
The assignment cost for a predicted semantic mask, _i.e._, _thing_ instances and _stuff_ regions, and a target mask is defined as follows: \[\mathcal{C}=\mathcal{L}_{\text{mask}}+\mathcal{L}_{\text{sem}} \tag{1}\] where \(\mathcal{L}_{\text{mask}}{=}\lambda_{\text{disc}}\mathcal{L}_{\text{disc}} +\lambda_{\text{BCE}}\mathcal{L}_{\text{BCE}}\) is a weighted combination of the binary cross-entropy loss and the dice loss [28] for supervising foreground mask predictions and \(\mathcal{L}_{\text{sem}}{=}\lambda_{\text{CE}}\mathcal{L}_{\text{CE}}\) is the multi-class cross-entropy loss \(\mathcal{L}_{\text{CE}}\) for supervising mask semantics. The Hungarian algorithm is applied to solve the assignment problem and to find the globally optimal matches that minimize the total cost while ensuring that each target mask is assigned only once. The unmatched predicted masks are assigned to a "no-object" mask. **Training the model.** After establishing one-to-one correspondences, we can directly optimize each predicted mask. Our resulting loss consists of three loss functions: We keep the same binary mask loss \(\mathcal{L}_{\text{mask}}\) and the multi-class cross-entropy loss \(\mathcal{L}_{\text{sem}}\) from the Hungarian matching as referenced in Eq. 1. Observing that the \(\mathcal{L}_{\text{mask}}\) loss does not consider the distance of incorrectly added points to the mask, we introduce a new auxiliary bounding box regression loss \(\mathcal{L}_{\text{box}}\) which promotes spatially compact instances. We implement the bounding box loss as an L1 loss on the normalized axis-aligned box parameters. By optimizing the bounding box parameters from ST queries, the spatial location of their corresponding masks is supervised. Consequently, this helps to distinguish similar instances of the same class that are spatially separated. The overall loss is: \[\mathcal{L}=\mathcal{L}_{\text{mask}}+\mathcal{L}_{\text{sem}}+\mathcal{L}_{ \text{box}} \tag{2}\] **Extracting 4D panoptic segmentations.** Mask4D predicts \(N_{q}\) instance tracks as semantic heatmaps which are not necessarily non-overlapping. To assign a single semantic class label and instance ID to every point within the spatio-temporal point cloud, we proceed in the following manner: First, for each spatio-temporal query, we obtain semantic confidence by selecting the semantic class with the maximum probability. Second, this semantic confidence is multiplied with the corresponding instance heatmap, resulting in an overall confidence heatmap. We then assign each point to the query with the maximum confidence. **Tracking over long sequences.** To track instances across long LiDAR sequences that exceed memory limits, it is critical to associate instances across successive spatio-temporal point clouds. Therefore, we follow Aygun _et al._[1] and construct long sequences from short sequences in a way that ensures seamless associations. We establish a one-to-one match between predicted instances in the last and first frames between short sequences. ## IV Experiments ### _Comparing with State-of-the-Art Methods._ **Dataset.** We evaluate Mask4D on the well-established SemanticKITTI dataset [2], which is derived from the KITTI odometry dataset [14]. The dataset is split into training, validation, and test sets, and consists of over \(43,000\) LiDAR scans recorded with a Velodyne-\(64\) laser scanner capturing various urban driving scenarios. 
Each point in the LiDAR point clouds is densely annotated with one of \(C{=}19\) semantic labels, _e.g._, _car_, _road_, _cyclist_, as well as a unique instance ID that is consistent over time. For every time step, the dataset includes precise pose estimates of the ego vehicle, which is critical for the 4D panoptic segmentation task. **Metric.** The LiDAR Segmentation and Tracking Quality Metric (LSTQ) [1] is designed to evaluate the performance of 4D panoptic segmentation algorithms. It consists of two main components: classification and association scores. The classification score \(S_{cls}\) evaluates how well the algorithm performs in assigning correct semantic labels to the LiDAR points. It is calculated as the instance-agnostic mean intersection over union (mIoU) over all classes. The association score \(S_{assoc}\) evaluates the quality of point-to-instance associations considering the entire LiDAR sequence. It measures how well the algorithm tracks object instances over time without considering the semantic predictions. The overall LSTQ metric is computed as the geometric mean of the classification score and the association score: \(LSTQ=\sqrt{S_{cls}\times S_{assoc}}\). The geometric mean ensures that a high score can only be obtained if the approach performs well in both the classification and the association task. **Implementation Details.** In all experiments, we use \(N_{q}{=}100\) ST queries which are initialized with Farthest Point Sampled (FPS) point positions [31, 32]. Each spatio-temporal point cloud is formed by superimposing 2 consecutive LiDAR scans which are voxelized with a voxel size of \(5\) cm. The sparse feature backbone is a Minkowski Res16UNet34C [10]. We train the model for 30 epochs with a batch size of 4 using the AdamW optimizer [24] and the one-cycle learning rate scheduler [35] with a maximum learning rate of \(2\cdot 10^{-4}\). We perform standard data augmentation techniques including random rotation, translation, scaling, and instance population [44]. For the test set submission, we employ random rotation and translation as test time augmentations to enhance the semantic predictions. **Results.** In Tables I and II, we report the scores on the SemanticKITTI 4D panoptic segmentation validation and test set, respectively. Mask4D outperforms previous published approaches by at least +2.5 LSTQ on the validation set and +4.5 LSTQ on the test set. Notably, Mask4D demonstrates strong semantic understanding by achieving at least +9.0 S\({}_{\text{cls}}\) improvement over published methods on the test set. ### _Analysis Experiments._ **Spatio-Temporal Formation.** We achieve a globally consistent sequence of LiDAR scans by leveraging the precise pose estimates from the LiDAR sensor [3]. Considering that the sparse convolutional feature backbone (Fig. 2, \(\ in a significant improvement of +2.7 \(\mathrm{S_{assoc}}\), confirming our initial findings and supporting our hypothesis. Anticipating further improvements by replacing DBSCAN with a learned component, we introduce a specialized box regression branch 3 which promotes spatial awareness to better separate instances. This approach outperforms the baseline, both with and without DBSCAN, by a margin of up to \(+3.5\,\mathrm{S_{assoc}}\). 
Combining the box regression branch with DBSCAN yields our proposed method Mask4D 4, which not only ensures a strong association between instances (\(+4.2\,\mathrm{S_{assoc}}\)) but also achieves state-of-the-art semantic scene understanding, scoring \(66.9\,\mathrm{S_{cls}}\) on the SemanticKITTI validation. **Visualization of point features learned by Mask4D.** In Fig. 3, we show examples of PCA projected features \(F_{0}\) extracted from the finest resolution of Mask4D's feature backbone (Fig. 2, \(\Box\)). When trained without our suggested box loss, Mask4D shows less distinct separation of instance point features within the feature space (Fig. 2(a)). Conversely, the model optimized with the auxiliary task of 6-DOF bounding box regression for each instance trajectory shows a distinct separation of instance point features in the feature space (Fig. 2(b)). This indicates that Mask4D learns a more semantically meaningful feature space for the task of 4D panoptic segmentation leading to its superior association score \(S_{\mathrm{assoc}}\), as highlighted in Tab. IV. **Qualitative results.** In Fig. 3(a), we show qualitative results. We observe that Mask4D not only produces sharp instance masks but also reliably tracks the moving bicyclist # throughout the entire sequence. We also demonstrate a failure case of our tracking approach. As we process long sequences by stitching short sequences with overlaps, we incorrectly split tracks when an instance is not present in the overlapping LiDAR scan. For example, in Fig. 3(b), a pedestrian near the ego vehicle falls below the LiDAR's field of view. As a result, when the pedestrian becomes visible again, our tracking approach fails and predicts it as a new instance. ## V Conclusion Inspired by the success of recent mask transformer-based approaches, we have extended Mask3D to the task of 4D panoptic segmentation and have achieved promising results. In an in-depth analysis, we have found that Mask3D for 4D panoptic segmentation tends to produce spatially non-compact instances, resulting in poor association quality. To overcome this limitation, we have introduced Mask4D, the first transformer-based approach, that unifies segmentation and tracking of 3D point cloud sequences and is tailored to ensure spatially compact instances. To this end, Mask4D regresses 6-DOF bounding box parameters that are optimized to provide a loss signal to encourage spatially compact instance predictions. Through extensive experimental evaluations, we have demonstrated the effectiveness of Mask4D, achieving state-of-the-art performance on the task of 4D panoptic segmentation on SemanticKITTI validation and test sets. We anticipate follow-up work along the lines of direct prediction of instance and semantic labels. **Acknowledgments:** This project is partially funded by the Bosch-RWTH LHC project "Context Understanding for Autonomous Systems", the BMBF project 6GEM (16KISK036K) and the NRW project WestAI (01IS22094D). Compute resources were granted by RWTH Aachen under project supp0003. This work is part of the first author's master thesis. Fig. 4: **Qualitative Results.** We show color-coded instance tracks over 8 superimposed frames in a spatio-temporal point cloud and a failure where a pedestrian track is split due to an observation being outside of the LiDAR’s field of view. Fig. 3: **Visualization of learned point representations.** We use PCA to project the learned point representation of instances into RGB space. 
Our model, when trained without bounding box supervision, exhibits reduced variance in its feature representation for instances. In contrast, Mask4D effectively separates distinct instances in the feature space.
2309.07980
Identifying Concerns When Specifying Machine Learning-Enabled Systems: A Perspective-Based Approach
Engineering successful machine learning (ML)-enabled systems poses various challenges from both a theoretical and a practical side. Among those challenges are how to effectively address unrealistic expectations of ML capabilities from customers, managers and even other team members, and how to connect business value to engineering and data science activities composed by interdisciplinary teams. In this paper, we present PerSpecML, a perspective-based approach for specifying ML-enabled systems that helps practitioners identify which attributes, including ML and non-ML components, are important to contribute to the overall system's quality. The approach involves analyzing 59 concerns related to typical tasks that practitioners face in ML projects, grouping them into five perspectives: system objectives, user experience, infrastructure, model, and data. Together, these perspectives serve to mediate the communication between business owners, domain experts, designers, software and ML engineers, and data scientists. The creation of PerSpecML involved a series of validations conducted in different contexts: (i) in academia, (ii) with industry representatives, and (iii) in two real industrial case studies. As a result of the diverse validations and continuous improvements, PerSpecML stands as a promising approach, poised to positively impact the specification of ML-enabled systems, particularly helping to reveal key components that would have been otherwise missed without using PerSpecML.
Hugo Villamizar, Marcos Kalinowski, Helio Lopes, Daniel Mendez
2023-09-14T18:31:16Z
http://arxiv.org/abs/2309.07980v1
# Identifying Concerns When Specifying Machine Learning-Enabled Systems: A Perspective-Based Approach ###### Abstract Engineering successful machine learning (ML)-enabled systems poses various challenges from both a theoretical and a practical side. Among those challenges are how to effectively address unrealistic expectations of ML capabilities from customers, managers and even other team members, and how to connect business value to engineering and data science activities composed by interdisciplinary teams. In this paper, we present _PerSpecML_, a perspective-based approach for specifying ML-enabled systems that helps practitioners identify which attributes, including ML and non-ML components, are important to contribute to the overall system's quality. The approach involves analyzing 59 concerns related to typical tasks that practitioners face in ML projects, grouping them into five perspectives: system objectives, user experience, infrastructure, model, and data. Together, these perspectives serve to mediate the communication between business owners, domain experts, designers, software and ML engineers, and data scientists. The creation of _PerSpecML_ involved a series of validations conducted in different contexts: (i) in academia, (ii) with industry representatives, and (iii) in two real industrial case studies. As a result of the diverse validations and continuous improvements, _PerSpecML_ stands as a promising approach, poised to positively impact the specification of ML-enabled systems, particularly helping to reveal key components that would have been otherwise missed without using _PerSpecML_. **Keywords:** requirements engineering, machine learning-enabled systems, technology transfer, case study ## 1 Introduction Contemporary advances in Machine Learning (ML) and the availability of vast amounts of data have both given rise to the feasibility and practical relevance of incorporating ML components into software-intensive systems. In this paper, we refer to them as ML-enabled systems. These systems have their behavior dictated by data instead of relying on explicitly defined rules. In other words, data replaces code to some extent. This shift from engineering purely conventional software systems to ones which have ML-components woven-in poses new challenges from the viewpoint of software engineering (SE); for instance, challenges related to covering quality properties such as fairness and explainability [21], or challenges related to collaboration and mismatched assumptions in ML projects given the required multidisciplinary teams [31, 37]. These particularities typically demand extra effort to successfully develop ML-enabled systems. It is, therefore, not surprising to us that Gartner reports only 53% of ML projects to make it into production [19]. Within SE, Requirements Engineering (RE) is, in simple terms, the discipline that is meant to effectively translate stakeholder needs into requirements, constraints, and other information that defines what software systems should do under which conditions [13]. Due to the communication and collaboration-intensive nature, as well as inherent interaction with most other development processes, RE can provide the very foundation to address several of the challenges of building ML-enabled systems [27]. 
For example, when developing ML models, we need to identify relevant and representative data, validate models, and balance model-related user expectations (_e.g._, accuracy versus inference time); just as in RE for conventional software systems where we need to identify representative stakeholders, validate specifications with customers, and address conflicting requirements. This has also caught a new level of interest by the research community trying to better understand how RE techniques can be extended and what challenges need to be solved to reliably build ML-enabled systems [12]. Literature has shown that identifying quality metrics beyond accuracy, their specification, and understanding how they can be analyzed are not well-established yet in ML contexts [1, 2, 40, 45]. In fact, a recent roadmap for the future of SE [8] emphasizes that existing RE methods will need to be expanded to decouple ML problem and model specification from the system specification. On a more practical side, outside of BigTech companies with lots of experience, there is a focus on training more accurate ML models and their deployment, but rarely on the entire system including ML and non-ML components (_e.g._, how data is collected, how mistakes are dealt). This may lead to incomplete specifications of ML-enabled systems [23, 30], leaving most decisions to be made by data scientists [31, 48]. In order to help addressing these issues, we present _PerSpecML_, an approach for specifying ML-enabled systems that involves analyzing 59 concerns grouped into five perspectives: system objectives, user experience, infrastructure, model, and data. Together, these perspectives serve to mediate the communication between business owners, domain experts, designers, software and ML engineers, and data scientists. We created _PerSpecML_ by following a technology transfer model [20], which is recommended to foster successful transfer of technology from research to practice [50]. Throughout this process, we participated in real ML projects of a research and development (R&D) initiative [26], conducted a literature review on RE for ML [45], created a catalogue with an initial set of concerns [46], and proposed a candidate solution for specifying ML-enabled systems [47]. In this paper, we iteratively evaluate and improve [46, 47] by conducting three studies in different contexts: (i) in an academic validation involving two courses on SE for data science, (ii) with practitioners working with ML-enabled systems in an R&D initiative, and (iii) in two real industrial case studies conducted with a Brazilian large e-commerce company. The iterative validations and continuous improvements result in _PerSpecML_, our approach for specifying ML-enabled systems, and collectively corroborated its potential as a comprehensive tool for guiding practitioners in collaboratively designingML-enabled systems, enhancing their clarity, exploring trade-offs between conflicting requirements, uncovering overlooked requirements, and improving decision-making. Furthermore, we found that the participants involved in the validations gradually improved their perception of _PerSpecML_'s ease of use, usefulness, and intended to use. The remainder of this paper is organized as follows. Section 2 presents the background and related work. In Section 3, we detail how we conceive, evaluate and evolve _PerSpecML_. In Section 4, we present _PerSpecML_ and details its elements. In Sections 5 and 6, we describe the evaluation in academia and with industry representatives. 
Section 7 reports on industrial case studies. Sections 8 and 9 raise potential threats to validity and discuss our research findings. Lastly, in Section 10, we conclude the paper.

## 2 Background and Related Work

This section introduces a background on the core essence of ML and presents particularities and challenges when engineering ML-enabled systems that RE may address. We also describe related work.

### 2.1 ML in a Nutshell

ML is the study of computer algorithms that explore data to determine the best way to combine the information contained in the representation (training data) into a model that generalizes to data it has not already seen [35]. These systems, unlike non-ML ones, base their behavior on external data instead of explicitly programmed rules. However, data may not be adequate and lead to bad outcomes. The output of the ML model is a prediction, sometimes surprisingly accurate and sometimes surprisingly inaccurate. When an ML model is integrated into a functional system, it becomes an ML-enabled system. This entails a change in the way of designing, developing and testing these types of systems. Typically, ML model performance metrics comprise the primary goal of data scientists during ML model development. A good ML-enabled system is one in which the learning improves over time, particularly when the learning improves by getting feedback from users. This implies taking care of not only data and models, but also business context, user experience, infrastructure and integration of several services. When designing an ML-enabled system, it is important to understand the constraints on its operation. For example, where will the model run? What data will it have access to? How fast does it need to be? What is the business impact of a false positive? A false negative? How should the model be tuned to maximize business results? An ML model is just one component of an ML-enabled system as a whole. There is an incredible amount of work to be done between the development of an ML model, its incorporation into a system and the eventual sustainable customer impact [6, 23, 30]. Thinking about possible strategies to address these concerns increases the chance of designing and developing an ML-enabled system that meets customer needs, and can avoid often costly problems later.

### 2.2 RE for ML-Enabled Systems

Requirements Engineering (RE) constitutes approaches to understand the problem space and specify requirements that all stakeholders agree upon. As such, it concentrates on understanding what the actual problem is, what needs exist towards a system, and how to resolve potential conflicts; it is thus characterized by the involvement of interdisciplinary stakeholders and often results in uncertainty [49]. RE is often considered a crucial and challenging stage of any software project. Indeed, most of the problems in software systems with and without ML components come from poor requirements rather than faulty implementation. In this line, Kastner [27] states that an ML model can be seen as a specification based on training data, since data is a learned description of how the ML model shall behave. This means that the learned behavior of an ML-enabled system might be incorrect, even if the learning algorithm is implemented correctly. Practitioners argue that the incorporation of ML implies addressing additional qualities, setting more ambitious goals, dealing with a high degree of iterative experimentation, and facing more unrealistic assumptions [36].
It is therefore reasonable to assume that handling and resolving validation problems is (or should be) in scope of the role of a requirements engineer. We further argue that investing in RE can help to identify and mitigate problems early on. Nevertheless, establishing RE may be difficult due to the lack of guidance, tools, and techniques to support the engineering of ML-enabled systems [1, 45]. It is not surprising that ML-enabled systems are rarely built based on comprehensive specifications [31, 32] and that RE is seen by practitioners as the most difficult phase in ML projects [23]. In the last years, the literature on RE for ML has focused on issues with data requirements [9], process of data-driven projects [48], challenges of addressing non-functional requirements and particularities of certain quality attributes such as explainability, transparency and fairness [11, 21, 34]. Despite the important contributions in the field so far, the importance of specifying ML components in a way that customers can understand and analyze it to make adequate decisions is too often overlooked [15], and only a limited number of studies have looked into how to specify and document requirements for ML-enabled systems [1, 2, 40, 45]. For instance, Berry [7] states that the measures used to evaluate a learned machine, the criteria for acceptable values of these measures, and the information about the ML context that inform the criteria and trade-offs in these measures, collectively constitute the requirements specification of ML-enabled systems. ### Related Work We subsequently highlight research that has investigated what quality attributes should be analyzed and how practitioners can specify and document requirements for ML-enabled systems. We further take a more holistic RE perspective where an ML model is merely part of a larger ML-enabled system. Dorard [16] proposed a management template for ML, also known as ML canvas, that can be used to describe how ML systems will turn predictions into value for end-users, considering elements such as problem definition, data collection and preparation, feature engineering, model selection, evaluation metrics, deployment, and monitoring. This is probably the most spread approach for documenting ML-enabled systems given its simplified representation. However, this can be seen as a limitation since ML canvas may not capture all the intricate details and complexities of real-world projects, leading to potential oversights or gaps in the analysis. We seek to bridge these gaps with _PerSpecML_ by focusing on five different perspectives covering technical aspects and broader contextual concerns such as ethical considerations, legal constraints, and business implications, which can be crucial in real-world implementations. Rahimi _et al._[41] discussed on ideas for extracting and visualizing safety-critical requirements specifications and how a self-driving car would recognize pedestrians. The authors describe how RE can be useful to better understand the domain and context of a problem and how this helps to better select a high-quality dataset for model training and evaluation. 
We are aware that identifying gaps in the associated dataset and the constructed ML model is essential to improve the overall quality, fairness, and long-term effectiveness of the ML-enabled system, but at the same time other external components such as those related to the operation (_e.g._, data streaming) play an important role and can make the difference between an ML-enabled system that fits customer's needs and one that doesn't. In an effort to model a representation of data-driven systems, several works have been proposed. For instance, Chuprina _et al._[10] presented an artefact-based RE approach that encompasses four layers: context, requirements, system, and data. While the context specification captures the operational environment of a system, the requirements specification covers the user-visible black-box behaviour and characteristics such as explainability, transparency and ethics. On the other hand, the system specification defines the solution space and considers the system in a glass box view. The data-centric layer captures artifacts such as training and test datasets, and verifying algorithms. Similarly, Nakamichi _et al._[38] proposed a requirements-driven model to determine the quality attributes of ML-enabled systems that covers perspectives such as environment/user, system/infrastructure, model, data and quality characteristics. Despite the important contributions of these works, we found some limitations when compared to _PerSpecML_. Firstly, our intention is to be more specific, including more fine-grained attributes for each layer/perspective and modeling their relationships so that practitioners can have a complete view of the ML context and the software system as a whole. Secondly, we detail ML-related concerns that we faced in practice that were not considered as part of their proposals, such as concerns related to business requirements and user experience, which in our context showed being important for the success of ML-enabled systems. Another study we consider relevant is one conducted by Nalchigar [39]. They reported on an empirical study that evaluates a conceptual modeling framework for ML solution development for the healthcare sector. It consists of three views consumed by business people, data scientists, and data engineers. The business view shows how business goals are refined into decision goals and question goals, and how such questions can be answered by ML. The analytic design view models a solution in terms of algorithms, non-functional requirements and performance indicators. Lastly, the data preparation view conceptualizes the design of data preparation tasks in terms of data tables, operations, and flows. We also find this work as relevant as the previous ones, but we believe that other views related to the operation of ML-enabled systems such as infrastructure and user experience must be considered to support the activities of practitioners such as software and ML engineers, and designers. More recently, Siebert _et al._[43] presented a formal modelling definition for quality requirements in ML-enabled systems that allows to identify attributes and quality measures related to components such as model, data, system, infrastructure and environment. We consider this work strongly related to ours. For instance, the authors discusses quality attributes of an ML-enabled system beyond the ML components, just as _PerSpecML_ proposes. 
It is also explicit about considering multiple perspectives: of the entire system, and of the environment the system is embedded it. As a key difference between the works, we provide a diagram that summarizes the perspectives, the quality attributes/concerns, and shows their relationships. This seeks to facilitate effective communication and collaboration among stakeholders, provide a visual representation that can be easily understood by technical and non-technical team members, capture and document various aspects of the ML-enabled system's design, and support analysis and verification activities. Similarly, Maffey _et al._[33] proposed MLTE, an initial framework to evaluate ML models and systems that provides domain-specific language that teams, including model developers, software engineers, system owners, can use to express model requirements, an infrastructure to define, generate, and collect ML evaluation metrics, and the means to communicate results. While MLTE defines a general measurable process to evaluate ML systems, our proposal differs by going a step back and pointing out typical concerns involved when setting objectives and defining key components of ML-enabled systems. We see MLTE and _PerSpecML_ as tools that can complement each other by supporting practitioners from different angles, since they share the same purpose of early addressing practical problems faced by multidisciplinary teams throughout the ML development process. ## 3 Methodology for Conceiving _PerSpecML_ In this section, we describe the process we followed to design and evaluate _PerSpecML_ based on the technology transfer model introduced by Gorschek _et al._[20]. We used this model since our research method involved evaluations in both academia and industry with the aim of scaling the proposal up to practice, for which this model is recommended [50]. This mix of evaluations provides an opportunity to gather user feedback and incorporate it into the solution design. By involving stakeholders and practitioners in the evaluation process, we gathered valuable insights about their experience, needs, and preferences. This feedback informed iterations and refinements of the solution, making it more user-centric and aligned with actual user requirements. Fig. 1 outlines the seven steps of the model, which we will describe sequentially hereafter (while following the terminology of the transfer model). ### Step 1: Identify Improvement Areas Based on Industry Needs We followed the principle of constructivism [18] that advocates that a person needs to understand how something works before exploring the different ways to construct solution proposals. During the last four years, the first author has participated in research and development (R&D) projects designing and developing ML-enabled systems. These projects involve different types of ML tasks (_e.g._, supervised and unsupervised learning, computer vision) and algorithms (_e.g._, decision trees, logistic regression, neural networks).This experience allowed us to assess current practices, observing domain and business settings, understand typical industry needs for ML-enabled systems, and issues related to their development. 
More specifically, we identified i) how important the domain and business settings are to align the stakeholder needs, requirements, and constraints with the engineering and data science activities, ii) the interdisciplinary teams typically involved in ML projects, and iii) the lack of tools and documents that can capture key components when specifying ML-enabled systems.

Figure 1: Technology transfer model proposed by Gorschek _et al._[20]

### Step 2: Formulate a Research Agenda

In order to better define the problem and gain more insights into existing solutions and what needs to be created, we conducted a systematic mapping study on RE for ML [45], analyzed later literature reviews [1, 2, 40] and took advice from an industry-oriented publication based on more than a decade of experience in engineering ML-enabled systems [22]. Here, we identified, for instance, i) additional quality attributes of ML-enabled systems that practitioners should analyze, ii) the lack of studies focused on identifying key components of ML-enabled systems that may later be specified, and iii) the lack of studies evaluated in practice to validate their effectiveness and feasibility and to gather user feedback.

### Step 3: Formulate a Candidate Solution

After observing and gathering experience from real-world ML projects and reviewing the literature, we decided to focus on the creation of a candidate solution that can support the design of ML-enabled systems. As a first step, we proposed a catalog of 45 concerns to be analyzed by practitioners with the aim of identifying key components of ML-enabled systems [46]. The initial set of concerns was evaluated in a focus group with practitioners with different levels of experience from an R&D initiative, more specifically, three data scientists, two developers and three project leads. Their feedback was positive, as they perceived the catalog of concerns as prominent, and it allowed us to identify initial improvements. Fig. 2 shows the catalog.

Figure 2: Initial catalog of concerns [46]

From there, we used this catalog to create a candidate solution for specifying ML-enabled systems [47]. This candidate solution modeled the concerns in a structured manner by proposing a diagram that categorizes the concerns into perspectives, pointing out relationships and stakeholders involved in the analysis of the concerns. The purpose was to capture essential information about the desired functionality, components, and constraints of the ML-enabled system. Fig. 3 shows the diagram we proposed in a first effort to specify ML-enabled systems. In this paper, we iteratively improve this candidate solution by conducting three different evaluations that are briefly described hereafter. The resulting approach, which we baptized _PerSpecML_, is detailed in Section 4.

### Steps 4, 5, and 6: Evolution and Transfer Preparation through Validation

The goal of these steps was to refine the candidate solution towards its industry-readiness.
In order to accomplish this goal, we conducted three evaluations in different contexts, as suggested by the technology transfer model [20]: (i) with students from two courses on SE for data science specifying an ML-enabled system for a toy scenario (validation in academia), (ii) with practitioners working in an R&D initiative discussing specifications of ML-enabled systems built retroactively with stakeholders of real projects (static validation), and (iii) in two industrial case studies conducted with an e-commerce company, specifying real ML-enabled systems from scratch using the approach (dynamic validation). Note that, according to [20], the term 'static' refers to evaluating the candidate solution off-line, involving industry participants and real artifacts, but not as part of a real project life-cycle activity, which is the 'dynamic' one. With these iterative validations we seek to ensure early issue detection, user satisfaction, continuous improvement, adaptability and overall confidence in the final solution. Details on the validations are provided in Sections 5, 6, and 7.

Figure 3: Initial diagram for specifying ML-enabled systems [46]

### Step 7: Release the Solution

_PerSpecML_, which is presented in the next section, is now being adopted within the R&D initiative involved in the static validation to specify their ML-enabled system projects. In addition, the approach has been successfully transferred to the data science team responsible for the two case study projects involved in the dynamic validation. At first, the team decided to limit _PerSpecML_ to ML projects involving supervised learning tasks. The full adoption is pending results from other evaluations.

## 4 _PerSpecML_

In this section we present _PerSpecML_, a perspective-based approach for specifying ML-enabled systems that involves analyzing 59 concerns related to typical tasks that practitioners face in ML projects when defining and structuring these software systems. The concerns are grouped into five perspectives: system objectives, user experience, infrastructure, model, and data, providing a structured way to analyze and address different aspects of the ML-enabled system. Together, these perspectives align the activities between business owners, domain experts, designers, software and ML engineers, and data scientists. By using _PerSpecML_, practitioners are expected to be able to:

* **Enhance clarity:** Different stakeholders such as software engineers and data scientists may have varying goals, requirements, and concerns. Modeling perspectives and tasks helps to identify and explicitly represent these diverse viewpoints, ensuring a clear understanding of the ML-enabled system from multiple angles.
* **Foster collaboration:** Providing a perspective-based approach encourages collaboration and communication among stakeholders. It facilitates discussions and negotiations by providing a common structure to express and compare different viewpoints.
* **Identify trade-offs:** Perspectives and concerns enable the exploration of trade-offs between conflicting objectives and requirements. By explicitly modeling a high-level ML-enabled system workflow, practitioners can analyze the impact of design decisions on each perspective and make informed choices that balance different concerns.
* **Improve decision-making:** Understanding the tasks and concerns of both ML and non-ML components helps practitioners to evaluate and compare alternative solutions, enabling informed decision-making as the project progresses. ML projects are full of decisions that stakeholders must make.
* **Ensure completeness:** By considering multiple perspectives and concerns, practitioners can uncover hidden or overlooked requirements or risks. This helps in ensuring that the final ML-enabled system addresses the needs of all stakeholders and avoids potential pitfalls or shortcomings.

In the following, we detail each element of _PerSpecML_ that we evolved throughout the iterative validations we conducted. We describe the stakeholders, the perspectives and their concerns, the relationships between them, and the two final artifacts that structure the above elements: the perspective-based ML task and concern diagram and the corresponding specification template. We also describe the logical flow for executing _PerSpecML_.

### Stakeholders

Building successful ML-enabled systems requires a wide range of skills, typically achieved by bringing together team members with different specialties [22, 28]. Taking a holistic system view is essential because ML expertise alone is not sufficient, and even engineering skills to, for example, build pipelines and deploy ML models cover only small parts of the software system. We also need to be concerned about how to improve the experience of end-users in order to deal with unrealistic assumptions, and to align business value with ML technical activities in order to cover business requirements. Given this, we expect _PerSpecML_ to impact the work of business owners, domain experts, designers, software/ML engineers, data scientists and requirements engineers.

**Business owners (BO)** should understand what properties and components are essential to achieve the business objectives and be aware of the ML capabilities in order to set realistic goals and expectations. For instance: how can business objectives be connected with ML outcomes? What is the real cost involved in maintaining an ML-enabled system? What team and skills are needed to successfully build ML-enabled systems?

**Domain experts (DE)** play an important role in accurately defining the problem in a way that aligns with real-world scenarios and requirements, ensuring that the ML-enabled system addresses the specific challenges and objectives of the domain. By collaborating closely with domain experts, other stakeholders can benefit from their in-depth knowledge and insights to define relevant features and data sources, and to interpret the results of the ML model in a meaningful context.

**Designers (DG)** collaborate to translate complex ML concepts and model outputs into intuitive and easy-to-understand interfaces that provide value to end users. For instance: where and how will the ML outcomes appear? How often will they appear? How forcefully will they appear? A good user experience must be on the user's side and make them happy, engaged, and productive. Creating interactions with users to collect feedback and support continued learning is essential to ensure the quality of the ML model over time.

**Software/ML engineers (SE)** should understand how the entire system will interact with the ML model. They work on transforming the data scientists' research prototypes into ML-enabled systems that can handle large-scale data, ensure scalability, and meet performance concerns. For instance: what are the pros and cons of deploying an ML model as a back-end application or as a web service? Are online or batch predictions enough to meet user demand?
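To make this serving trade-off more tangible, the sketch below exposes the same toy classifier both as an online web-service endpoint and as a batch scoring function. It is only an illustrative assumption for discussion purposes; the Flask endpoint, the synthetic features, and the route name are not prescribed by _PerSpecML_.

```python
# Illustrative sketch only: the same toy model served online (per request) or in batch.
import numpy as np
from flask import Flask, jsonify, request
from sklearn.linear_model import LogisticRegression

# Stand-in for a real training pipeline, using synthetic features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] - X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    """Option 1: online predictions behind a web-service endpoint."""
    features = request.get_json()["features"]  # e.g. {"features": [0.3, -1.2, 0.8]}
    prob = model.predict_proba([features])[0, 1]
    return jsonify({"positive_class_probability": float(prob)})

def batch_predict(rows: np.ndarray) -> np.ndarray:
    """Option 2: score a whole table at once, e.g. in a nightly batch job."""
    return model.predict_proba(rows)[:, 1]

if __name__ == "__main__":
    print(batch_predict(rng.normal(size=(5, 3))))  # batch scoring
    app.run(port=8080)                             # online serving
```

Which option is appropriate depends on concerns such as inference time, cost, and expected user demand, which are discussed in the infrastructure and model perspectives below.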
**Data scientist (DS)** leverages their expertise in data analysis, statistical modeling, and ML algorithms to extract insights, develop ML models, and drive data-driven decision-making, but they should also understand the constraints these systems put on the ML models they produce. For instance, what quality properties the ML model should consider? What domain restrictions may apply? what should be the complexity of the ML model? and how should the ML model be tuned to maximize business results? **Requirements engineers** collaborate closely with stakeholders to support the discussions between business owners, domain experts, and data scientists, and the development team, facilitating effective communication and understanding of project requirements. We seek to empower requirements engineers by using _PerSpecML_ to identify and resolve conflicts often associated with ML projects. For instance, how much loss of accuracy is acceptable to cut the inference latency in half? can data scientists sacrifice some accuracy but offer better interpretability and explainability? One of the main benefits of applying RE for ML projects is to help balance these concerns. ### Concerns In SE, a concern typically refers to a specific aspect, interest, or issue that needs to be addressed or considered during the development and maintenance of a software system, consequently influencing its design, implementation and behavior. When designing ML-enabled systems and breaking them down into components, it is crucial to identify which attributes are important to contribute to the overall system's quality. Determining this requires a deep understanding of the system's goals, stakeholders' requirements, and the overall context in which the software will be used. In the case of ML components, the challenge is further amplified since it incorporates models that make predictions based on patterns and trends learned from data, which introduce unique considerations. All of these considerations, including ML components and deterministic (non-ML) components, become concerns for practitioners in charge of designing an ML-enabled system. One of the main elements of _PerSpecML_ are its concerns. In total, we identified 59 concerns including, for example, data streaming, model serving and telemetry when thinking on the operation of the ML-enabled system, and inference time, explainability and reproducibility when thinking on the development of the ML model. The concerns, that can be seen as quality attributes, came from i) own experiences of the authors of this work who have been actively participated in real ML projects, from ii) literature reviews on RE for ML that have researched both academia and industry, and from iii) practitioners who iteratively evaluated the concerns and recommended new ones to be considered. In _PerSpecML_, the concerns are part of tasks that stakeholders typically face throughout the development of ML-enabled systems. ### Related Tasks Modeling In _PerSpecML_ we also focus on capturing and representing the tasks that should be performed by stakeholders to develop successful ML projects. In total, our approach outlines 28 tasks that are covered by the five perspectives. These tasks group associated concerns that should be analyzed by stakeholders. With this feature, stakeholders can more easily understand and describe how tasks are performed, what concerns are involved, the relationships between concerns, and the interactions with other stakeholders. 
For instance, typically in ML projects, data scientists are tasked with training, validating, and deploying ML models. These tasks involve implicit concerns that are not easily identified at first sight, such as inference time, learning time, model complexity and hyperparameters tuning. In addition, some specific tasks can benefit from involving more than one stakeholder in the analysis. For instance, to validate ML models it is necessary to generate model performance metrics, typically performed by data scientists, and analyze such metrics in collaboration with domain experts who deep understand the problem and data. In the early phases of developing ML-enabled systems, several key tasks should be performed to lay a strong foundation for the project's success. These tasks typically involve all the stakeholders, and concern understanding the problem, setting goals, among other. Table 1 details the tasks from a system objectives perspective. A positive user experience is crucial for the successful adoption, acceptance, and utilization of ML-enabled systems. It enhances user engagement, improves user satisfaction, and ultimately contributes to the overall success of the ML project. Table 2 details the tasks should be done to ensure that ML-enabled systems become a valuable and integral part of users' workflows. \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Task** & \multicolumn{2}{|c|}{**Description**} \\ \hline **Understand** & understand the problem domain and the real-world context in which the ML model will be deployed, and define the ML problem and the specific task to be solved \\ \hline **Set goals at different levels** & define the goals of the ML project at different levels in order to ensure that it meets the expectations of the stakeholders \\ \hline **Establish success indicators** & define measures that provide early insights on the achievement of the objectives \\ \hline **Manage expectations** & define what the ML model can and cannot do. Stakeholders may have unrealistic expectations about the ML capabilities, and providing clarity will prevent disappointment and frustration \\ \hline \end{tabular} \end{table} Table 1: Description of the tasks to define the system objectives \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Task** & \multicolumn{2}{|c|}{**Description**} \\ \hline **Establish the value of predictions** & determine that the ML model’s outputs are relevant, accurate, and impactful and how they contribute to achieving the project’s objectives \\ \hline **Define the interaction of predictions with users** & define how users will interact with predictions (_e.g._, frequency and forcefulness) in order to design user-friendly interfaces and workflows \\ \hline **Visualize predictions** & present ML model outputs in a visually understandable format. Visual aids such as charts, and graphs can help users comprehend complex data and insights \\ \hline **Collect learning feedback** & offer feedback mechanisms to users in order to provide updates on ML models \\ \hline **Ensure the credibility of predictions** & ensure that users have a clear understanding of the ML model’s capabilities and potential inaccuracies \\ \hline \end{tabular} \end{table} Table 2: Description of the tasks to ensure user experience A robust and well-designed infrastructure is fundamental for the success of ML projects. It enables efficient development, deployment, and scaling of ML models. Table 3 details the tasks of the infrastructure perspective. 
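One of the infrastructure tasks listed in Table 3, automating the end-to-end ML workflow, can be made more concrete with the following minimal sketch (an illustrative assumption, not an artifact of _PerSpecML_), which chains data preparation, training, evaluation, and artifact storage in a single reproducible script:

```python
# Minimal sketch (illustrative assumption): one reproducible script covering data
# preparation, training, evaluation, and storage of the resulting ML artifact.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def run_workflow(X: np.ndarray, y: np.ndarray, artifact_path: str = "model.joblib"):
    # 1. Split the data into training and test subsets.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    # 2. Data preparation and model training bundled in a single reproducible pipeline.
    pipeline = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
    pipeline.fit(X_train, y_train)
    # 3. Evaluation step; the same report could later feed model monitoring.
    print(classification_report(y_test, pipeline.predict(X_test)))
    # 4. Store the trained artifact so it can be served, versioned, or updated later.
    joblib.dump(pipeline, artifact_path)
    return pipeline

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    run_workflow(X, y)
```

In a real project, each numbered step would typically become a separate, monitored stage of an automated pipeline rather than a single script.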
A structured ML model development process fosters transparency, reproducibility, and accountability. It supports the creation of robust, reliable, and trustworthy ML solutions. Table 4 details the tasks of the model perspective. \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Task** & \multicolumn{2}{|c|}{**Description**} \\ \hline **Select and configure** & shortlist a set of ML algorithms that are well-suited for the task at hand, and experiment with different combinations of hyperparameters to find the optimal configuration that yields the best performance \\ \hline **Train the ML model** & create a ML model that captures the underlying patterns in the data and can make predictions on unseen examples \\ \hline **Validate the ML model** & ensure that the trained ML model meets the desired criteria \\ \hline **Deploy the ML model** & make the trained ML model available and operational in a production environment, allowing it to serve predictions to end-users or other systems \\ \hline **Evaluate other quality characteristics** & assess various aspects of the ML model beyond its predictive accuracy. Other quality characteristics are equally important for the model’s overall performance, reliability, and suitability for real-world applications \\ \hline \end{tabular} \end{table} Table 4: Description of the tasks to support the creation of ML models \begin{table} \begin{tabular}{p{142.3pt}|p{142.3pt}} \hline **Task** & \multicolumn{2}{|c}{**Description**} \\ \hline **Transport data to the model** & involves moving the relevant data from its source to the ML model for analysis, training, or prediction \\ \hline **Make the ML model** & refers to the process of deploying and exposing the trained ML model so that it can be accessed for making predictions \\ \hline **Update the ML model** & refers to the process of making improvements or modifications to an existing ML model to enhance its performance \\ \hline **Store ML artifacts** & involves the systematic storage and management of various artifacts generated throughout the ML development process \\ \hline **Observe the ML model** & involves analyzing the performance, behavior, and outcomes of both the ML model and the software system \\ \hline **Automate End-to-End ML workflow** & involves the design and implementation of a systematic and streamlined process that automates the ML workflow, from data preparation to model deployment and monitoring \\ \hline **Integrate the ML model** & involves incorporating the trained ML model into the larger software system where it will be used for making predictions \\ \hline **Evaluate the financial cost** & assess and analyze the expenses related to the computational resources, hardware, software, and services required to support the ML project \\ \hline \end{tabular} \end{table} Table 3: Description of the tasks to support the infrastructure of ML-enabled systems The management of data in ML projects is essential for building accurate and reliable ML models. Table 5 details the tasks to be done, mainly by data scientists and domain experts, to maintain high-quality data throughout the lifecycle of ML projects. ### Perspectives In SE, a perspective refers to a representation of a system or its components. It provides a focused way of analyzing a particular aspect of the system, allowing to capture different concerns and stakeholders' viewpoints. Perspectives have been effectively used in SE to model scenarios where team members work on a particular phenomena [5]. 
In _PerSpecML_, we modeled five perspective that are detailed as follows. **System Objectives Perspective:** When evaluating ML solutions, there is a tendency to focus on improving ML metrics such as the F1-score and accuracy at the expense of ensuring business value and covering business requirements [4]. Success in ML-enabled systems is hard to define with a single metric, therefore it becomes necessary to define success at different levels. This perspective involves analyzing the context and problem that ML will address to ensure that ML is targeting at the right problem; defining measurable benefits ML is expected to bring to the organization and users; what system and model goals will be evaluated; the ML expected results in terms of functionality, and ML trade-off to deal with customer expectations. Table 6 details the concerns when thinking on objectives for ML-enabled systems. **User Experience Perspective:** A good ML-enabled system includes building better experiences of using ML. The goal of this perspective is to present the predictions of the ML model to users in a way that achieves the system objectives and gets user feedback to improve the ML model. Therefore, we consider analyzing concerns such as defining what is the added value as perceived by users from the predictions to their work; how strongly the system forces the user to do what the ML model \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Task** & **Description** \\ \hline **Access data** & involves timely obtaining and retrieving the necessary data from various sources to be used for model development and evaluation \\ \hline **Select and describe data** & involves carefully choosing the relevant data that will be used to train, validate, and test ML models, and describing the features of the data \\ \hline **Evaluate high-quality data** & involves a comprehensive assessment of the data used for training and testing ML models in order to ensure that the data meets certain criteria and standards to produce accurate and reliable results \\ \hline **Convert data in the representation of the ML model** & involves transforming the raw input data into a format that can be processed by the ML algorithm \\ \hline **Split dataset** & involves dividing the available data into separate subsets for training, validation, and testing purposes \\ \hline **Define a golden dataset** & involves creating a high-quality dataset that represents the problem domain and serves as the ground truth for training and evaluating ML models \\ \hline \end{tabular} \end{table} Table 5: Description of the tasks to support data quality in ML projects indicates; how often the ML model interacts with users; how the predictions will be presented so that users get value from them; how the users will provide new data for learning; and what is the user impact of a wrong ML model prediction. Table 7 details the concerns when thinking on user experience for ML-enabled systems. 
\begin{table} \begin{tabular}{|c|p{14.2pt}|p{142.3pt}|} \hline **Id** & **Concern** & **Addressing this concern involves specifying** \\ \hline **O1** & **Context** & the specific circumstances, environment, or conditions in which the ML-enabled system will operate \\ \hline **O2** & **Need** & the requirement, desire, or gap that must be addressed to achieve a particular set of circumstances within a given context \\ \hline **O3** & **ML functionality** & the nature of the learning problem and the desired outcome that the ML model is designed to achieve (_e.g._, classify customers) \\ \hline **O4** & **Profit hypothesis** & how the ML system’s outcomes will translate into tangible gains for the organization \\ \hline **O5** & **Organizational goals** & measurable benefits ML is expected to bring to the organization. _E.g._, increase the revenue in X%, increase the number of units sold in Y%, number of trees saved \\ \hline **O6** & **System goals** & what the system tries to achieve, with the support of an ML model, in terms of behavior or quality \\ \hline **O7** & **User goals** & what the users want to achieve by using ML. _E.g._, for recommendation systems this could involve helping users find content they will enjoy \\ \hline **O8** & **Model goals** & metrics and acceptable measures the model should achieve (_e.g._, for classification problems this could involve accuracy X%, precision Y%, recall Z%) \\ \hline **O9** & **Leading indicators** & measures correlating with future success, from the business’ perspective. This includes the users’ affective states when using the ML-enabled system (_e.g._, customer sentiment and engagement) \\ \hline **O10** & **ML trade-off** & the balance of customer expectations (_e.g._, inference time vs accuracy, false positive vs false negative) \\ \hline \end{tabular} \end{table} Table 6: Description of each concern of the system objectives perspective \begin{table} \begin{tabular}{|c|p{142.3pt}|p{142.3pt}|} \hline **Id** & **Concern** & **Addressing this concern involves specifying** \\ \hline **U1** & **Value** & the added value as perceived by users from the predictions \\ \hline **U2** & **Forcefulness** & how strongly the system forces the user to do what the ML model indicates they should (_e.g._, automatic or assisted actions) \\ \hline **U3** & **Frequency** & how often the system interacts with users (_e.g._, whenever the user asks for it or whenever the system thinks the user will respond) \\ \hline **U4** & **Visualization** & user-friendly interfaces to showcase the ML model’s outputs and facilitate its integration into the customer’s existing systems (_e.g._, specifying dashboard and visualization prototypes for validation) \\ \hline **U5** & **Learning feedback** & what interactions the users will have with the ML-enabled system in order to provide new data for learning, or human-in-the-loop systems where ML models require human interaction \\ \hline **U6** & **Acceptance** & how well and how the model arrives at its decisions \\ \hline **U7** & **Accountability** & who is responsible for unexpected model results \\ \hline **U8** & **Cost** & the user impact of a wrong ML model prediction \\ \hline **U9** & **User education \& Training** & the need to provide user education and training on the limitations of the ML-enabled system and how to interpret its outputs \\ \hline \end{tabular} \end{table} Table 7: Description of each concern of the user experience perspective #### Infrastructure Perspective: ML models produced by data scientists 
typically are turned into functional and connected software systems that demand special characteristics when in operation. The goal of this perspective is to cover the execution of the ML model, the monitoring of both data and model outputs, and its learning from new data. We consider analyzing concerns such as defining what streaming strategy will be used to connect data with the ML model; how the ML model will be served; the need for the ML model to continuously learn from new data to extend its knowledge; where the ML artifacts (_e.g._, experiments, ML models, datasets) will be stored; the need for monitoring the ML model and data; the strategy to automate ML operations that allow to reproduce and maintain ML artifacts, and the integration the ML model will have with the rest of the system functionality. Table 8 details the concerns when thinking on the infrastructure for ML-enabled systems. #### Model Perspective: Building a ML model implies not only cleaning and preparing data for analysis, and training an algorithm to predict some phenomenon. Several other aspects determine its quality. This perspective involves analyzing concerns such as defining the initial candidate of expected inputs and outcomes (of course, the set of meaningful inputs can be refined during pre-processing activities); the set of algorithms that could be used according to the problem to be addressed; the need to tune the hyperparameters of the algorithms; the metrics used to evaluate the ML model and measurable performance expectations that tend to degrade over time; the need for explaining and understanding reasons of the model outputs; the ability of the ML model to perform well as the size of the data and the complexity of the problem increase (scalability), to deal with discrimination and negative consequences for \begin{table} \begin{tabular}{|c|p{142.3pt}|p{142.3pt}|} \hline **Id** & **Concern** & **Addressing this concern involves specifying** \\ \hline **11** & **Data streaming** & what data streaming strategy will be used (_e.g._, real time data transportation or in batches) \\ \hline **12** & **Model serving** & how the ML model will be executed and consumed (_e.g._, client-side, back-end, cloud-based, web service end-point) \\ \hline **13** & **Incremental learning** & the need for ML-enabled system abilities to continuously learn from new data, extending the existing model’s knowledge \\ \hline **14** & **Storage** & where the ML artifacts (_e.g._, models, data, scripts) will be stored \\ \hline **15** & **Monitorability** & the need to monitor the data and the outputs of the ML model to alert/detect when data drifts or changes \\ \hline **16** & **Telemetry** & what ML-enabled system data needs to be collected. Telemetry involves collecting data such as clicks on particular buttons and could involve other usage data \\ \hline **17** & **Reproducibility** & the need to repeatedly run an algorithm/ML process on certain datasets/experiments and obtain the same (or similar) results \\ \hline **18** & **Maintainability** & the need to modify ML-enabled systems to improve performance or adapt to a changed environment \\ \hline **19** & **Integration** & the integration that the model will have with the rest of the system functionality (_e.g._, safety, security, privacy, fairness, legal) \\ \hline **110** & **Cost** & the financial cost involved in executing the inferences and with the infrastructure that could affect architectural decisions. 
Great models can be unusable due to the cost to run and maintain them \\ \hline \end{tabular} \end{table} Table 8: Description of each concern of the infrastructure perspective certain groups (bias & fairness), to protect sensitive data and prevents unauthorized access (security & privacy); the acceptable time to train and execute the ML model, and the complexity of the ML model in terms of size and generalization. In Table 9, we provide the description of the concerns that may be relevant to select, train, tune and validate a ML model. **Data Perspective:** Data is critical to ML. Poor data will result in inaccurate predictions. Hence, ML requires high-quality input data. Based on the Data Quality model defined in the standard ISO/IEC 25012 [25] and our own experience, we elaborate on the data perspective. In this perspective, we considered concerns such as defining from where the data will be obtained; the strategy to select data; the \begin{table} \begin{tabular}{|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline **Id** & **Concern** & **Addressing this concern involves specifying** \\ \hline \multirow{3}{*}{**M1**} & **Algorithm \& model selection** & the set of algorithms that could be used/investigated, based on the ML problem and other concerns to be considered (_e.g._, constraints regarding explainability or model performance, for instance, can limit the solution options) \\ \hline \multirow{3}{*}{**M2**} & **Algorithm tuning** & the need to choose a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process \\ \hline \multirow{3}{*}{**M3**} & **Input \& Output** & the expected inputs (features) and outcomes of the model. Of course, the set of meaningful inputs can be refined/improved during pre-processing activities, such as feature selection \\ \hline \multirow{3}{*}{**M4**} & **Learning time** & the acceptable time to train the model \\ \hline \multirow{3}{*}{**M5**} & **Performance metrics** & the metrics used to evaluate the model (_e.g._, precision, recall, F1-score, mean square error) and measurable performance expectations \\ \hline \multirow{3}{*}{**M6**} & **Baseline model** & the optional simple model that acts as a reference. Its main function is to contextualize the results of trained models \\ \hline \multirow{3}{*}{**M7**} & **Inference time** & the acceptable time to execute the model and return the predictions \\ \hline \multirow{3}{*}{**M8**} & **Model size** & the size of the model in terms of storage and its complexity (_e.g._, for decision trees there might be needs for pruning) \\ \hline \multirow{3}{*}{**M9**} & **Performance degradation** & the awareness of performance degradation. Over time many models’ predictive performance decreases as a given model is tested on new datasets within rapidly evolving environments \\ \hline \multirow{3}{*}{**M10**} & **Versioning** & the versions of libraries, ensuring compatibility, and handling any conflicts or issues that may arise due to dependencies. This is important for maintaining reproducibility, portability, and ensuring that the ML model can be easily set up and executed on different systems \\ \hline \multirow{3}{*}{**M11**} & **Interpretability \& Explainability** & the need to understand reasons for the model inferences. The model might need to be able to summarize the reasons for its decisions. 
Other related concerns such as transparency, may apply \\ \hline \multirow{3}{*}{**M12**} & **Scalability** & the need for the model to perform well as the size of the data and the complexity of the problem increases \\ \hline \multirow{3}{*}{**M13**} & **Bias \& Fairness** & the need for the model to treat different groups of people or entities \\ \hline \multirow{3}{*}{**M14**} & **Security \& Privacy** & the need for the model to protect sensitive data and prevents unauthorized access \\ \hline \end{tabular} \end{table} Table 9: Description of each concern of the model perspective description of data; evaluating the inherent quality data attributes (_e.g._, accuracy, completeness, consistency, real usage); what data operations and modeling must be applied; the expected data distributions and how data will be split into training, validating and test data; the time between when data is expected and when it is readily available for use, and the need for a golden dataset approved by a domain expert. Table 10 details the concerns when thinking on data for ML-enabled systems. ### Relationship between Concerns Identifying relationships that show influence and implications between the concerns of an ML-enabled system is of paramount importance for successful project outcomes. These relationships extend across various dimensions, such as system design, risk management, and resource allocation. Understanding these factors allows for optimal decision-making, alignment with ML project goals, and efficient workflow planning. In _PerSpecML_, we highlight these relationships to (i) help stakeholders identify conflicting objectives and requirements, and (ii) promote transparent communication \begin{table} \begin{tabular}{|c|p{142.3pt}|p{142.3pt}|} \hline **Id** & **Concern** & **Addressing this concern involves specifying** \\ \hline **D1** & **Source** & from where the data will be obtained \\ \hline **D2** & **Timeliness** & the time between when data is expected and when it is readily available for use \\ \hline **D3** & **Data selection** & the process of determining the appropriate data type and suitable samples to collect data \\ \hline **D4** & **Data dictionary** & the collection of the names, definitions, and attributes for data elements and models \\ \hline **D5** & **Quantity** & the expected amount of data according to the type of the problem and the complexity of the algorithm \\ \hline **D6** & **Accuracy** & the need to get correct data \\ \hline **D7** & **Completeness** & the need to get data containing sufficient observations of all situations where the model will operate \\ \hline **D8** & **Credibility** & the need to get true data that is believable and understandable by users \\ \hline **D9** & **Real usage** & the need to get real data representing the real problem \\ \hline **D10** & **Bias** & the need to get data fair samples and representative distributions \\ \hline **D11** & **Consistency** & the need to get consistent data in a specific context \\ \hline **D12** & **Ethics** & the need to get data to prevent adversely impacting society (_e.g._, listing potential adverse impacts to be avoided) \\ \hline **D13** & **Anonymization** & the need to anonymize or pseudonymize to protect individual identities while still maintaining the utility of the data for ML purposes \\ \hline **D14** & **Data operations \& Modeling** & what operations must be applied on the data (e.g., data cleaning and labeling) and what is necessary to convert data in the representation of the model. 
\\ \hline **D15** & **Data distribution** & the expected data distributions and how data will be split into training, validating and test data \\ \hline **D16** & **Golden dataset** & the need for a baseline dataset approved by a domain expert that reflects the problem. It is employed to monitor other data acquired afterwards \\ \hline \end{tabular} \end{table} Table 10: Description of each concern of the data perspective between team members, ensuring the long-term viability and impact of ML projects. For instance, if users require to know the reasons of the ML model's decision-making then the explainability & interpretability concern arises. But this may depend on the chosen algorithm since some ML algorithms tend to be less explainable than others (_e.g._, simpler ML algorithms such as decision trees, linear regression, and logistic regression are often considered more explainable than complex ML algorithms such as deep neural networks, random forests, and gradient boosting models). In addition, complex ML models may provide high accuracy, making it necessary to strike a balance between these concerns based on the specific needs and constraints of the ML project. Identifying these relationships is also important within the infrastructure perspective. For instance, defining the source to access data influences the implementation or setup of a data streaming solution, which is required to transport the data to the ML model. Understanding these kind of relationships helps optimize the ML workflow and streamline the project execution. On the other hand, in the system objectives perspective, the ML functionality guides the selection of appropriate ML algorithms (_i.e._, different tasks, such as classification or regression, require specific algorithms that are suitable for the task at hand). Furthermore, it affects how the ML model's performance is evaluated and measured (_i.e._, different performance metrics, such as accuracy or recall are used based on the specific task). All _PerSpecML_ relationships can be found in our online repository1. Footnote 1: [https://doi.org/10.5281/zenodo.7743479](https://doi.org/10.5281/zenodo.7743479) ### Perspective-Based ML Task and Concern Diagram In order to provide a holistic view of the ML-enabled system that facilitates producing a description of what will be built and delivers it for approval and requirements management, we present a perspective-based ML task and concern diagram that integrates the key components discussed earlier: concerns, tasks, perspectives, and stakeholders. Table 11 shows the notation we used to represent these components in the diagram. \begin{table} \begin{tabular}{c|c} \hline \hline **Notation** & **Description** \\ \hline \multirow{2}{*}{**Perspective (ld)**} & The diagram contains five rounded rectangles that represent the perspectives. Each perspective is associated with a color to facilitate its identification, and is connected to their tasks \\ \cline{2-3} & Task \\ \hline \multirow{2}{*}{**Stakeholder**} & The diagram contains rectangles attached to a perspective that connect a task (at the top right) to one or more concerns (at the bottom). Each task has at least one actor suggested (at the top left) related to the execution of the task and the analysis of the concerns \\ \hline \hline \end{tabular} \end{table} Table 11: Legend of the perspective-based ML task and concern diagram The perspective-based ML task and concern diagram shown in Fig. 
4 serves as a visual representation of the interplay between these components and their relationships within the context of ML projects. It offers a comprehensive overview of how different perspectives shape the tasks at hand, while considering the specific concerns associated with each task. Additionally, it highlights the involvement of various stakeholders who contribute their expertise and insights throughout the development process. By presenting this integrated diagram, we aim to provide a clear and structured approach for understanding the complex dynamics involved in building successful ML-enabled systems.

### Perspective-Based ML Specification Template

Documenting and organizing requirements is crucial for ensuring a clear understanding of the desired software system functionality, facilitating communication and collaboration, verifying and validating requirements, managing changes, and enabling knowledge transfer. It plays a vital role in successful software development and project outcomes. In order to fulfill these promises, we proposed a specification template based on the perspective-based ML task and concern diagram that provides a standardized format for documenting and organizing the applicable concerns of ML-enabled systems. Fig. 5 presents the perspective-based ML specification template for the user experience and infrastructure perspectives. The complete template is available in our online repository1.

Figure 5: Perspective-based ML specification template for user experience and infrastructure perspectives
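For illustration only, a filled-in entry of such a specification template could be captured along the following lines; the field names and values are assumptions made for this sketch and do not reproduce the exact structure shown in Fig. 5.

```python
# Hypothetical sketch of filled-in specification entries; field names and values
# are illustrative assumptions, not the exact structure of the PerSpecML template.
specification_entries = [
    {
        "perspective": "Infrastructure",
        "concern": "Storage",
        "guiding_question": "Where will ML artifacts (models, data, experiments) be stored?",
        "answer": "Models and experiment metadata go to the team's artifact repository.",
        "relevance": "important",  # desirable | important | essential
        "stakeholders": ["software/ML engineer", "data scientist"],
    },
    {
        "perspective": "Model",
        "concern": "Algorithm & model selection",
        "guiding_question": "Which algorithms could be investigated for this ML problem?",
        "answer": "To be refined through experimentation.",
        "relevance": "essential",
        "stakeholders": ["data scientist"],
    },
]

for entry in specification_entries:
    print(f"[{entry['perspective']}] {entry['concern']}: {entry['answer']}")
```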
The template is designed to save time and effort during the specification process. This reduces redundancy, and allows stakeholders to focus on the specific details and concerns of the ML-enabled system. The perspective-based ML specification template consists of a set of predefined questions that guide the exploration and assessment of the concerns related to the tasks and perspectives. For example, if the concern is about the strategy to store ML artifacts, the template includes a question that highlights ML artifacts such as models, data, experiments, and environments. If the concern is about the strategy to improve the performance of the ML algorithm, the template includes a question that highlights options such as hyper-parameter tuning. By analyzing these question-oriented concerns, we seek to ensure that stakeholders perform a comprehensive and systematic exploration of the concerns. Owing to the nature of ML projects, some types of concerns (_e.g._, algorithm & model selection (M1) and data operations & modeling (D14)) are uncertain at the beginning of the project, mainly due to a common need for experimentation to get a better understanding of achievable requirements. Hence, they may be refined as the project progresses. The perspective-based ML specification template we proposed highlights these concerns with the letter "E".

### _PerSpecML_'s Logical Flow

In order to provide clarity, structure, reproducibility, and consistency, this section shows the steps to be followed for executing _PerSpecML_. The purpose is to break down the overall process into manageable and sequential tasks, making it easier for stakeholders to understand and follow. Fig. 6 summarizes the logical flow to ensure that _PerSpecML_ is executed in a systematic and organized manner, leading to more successful outcomes.

Figure 6: Logical flow for executing _PerSpecML_

We expect _PerSpecML_ to be used by requirements engineers, in collaboration with other stakeholders, to support the specification of ML-enabled systems. The process begins by considering the perspectives. We established an intuitive order to analyze them: system objectives, user experience, infrastructure, model, and data. Given a perspective, a requirements engineer or a stakeholder performing that function can analyze each concern with the recommended stakeholders, also considering the relationships between concerns. If the concern is applicable, it should be specified in the perspective-based ML specification template, classifying its relevance as desirable, important, or essential.

## 5 Validation in Academia

As we mentioned before, _PerSpecML_ is the result of a series of validations that were conducted in different contexts. The first validation was carried out within an academic environment where students had to use the candidate solution introduced in Section 3.3 to specify a toy problem. The simplified nature of the toy problem allowed for a clear understanding of how the candidate solution performed and how it could be improved. This led to valuable lessons and discoveries that were applied in the next validation with a more complex problem. In the following, we detail the validation in academia.
### Context

The academic validation took place in the context of two courses on SE for data science, with professionals, who are also students, from a Brazilian logistics company called Loggi2, and computer science graduate students from the Pontifical Catholic University of Rio de Janeiro (PUC-Rio). Participants were asked to specify a feature for an ML-enabled system using the example of a bank loan problem, by analyzing the candidate solution's perspectives and concerns. The feature consisted of automatically classifying customers into good or bad payers and was described in user story format.

Footnote 2: [https://www.loggi.com](https://www.loggi.com)

_As a Bank Manager I want to automatically classify customers so that I can decide upon granting a requested loan_

From the user story, we can infer that the ML component needs to access, for learning purposes, data on customer characteristics, previously granted loans, and payment records. Regarding non-ML components and integration with other services, the participants could assume restrictions and requirements of the software system that the ML component would use. With this information, we asked the participants to analyze each concern of the candidate solution and provide a reasonable specification, if applicable. Thereafter, they were asked to individually answer a follow-up questionnaire critically assessing the relevance and completeness of the candidate solution's perspectives and concerns. All the material provided to the participants is available in our online repository1. Fig. 7 illustrates the academic validation.

Footnote 1: [https://www.cds.org/](https://www.cds.org/)

Figure 7: Process diagram for the academic validation

### Goal and Method

We detail the goal of the validation in academia in Table 12. We followed the Goal-Question-Metric (GQM) goal definition template [5], which is a structured approach commonly used in SE and other disciplines, to help establish a clear connection between the overall goal, the specific questions that need to be answered, and the metrics used to measure progress.

\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline **Analyze** & the candidate solution’s perspectives and concerns \\ \hline **for the purpose of** & characterization \\ \hline **with respect to** & perceived relevance and completeness, and ease of use, usefulness and intended use \\ \hline **from the viewpoint of** & professionals and computer science graduate students \\ \hline **in the context of** & two courses with 53 data science professionals from Loggi and 15 computer science students from PUC-Rio who were learning SE for data science \\ \hline \end{tabular} \end{table} Table 12: Study goal definition of academic validation

Based on the goal, we established the following research questions for the validation in academia:

* **RQ1:** What is the relevance of each perspective of the candidate solution? We wanted to identify whether the perspectives of the candidate solution were perceived as meaningful and pertinent by the participants. This feedback helped confirm that the perspectives align with the needs and expectations of the intended users, and allowed us to identify areas that may need refinement.
* **RQ2:** Are the perspectives of the candidate solution and their concerns complete? This research question relates to the coverage of both the perspectives and concerns. This feedback helped to determine whether critical components were missing or whether there were gaps that needed to be addressed.
* **RQ3:** To what extent do participants perceive the candidate solution as useful and beneficial? With this, we seek to understand the factors that influence the acceptance and adoption of the candidate solution. The question followed the technology acceptance model (TAM) [14] and aimed to capture participants' overall assessment and intention to use the candidate solution, incorporating elements of perceived usefulness, perceived ease of use, and intended use.
* **RQ4:** What are the limitations and opportunities for improvement of the candidate solution? This research question seeks feedback on the approach itself.

### Selection of Subjects

The subjects were the attendees of two SE for data science courses. The in-company course at Loggi had 53 professionals with different backgrounds being trained in SE practices for building ML-enabled systems. The graduate course at PUC-Rio had 15 students (nine master's and six Ph.D. students). While students may have limited expertise compared to professionals in the field, they can provide fresh perspectives, helping us identify potential blind spots. In fact, using students as subjects remains a valid simplification of real-life settings needed in laboratory contexts [17]. In Table 13, we characterized the subjects by their educational background and average years of experience in ML projects. We can see that in the in-company course, not controlled by us, the professionals interested in data-driven projects are divided into those with a computer science background and those with a background in other areas such as economics and mathematics. However, this is not surprising, since the literature has already noted these findings for this role [28]. Overall, the participants were perceived as relatively inexperienced, as they possess only a few years of practical experience in developing ML-enabled systems. While the participants were selected by convenience (attendees of the courses), we believe that their profiles were suitable for our intended initial validation.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Course** & **Total** & **Background** & \begin{tabular}{c} **Experience** \\ **(Average in years)** \\ \end{tabular} \\ \hline In-company & \begin{tabular}{c} 33 \\ 20 \\ \end{tabular} & \begin{tabular}{c} computer science \\ other discipline \\ \end{tabular} & \begin{tabular}{c} 1.2 \\ 1.9 \\ \end{tabular} \\ \hline University & 15 & computer science & 1.3 \\ \hline \end{tabular} \end{table} Table 13: Subjects involved in the validation in academia

### Data Collection and Analysis Procedures

To address the research questions related to the relevance, completeness, perceived usefulness, and potential improvements of the candidate solution in specifying ML-enabled systems, a questionnaire-based evaluation method was employed. This section outlines the data collection and analysis procedures used in the validation in academia.

**Questionnaire Design:** A follow-up questionnaire was designed to gather responses from participants regarding the research questions. The questionnaire included a combination of closed-ended questions related to _RQ1_, _RQ2_ and _RQ3_, and one open-ended question related to _RQ4_, to obtain both quantitative and qualitative data.

**Data Collection:** The questionnaire was delivered to the participants in online format for the in-company course and in an in-person session for the university course. Participants were provided with clear instructions on how to perform the specification task and how to complete the questionnaire, as well as any specific considerations to keep in mind while responding.
Participants were provided with clear instructions on how to perform the specification task and how to complete the questionnaire, as well as any specific considerations to keep in mind while responding.

**Quantitative Data Analysis:** For RQ1, RQ2, and RQ3, which involve assessing relevance, completeness, and perceived usefulness, quantitative data analysis techniques were employed. Closed-ended questions were used to capture participants' ratings on a two-point Likert scale for _RQ1_ and _RQ2_, and a four-point Likert scale for _RQ3_. Statistical analyses, such as the mean and frequency distribution, were computed to summarize the quantitative data.

**Qualitative Data Analysis:** For RQ4, which seeks to identify potential changes or additions to support practitioners, qualitative data analysis techniques were utilized. Open-ended questions allowed participants to provide detailed and descriptive responses. Qualitative analysis involved thematic coding, categorization, and identification of patterns or recurring themes across the responses.

**Interpretation and Findings:** The analysis of the collected data was interpreted according to the research questions. The findings were presented in a clear and concise manner, addressing each research question separately. In this case, charts were used to illustrate the results, providing a comprehensive overview of the validation in academia.

### Results

#### RQ1. What is the relevance of each perspective of the candidate solution?

This question was designed as a single-choice question. To assess the relevance of each perspective of the candidate solution, participants were asked to rate the importance as high or low. The perspectives considered in this evaluation included ML objectives, user experience, infrastructure, model, and data. The results indicated that all perspectives were deemed relevant by the participants. Out of a total of 68 participants, 67 considered the data perspective highly relevant, indicating its significant importance in specifying ML-enabled systems. The ML objectives, model and infrastructure perspectives followed closely, at 66, 65 and 63, respectively. The user experience perspective received a slightly lower number of 58, indicating its relatively high but somewhat lesser relevance. Fig. 8 presents the relevance of the candidate solution's perspectives based on their respective ratings. We somewhat expected these results, since typically the main focus of practitioners in ML projects is data and models. In contrast, user experience concerns take a back seat in the development of ML-enabled systems. That is why with this work we seek to reinforce the importance of considering a user experience perspective.

#### RQ2. Are the perspectives of the candidate solution and their concerns complete?

This question was also designed as a single-choice question with the option to explain the answer. To assess the completeness of the perspectives and their associated concerns of the candidate solution, participants were provided with a list of predefined concerns corresponding to each perspective. They were then asked to indicate whether they believed the list was complete or if there were additional concerns that should be considered. The results revealed that participants generally considered the initial concerns and perspectives to be comprehensive but suggested some additional concerns. Only six out of 68 participants felt that something was missing.
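As a purely illustrative aside, the kind of "mean and frequency distribution" summary described under Quantitative Data Analysis (and reflected in the counts reported for RQ1 and RQ2) can be computed with a few lines of code; the ratings below are invented placeholders, not the study's actual responses.

```python
from collections import Counter
from statistics import mean

# Invented four-point Likert ratings (1 = strongly disagree ... 4 = strongly agree)
# for a single TAM construct; placeholder data only.
usefulness_ratings = [4, 4, 3, 4, 2, 4, 3, 4, 4, 3]

avg = mean(usefulness_ratings)      # mean rating across respondents
freq = Counter(usefulness_ratings)  # frequency distribution per scale point

print(f"mean = {avg:.1f}")
for point in sorted(freq):
    print(f"rating {point}: {freq[point]} responses")
```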
Across perspectives, the model perspective had the highest number of additional concerns identified by participants, highlighting the importance of monitoring ML models, optimizing the parameters of ML algorithms, and breaking down the concept of explainability. Below are the comments of the participants in that direction.

"There should be a monitoring concern related to the model view. In the same way we have to train the model, we have to monitor the model outputs"

"Parameter tuning in algorithms helps improve model performance. I would include this concern"

"Explainability could be divided into two: explainability and interpretability, given that there are explainable models that are not necessarily interpretable"

Figure 8: Frequencies of the relevance of each perspective of the candidate solution

#### RQ3. To what extent do participants perceive the candidate solution as useful and beneficial?

To gauge participants' perception of the acceptance of the candidate solution for specifying ML-enabled systems, participants were asked to rate the solution on various aspects. These aspects included ease of use, usefulness and intended use. Ratings were provided on a scale of 1 to 4 (four-point Likert scale), with 1 indicating strongly disagree, 2 indicating partially disagree, 3 indicating partially agree, and 4 indicating strongly agree. The TAM questionnaire results are shown in Fig. 9.

Figure 9: Frequencies of the TAM constructs for academic validation

The responses indicated a positive perception of the candidate solution. Participants from both courses rated the solution highly in terms of usefulness, with an average rating of 3.7, suggesting that the candidate solution can support the specification of ML-enabled systems. The ease of use of the candidate solution received an average rating of 3.1, indicating that the candidate solution did not provide enough guidance to be considered clear. The intended use of the candidate solution was rated at an average of 3.3, reflecting its feasibility and applicability. Overall, the candidate solution was perceived as highly useful, but showed potential for improvement in terms of ease of use. We understood that improving the candidate solution's guidance would imply an improvement in the perception of intended use.

#### RQ4. What are the limitations and opportunities for improvement of the candidate solution?

Here, participants had the option to respond in open text format. To identify potential improvements in supporting practitioners in specifying ML-enabled systems, participants were asked to provide suggestions regarding components, perspectives, or concerns that could be changed or added to enhance the candidate solution. The analysis of participants' responses revealed several valuable suggestions. As identified in the results of _RQ3_, some participants emphasized the need to further integrate the relationship between concerns. Others highlighted the importance of incorporating a roadmap to apply the candidate solution. Additionally, one participant recommended providing more practical examples and case studies to enhance the solution's applicability. In the following, we present the comments of the participants in that direction.
"It would be interesting to connect more concerns because I clearly see some relationships. For example, in the model perspective the explainability concern depends, to some extent, on the selection of the algorithm"

"I would suggest explaining better how to use the approach because sometimes I did not know where to start and when to end"

"Definitely a practical example would help to better understand the proposal"

These results provided insights into the relevance of the perspectives, the completeness of the concerns, the perceived usefulness, and potential improvements, guiding the refinement of the candidate solution. The validation in academia resulted in the following improvement opportunities.

1. In the infrastructure perspective, we decided to include 'monitorability' as a new concern, since this may require implementing different services such as real-time logging, alerts, and data drift detection
2. In the model perspective, we broke the explainability concern into 'explainability and interpretability', since these terms can have different interpretations
3. We added 'algorithm parameter tuning' as a new concern of the model perspective, since data scientists typically need to analyze strategies to improve ML metrics
4. We defined a **set of steps** to be followed by stakeholders in order to apply the candidate solution

## 6 Static Validation In Industry

At this point, we made some improvements to the candidate solution, resulting in a version called _PerSpecML v1_. Building upon the foundation of the candidate solution, _PerSpecML v1_ incorporates refinements and additions based on valuable feedback and insights from the students involved in the academic validation. In this section, we detail the second evaluation, which was carried out in industry, where practitioners had to use _PerSpecML v1_ to retroactively specify two ready-made ML projects. We call this evaluation static since it was performed without executing _PerSpecML v1_ in a real or simulated environment.

### Context

The static validation in industry involved practitioners of an R&D initiative called _ExACTa_ who developed two ML-enabled system projects from different domains for a large Brazilian oil company. The projects were developed following the Lean R&D approach [26] and are already deployed in production in several oil refineries. We refer to these projects as Project A and Project B since, for reasons of confidentiality and ongoing patent requests, they cannot be explicitly mentioned. Table 14 details these projects. We retroactively specified Projects A and B using _PerSpecML v1_ with the support of the product owner of each project, analyzing the perspectives and their concerns, and filling in a drafted specification template. This means that the specifications were added after the projects had already finished. Thereafter, we discussed the resulting specifications in a focus group with the practitioners who developed these projects. Lastly, we provided the practitioners with a follow-up questionnaire to critically evaluate _PerSpecML v1_. All mentioned artifacts are available in our online repository1. Fig. 10 shows the process diagram for the static validation in industry.

### Goal and Method

We detail the goal of the static validation in Table 15. Based on this goal, we established the following research questions:

* **RQ1:** What problems do participants face in practice when specifying ML-enabled systems? We wanted to identify the challenges and difficulties encountered by participants when specifying ML-enabled systems.
By understanding these problems, we analyzed the adherence to our solution, and identified the suitability of _PerSpecML v1_ to cover the needs of practitioners.
* **RQ2:** What perception do the participants have of the retroactive specifications of projects A and B derived from _PerSpecML v1_? By answering this research question, we gathered insights about the benefits or detriments of using _PerSpecML v1_.
* **RQ3:** What are the limitations and opportunities for improvement of _PerSpecML v1_? With the feedback received, we refined and enhanced _PerSpecML v1_.
* **RQ4:** To what extent do the participants perceive _PerSpecML v1_ as easy to use, useful and usable in the future? Through the TAM questionnaire, we explored the level of satisfaction and confidence participants had in _PerSpecML v1_ as an approach for specifying ML-enabled systems.

\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline **Analyze** & _PerSpecML v1_ (academically validated improved version) and its resulting specifications \\ \hline **for the purpose of** & characterization \\ \hline **with respect to** & perceived industrial relevance, ease of use, usefulness and intended use \\ \hline **from the viewpoint of** & practitioners \\ \hline **in the context of** & retroactively elaborated ML-enabled systems specifications using _PerSpecML v1_ with six experienced software practitioners involved in the development of these systems \\ \hline \end{tabular} \end{table} Table 15: Study goal definition of the static validation

### Selection of Subjects

We invited six practitioners who have been actively working on the development of ML-enabled systems in the _ExACTa_ initiative. Before starting the focus group and providing the questionnaire, we carefully selected the participants by asking them about their role and their years of experience working with ML projects. Table 16 shows an overview of the participant characterization.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Id** & **Role** & **Project** & **Experience (years)** \\ \hline P1 & Data scientist & A & 6 \\ \hline P2 & Data scientist & B & 2 \\ \hline P3 & Data scientist & B & 2 \\ \hline P4 & Developer & A & 2 \\ \hline P5 & Developer & B & 3 \\ \hline P6 & Project lead & A & 2 \\ \hline \end{tabular} \end{table} Table 16: Subjects involved in the static validation in industry

It is possible to observe that in this study participants represent three different roles: data scientists, who are interested in how the approach can help to build suitable and functional ML models; developers, who are interested in how the approach can help to design the integration between components; and project leaders, who are interested in how the approach can help the team achieve its goals. This allowed us to gather feedback from people who have different needs and priorities. On the other hand, the participants showed they have more than two years of experience, helping us determine whether _PerSpecML v1_ would work well in practice and what could be improved. Note that we selected three practitioners of each project involved in the evaluation.

### Data Collection and Analysis Procedures

To address the research questions, a combination of focus group discussions and questionnaires was employed for data collection. In the following, we outline the data collection and analysis procedures used in the static validation in industry.

#### Focus Group

We conducted a focus group to promote in-depth discussion on RQ1 and RQ2 [29].
Focus group is a qualitative research method that involves gathering a group of people together to discuss a particular topic, allowing for interaction between the participants, which can help to surface different viewpoints. **Procedure:** The focus group was conducted in a structured and moderated format. The discussions were guided by the first author using open-ended questions related to RQ1 and RQ2, allowing participants to share their experiences, perspectives, and challenges faced when specifying ML-enabled systems. **Data Collection:** We recorded the focus group with the consent of the participants to gather qualitative data. Transcripts of the focus group discussions were generated by the first author from the recordings, capturing participants' insights, ideas, and suggestions regarding RQ1 and RQ2. **Data Analysis:** Thematic analysis was employed to identify common themes, patterns, and recurring topics in the focus group data [42]. The transcripts were coded, and emerging themes were categorized with the consensus of the authors. By last, the final set of categories were analyzed to address the research questions. The transcriptions and all codes are available in our online repository1. Examples of codes are highlighted when presenting the results. Footnote 1: [https://github.com/](https://github.com/) to RQ4. These findings provided numerical insights and trends, allowing for a comprehensive understanding of participants' perceptions about the acceptance of _PerSpecML v1_. Qualitative data analysis techniques were also used to respond RQ3, involving coding and categorization. ### Results #### RQ1. What problems do participants face in practice when specifying ML-enabled systems? We asked the participants about the problems they face when specifying ML-enabled systems. We coded and categorized the transcriptions of such discussions and then analyzed them to answer this research question. We found that participants frequently mentioned _lack of approaches to support the specification_ given that ML incorporates additional challenges, which can make it difficult to specify ML-enabled systems. For instance, P6 stressed: "To the best of my knowledge there are no tools or approaches spread in industry helping practitioners to elicit, specify and validate requirements for ML systems" In the same line, P4 and P5 complemented: "I'm curious to see a formal specification of an ML component. Based on my experience, these definitions are informal and emerge as the project progresses" "Sometimes I feel that the ML development team often transmits skepticism to customers, not because of the lack of knowledge of its members, but because of the lack of an established process to define what can be done in ML terms with what the customer makes available (_e.g._, data, business information)" On the other hand, we identified expressions about specification problems derived from the _need to involve domain experts_. For instance, P1 reported that understanding the specific domain plays a major role for accurate specifications: "Typically domain experts are busy, so they tend to be less involved in the early phases of ML projects. In the end, they often find unexpected results. Their involvement is important in areas such as feature engineering, data pre-processing and model evaluation" P4 highlighted that customers often overestimate what ML can do. This leads to _unrealistic expectations of ML capabilities_, posing challenges in the specification process. 
The participant expressed: "Most of the time, customers expect that ML systems can solve all problems. They also don't imagine the number of components that are required to operate and maintain an ML model over time. Requirements engineering could help to address these challenges" These findings reflect some of the problems faced by participants in practice when specifying ML-enabled systems, as identified through the focus group discussions with experienced practitioners. The insights gained from these discussions shed light on the key areas that require attention to overcome challenges such as _the lack of approaches to support the specification, the need to involve domain experts, and the customer unrealistic expectations of ML capabilities_ RQ2. What perception do the participants have of the retroactive specifications of projects A and B derived from _PerSpecML v1_?_ After the participants analyzed the resulting specifications for Project A and B derived from _PerSpecML v1_, we asked them what they thought about it. Their feedback indicated positive perceptions of the specifications and their future impact on the development process. For instance, the participants highlighted that the specifications acted as a _guide during the development process_, helping to improve the overall development workflow. P1 manifested: "Looking at the diagram and its corresponding specifications allowed me to get an early overview of the requirements that can be refined as the project progresses. It is like a high-level guided development" P1, P3 and P6 expressed that the retroactive specifications _enhanced clarity and understanding_ of the ML-enabled systems for both projects: "I found that the specifications facilitated a better understanding of the systems' functionality, components, and data requirements, specially for Project A, in which I was involved" "I really liked the focus on diverse aspects such as data, model, and infrastructure. This landscape facilitates the understanding of the projects" "Identifying the tasks and concerns and their relationships allows identifying dependencies and influences as intended" In addition, P3 mentioned that using _PerSpecML v1_ allowed to _identify hidden concerns_ that are not easily identified at first sight: "Typically, user experience concerns are put in the background. With _PerSpecML_ was possible to early specify forcefulness, a concern analyzed late in the validation phase of Project B" Finally, P5 noted that the retroactive specifications derived from _PerSpecML v1_ helped in _documenting and communicating_ the ML-enabled systems for both projects: "In my opinion, it is easy to convey the specifications to stakeholders, enabling better collaboration and alignment throughout the development process. For example, as a developer I can identify tasks where I need to collaborate with data scientists" Overall, there was a clear consensus on the benefits of the retroactive specifications of Project A and B, derived from _PerSpecML v1_. According to the participants, the specifications _enhanced clarity and understanding, improved documentation and communication, acted as guide during the development process, and identified hidden concerns._ #### RQ3. What are the limitations and opportunities for improvement of _PerSpecML v1_?_ Participants' feedback revealed several limitations and opportunities for improvement. 
These insights, derived from the open-ended question of the questionnaire, can be related to the findings of RQ4, where we had participants who expressed partial agreement and disagreement about ease of use, usefulness, and intended use. For instance, P1 and P2 suggested that _providing additional guidance_ could help users grasp _PerSpecML v1_ more easily. "It is not clear to me how to get the specifications from analyzing the diagram. Even with the provided steps to apply the solution, it is not clear to me" "Providing tutorials or additional documentation could improve its application" Participants also provided feedback on _improving the user interface_ of _PerSpecML v1_, suggesting a more user-friendly design. "In my opinion, the specification template, which summarizes what the system should do, should be cleaner. I mean, the relationships between concerns are not needed as they exist in the diagram" "Better visualizations and intuitive navigation could further enhance the user experience and ease of use" On the other hand, P6 commented on _improving the relationship between tasks and concerns_. More specifically, the participant suggested breaking down a task of the ML objective perspective, since the concerns were not related at all. "In the ML objective perspective there is something that does not make sense. The 'define objectives' task has independent concerns that could be part of separate tasks" We identified limitations and opportunities for improvement of _PerSpecML v1_ related to _providing additional guidance, improving the user interface, and improving the relationship between tasks and concerns_. Some of them may be related with the participants' perceptions explored in RQ4. We addressed these limitations and capitalized on the opportunities for improvement, allowing to refine _PerSpecML v1_ to better meet the needs and challenges identified by practitioners. RQ4. To what extent do the participants perceive _PerSpecML v1_ as easy to use, useful and usable in the future? The participants' responses to a TAM questionnaire indicated varying degrees of agreement or disagreement with statements about ease of use, usefulness, and intended use. While the majority of participants totally agreed with the statements, there were a few participants who expressed partial agreement or disagreement. More specifically, one participant encountered some difficulties in using _PerSpecML v1_, two participants had reservations about its usefulness, and one participant was not fully confident in using it in the future. The TAM questionnaire results are shown in Fig. 11. These varied perceptions explained to some extent the feedback received in RQ3 for identifying areas of improvement and addressing any concerns or challenges raised by participants. At the end of this validation, we decided to consider the feedback of the practitioners of the _ExACTa_ initiative. In the following, we outline what was incorporated into _PerSpecML v1_ from this static validation in industry. 1. We added the **domain expert role** to the _PerSpecML v1 '_ stakeholders, including it in tasks 2. The steps defined in the academic validation to apply _PerSpecML v1_ turned into a **workflow diagram** to facilitate its application 3. We improved the _PerSpecML v1_ documentation by creating a **Miro board1** that summarizes the perspectives, tasks and concerns to be analyzed. 
We also added a **practical use case** and **explanations** of each _PerSpecML_ component. Footnote 1: [https://miro.com/miroverse/perspecml-machine-learning/](https://miro.com/miroverse/perspecml-machine-learning/)
4. We improved the user interface of both the diagram and the specification template by adding **colors** that identify each perspective and their concerns
5. We simplified the specification template by **removing the representation of the relationships between concerns** (leaving them only in the perspective-based ML task and concern diagram, as they are used during the analysis)
6. We checked the **terminology** and the **relationship between tasks and concerns** of each perspective to ensure their suitability

Figure 11: Frequencies of the TAM constructs for the static validation in industry

## 7 Dynamic Validation in Industry

Based on the valuable feedback and insights from the practitioners involved in the static validation, we made significant improvements to _PerSpecML v1_, resulting in a more robust and enhanced version called _PerSpecML v2_. In this section, we evaluated _PerSpecML v2_ by performing (i) requirements workshop sessions and (ii) interviews with practitioners who work for a large Brazilian e-commerce company known as Americanas, which offers technology, logistics, and consumer financing services. We call this validation dynamic since it was performed by executing _PerSpecML v2_ to specify two real ML projects from scratch.

### Context

We conducted the dynamic validation on two distinct case studies at Americanas, where each case study involved a real ML-enabled system that was specified from scratch using _PerSpecML v2_. Each system was assigned a team made up of novice and experienced practitioners. The purpose of these ML-enabled systems is to enhance the user experience, increase engagement, and drive the business goals of the Americanas company. Table 17 details the ML-enabled systems that were part of this evaluation.

\begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|} \hline **System** & **ML domain** & **Description** \\ \hline Product Classification & Natural Language Processing & It classifies titles of products registered by sellers in the marketplace of the Americanas company into categories. Based on the correct category, basic attributes for registering the product details are then provided to the seller \\ \hline Market & Recommendation System & It suggests products to customers that are likely to be of interest or relevance to them. Based on historical data and similarity measures, the products are recommended \\ \hline \end{tabular} \end{table} Table 17: ML-enabled systems involved in the dynamic validation

Regarding the operation of these studies, we assisted practitioners in the application of _PerSpecML v2_ in requirements workshop sessions by providing the necessary materials and information in advance. This included documentation on _PerSpecML v2_ and example use cases. During the sessions, the practitioners analyzed and specified the ML-enabled systems by using _PerSpecML v2_. The specifications were made by adding post-its to the interactive Miro board we created from the static validation. Thereafter, we interviewed, in two additional sessions, the experienced practitioners who have knowledge of the domain problem and led the design and implementation of both ML-enabled systems to discuss the resulting specifications. Finally, we provided all practitioners with a follow-up questionnaire to critically evaluate _PerSpecML v2_ and the resulting specifications. All mentioned artifacts are available in our online repository1. Fig. 12 shows the process diagram for the dynamic validation in industry.

Figure 12: Process diagram for the dynamic validation in industry
### Goal and Method

We detail the goal of the case studies of the dynamic validation in Table 18. We followed the GQM template to describe what we evaluated in this second industrial validation. Here, we also describe the research questions.

\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline **Analyze** & _PerSpecML v2_ (statically validated improved version) and its resulting specifications \\ \hline **for the purpose of** & characterization \\ \hline **with respect to** & the perceived quality of the specifications derived from _PerSpecML v2_, and ease of use, usefulness and intended use of _PerSpecML v2_ \\ \hline **from the viewpoint of** & practitioners \\ \hline **in the context of** & (i) two requirements workshop sessions involving 11 novice practitioners and three experienced practitioners who used _PerSpecML v2_ to specify two ML projects from scratch, and (ii) two interviews with the three experienced practitioners who evaluated the resulting specifications derived from _PerSpecML v2_ \\ \hline \end{tabular} \end{table} Table 18: Study Goal Definition of the Dynamic Validation

Based on the presented goal, aligned to the purpose of a dynamic industrial validation, we defined the following research questions to better understand the practical suitability of using _PerSpecML v2_.

* **RQ1:** What perception do practitioners have while specifying ML-enabled systems by using _PerSpecML v2_? For this research question, we conducted a comprehensive evaluation of practitioners' experiences while specifying ML-enabled systems using _PerSpecML v2_. During the requirements workshop sessions, we observed their interactions with _PerSpecML v2_, noted any challenges or difficulties they encountered, and gathered their feedback through discussions.
* **RQ2:** What perception do experienced practitioners have of the resulting specifications derived from _PerSpecML v2_? To answer this question, we interviewed three experienced practitioners who reviewed and discussed the specifications derived from _PerSpecML v2_. We selected them since experienced practitioners can better assess the efficiency and effectiveness of _PerSpecML v2_ than novices, for instance, by comparing it to existing methods they have used in the past. During the interviews, the experienced practitioners provided their feedback and insights on the specifications. The goal was to gather valuable insights into how the experienced practitioners perceived the quality, completeness, and suitability of the specifications produced by using _PerSpecML v2_.
* **RQ3:** What are the limitations and opportunities for improvement of _PerSpecML v2_? To explore this research question, we considered the feedback and discussions from both the novice and experienced practitioners.
The novice practitioners' first-hand experience with using _PerSpecML v2_ shed light on challenges, difficulties, and limitations they encountered while applying the approach. Additionally, the insights provided by the experienced practitioners allowed us to identify areas for improvement and potential enhancements. With the feedback received, we further refined _PerSpecML v2_ and arrived at its final version.
* **RQ4:** To what extent do the practitioners perceive _PerSpecML v2_ as easy to use, useful and usable in the future? To address this research question, we provided the participants with a follow-up questionnaire. We collected feedback from both the novice and experienced practitioners regarding their perception of _PerSpecML v2_ as an approach for specifying ML-enabled systems. The novice practitioners, who used _PerSpecML v2_ during the requirements workshop sessions, provided their insights on the ease of use, usefulness, and usability of the approach. Additionally, the experienced practitioners shared their opinions on the practicality and potential future utility of _PerSpecML v2_. By analyzing their feedback, we gained a comprehensive understanding of how _PerSpecML v2_ was perceived by practitioners across different experience levels.

### Selection of Subjects

The dynamic validation involved two main groups of participants from Americanas: novice practitioners who specified two ML-enabled systems from scratch using _PerSpecML v2_, and experienced practitioners who also specified the systems and additionally evaluated the resulting specifications. The practitioners were characterized by having varied backgrounds, such as computer science, mathematics, physics, and others. The diversity in their educational background and experience helped validate the maturity of _PerSpecML v2_. Their feedback shed light on its suitability for real-world implementation and whether it meets the expectations and requirements of industry professionals. In Table 19, we characterized the subjects by their role in the development of the ML-enabled systems involved in this study, educational background, and years of experience in ML projects. The subjects involved in specifying the ML-enabled systems from scratch were divided into two teams. In the first one, which we call Team A, we had six novice practitioners and one experienced practitioner responsible for the _Product Classification_ system. In the second one, which we call Team B, we had five novice practitioners and two experienced practitioners responsible for the _Market_ system. We highlighted the experienced practitioners who led each team with grey color in order to differentiate them from the novices. Note that the experienced practitioners are data scientists with a different educational background than computer science or engineering (except for P14), as expected for these positions [3, 28].

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Team** & **Id** & **Role** & **Background** & **Experience (years)** \\ \hline Team A & P1 & Developer & Computer science & 1 \\ \hline Team A & P2 & Developer & Design & 1 \\ \hline Team A & P3 & Developer & Computer science & 1 \\ \hline Team A & P4 & Developer & Computer engineering & 1 \\ \hline Team A & P5 & Scrum master & Physics & 1 \\ \hline Team A & P6 & Data scientist & Computer science & 1 \\ \hline Team A & P7 & Data scientist & Linguistics & 8 \\ \hline Team B & P8 & Developer & Electronic engineering & 1 \\ \hline Team B & P9 & Developer & Computer engineering & 1 \\ \hline Team B & P10 & Developer & Computer science & 1 \\ \hline Team B & P11 & Developer & Mathematics & 1 \\ \hline Team B & P12 & Scrum master & Computer science & 2 \\ \hline Team B & P13 & Data scientist & Electrical engineering & 4 \\ \hline Team B & P14 & Data scientist & Computer science & 6 \\ \hline \end{tabular} \end{table} Table 19: Subjects involved in the dynamic validation in industry

### Data Collection and Analysis Procedures

To address the research questions outlined in this dynamic validation, we employed three main data collection procedures: requirements workshop sessions, interviews, and a follow-up questionnaire.

#### Requirements Workshop Sessions

**Workshop Design:** We designed the requirements workshop sessions with a clear agenda and objectives, and outlined the tasks that the participants performed during the workshop, such as using _PerSpecML v2_ to specify the two ML-enabled systems from scratch. This provided the input to respond to RQ1.
**Data Collection:** During the sessions, we collected data in the form of written specifications produced by the practitioners. These specifications included concerns on the five perspectives, namely objectives, user experience, infrastructure, model, and data.

### Interviews

**Interview Design:** We developed a semi-structured interview protocol for RQ2. The protocol included a set of open-ended questions focused on the experienced practitioners' perception of the resulting specifications derived from _PerSpecML v2_. Questions explored aspects such as the quality, completeness, clarity, and effectiveness of the specifications. This shed light on answering RQ2.

**Data Collection:** We conducted interviews with the experienced practitioners. During the interviews, we used the protocol to guide the discussions, while allowing practitioners to share their thoughts and insights freely. We recorded the interviews in video format, with their consent, in order to ensure accurate capture of responses and allow for later review and analysis.

**Data Analysis:** We transcribed the video recordings of the interviews into text format in order to analyze the participants' responses, and then we applied coding techniques to categorize them into themes. In addition, we triangulated by comparing and cross-referencing the results from the different interviewees.

**Reporting:** We summarized the findings and insights from the interviews in a structured manner by including direct quotes and paraphrased statements from the practitioners to support the analysis and interpretations.

### Questionnaire

**Questionnaire Design:** The questionnaire included structured questions and rating scales designed to capture quantitative and qualitative data related to RQ4 and RQ3, respectively. It addressed perceptions and feedback regarding the usefulness and ease of use of _PerSpecML v2_, and identified limitations or opportunities for improvement.

**Data Collection:** The questionnaire responses were collected electronically through an online survey platform, taking care of anonymity and confidentiality.

**Data Analysis:** Quantitative data analysis techniques, such as descriptive statistics and inferential analysis, were used to analyze the questionnaire responses related to RQ4. These findings provided numerical insights and trends, allowing for a comprehensive understanding of participants' perceptions about the acceptance of _PerSpecML v2_.
Qualitative data analysis techniques were also used to respond RQ3, involving coding and categorization. ### Results RQ1. What perception do practitioners have while specifying ML-enabled systems by using _PerSpecML v2_? During the workshop specification sessions, we observed the interactions of practitioners with _PerSpecML v2_ to identify benefits or difficulties they encountered. The comments and discussions indicated that practitioners had a generally positive perception of _PerSpecML v2_ as a supportive tool for guiding them through the specification process. For instance, novice practitioners P3 and P5 appreciated _the visual and intuitive interface of PerSpecML v2_: "At first sight, I was able to identify each perspective, its tasks, and their concerns. This helps me to better understand the requirements and dependencies of the _Product Classification_ system" "I find the specification template and language constructs within _PerSpecML_ beneficial in structuring the specifications effectively" As the workshops progressed, practitioners recognized the _PerSpecML v2_'s role in _early identification and resolution of potential concerns_ in ML projects, and its _ability to facilitate collaboration and communication_ among different teams involved in ML projects. P11, P13, P1 and P3 expressed: "Many times in our projects some of these concerns are only addressed when it is clearly too late. I see the diagram as a roadmap that allows me to identify components that would not be identified without its use" "There are several tasks that at the beginning of the project do not concern our team, but that deserve to be analyzed for their relationships with others" "_PerSpecML_ summarizes the work of several ML teams in one diagram" "Linking the model update task in the infrastructure perspective with the need to get user feedback in the user experience perspective makes sense. This encourages communication between teams involved in ML projects" While some initial learning curve was observed, practitioners quickly grasped _PerSpecML v2_'s functionalities and became comfortable using the approach. Their perception of usability and effectiveness improved as they gained more hands-on experience during the workshop sessions. RQ3 gave us more insights in this line. RQ2. What perception do experienced practitioners have of the resulting specifications derived from _PerSpecML v2_? The experienced practitioners expressed positive feedback regarding the resulting specifications derived from _PerSpecML v2_ for the two ML projects. For instance, P13 and P14 appreciated the _clear and well-structured nature_ _of the_ _specifications_, and the _utility for specific users_: "The specifications demonstrated a good understanding of the ML projects' requirements, guiding the novice practitioners through the specification process" "The diagram can be extremely helpful for novice data scientists or engineers to get an overview of the ML workflow" However, P7 pointed out minor areas where specifications could be further refined to better align with specific project needs: "I am not sure if at the end the specifications are already sufficiently clear, but I can state what has been raised is reasonable and useful. Coming up with a clear specification requires refinements and increments" Indeed, the requirements workshop was supposed to be the first effort towards comprehensive specifications that should be further improved after the workshop. 
On the other hand, P7 and P14 (experienced practitioners from separate workshops) both compared _PerSpecML v2_ with the approach they used so far in their projects. "_PerSpecML_ provides a more comprehensive overview and is far better than the ML canvas to support specifying ML-enabled systems" "Currently, we use _ML canvas_ to describe ML systems, but _PerSpecML_ covers more elements, and helps analyze their relationships" Overall, the experienced practitioners were impressed with the novice practitioners' efforts and saw _PerSpecML v2_ as a valuable tool for fostering collaboration and understanding between different skill levels within the team. #### RQ3. What are the limitations and opportunities for improvement of _PerSpecML v2_? The open-ended responses in the follow-up questionnaire provided valuable insights into the limitations and opportunities for improvement of _PerSpecML v2_. For instance, P7 suggested adding a concern related to the _financial cost_ associated with the infrastructure that is required to operate an ML-enabled system, while P3 recommended paying attention to the _versioning of libraries_. "Based on my experience, ML systems can be expensive to maintain. Even large companies should carefully consider the costs of maintaining ML systems before implementing them. I would include this concern for sure" "It is important to consider the versioning of the libraries that are typically used in the development of ML-enabled systems. On several occasions I have seen my teammates in trouble, for example, when the Python version is not compatible with the TensorFlow version. If there is a proper version management this could be avoided" Moreover, P13 suggested complementing the model perspective with the phenomenon that occurs when the performance of ML models decreases over time, and that both data scientists and customers typically pass up. "Requirements specifications captures what the system is supposed to do, right? ML models tend to degrade over time due to several factors such as environmental and data changes. This behavior is typically not considered, therefore, it should be specified" On the other hand, P12 added another interesting opportunity for improvement: classifying the concerns by importance to better cope with the number of concerns to be analyzed. "When analyzing the diagram I see that the number of concerns is considerable. That's not a bad, in fact, it shows everything to think when designing ML systems. For this reason, I think it would be interesting to classify each concern by its importance. This would somehow prioritize the specification process" Finally, P14 mentioned the importance of automating _PerSpecML v2_: "It would be good to automate the approach by decreasing human involvement in the execution of _PerSpecML_ that are prone to errors. It is a matter of practicality. In short, you can automate the _PerSpecML_' logical flow" Overall, the feedback indicated that _PerSpecML v2_ had potential for enhancement, and practitioners were eager to see future updates and features that could further elevate the tool's usability and effectiveness. 7.4 RQ4. To what extent do the practitioners perceive _PerSpecML v2_ as easy to use, useful and usable in the future? Based on the TAM questionnaire that included four-point Likert scale ratings, we found that practitioners indicated a high level of acceptance and positive perception of _PerSpecML v2_. The summary of the responses is shown in Fig. 13. 
The majority of participants rated _PerSpecML v2_ as easy to use, with a significant portion (12 out of 14) giving it a rating of 4 (strongly agree). The documentation, intuitive interface and clear instructions provided by _PerSpecML v2_ (improvements that came up in the static validation) contributed to its perceived ease of use, making it accessible and user-friendly for both novice and experienced practitioners. However, one participant expressed partial disagreement with the statement on ease of use. This response came from P14, an experienced data scientist who mentioned suggestions for improvements on this topic in the previous question. Additionally, the practitioners found _PerSpecML v2_ to be highly useful in the specification process. Excluding one who expressed partial agreement, all the participants gave it a rating of 4 for usefulness (strongly agree). Indeed, the discussions and the outputs of the workshop sessions showed that _PerSpecML v2_ was especially valuable in guiding practitioners through the specification process and enhancing the overall clarity of the specifications. Furthermore, the practitioners showed positive attitudes towards _PerSpecML v2_'s intended use. The majority of respondents (10 out of 14) expressed that they would be willing to use _PerSpecML v2_ in future ML projects, indicating the approach's potential to become an essential part of their workflow for specifying ML-enabled systems. Overall, the questionnaire results demonstrated a strong acceptance and positive perception of _PerSpecML v2_'s ease of use, usefulness, and future usability among the practitioners. When comparing these results with the static validation, we saw that the perception of ease of use improved considerably, indicating that the improvements from that evaluation had an effect.

Figure 13: Frequencies of the TAM constructs for the dynamic validation in industry

At the end of this validation, we decided to consider the feedback of the practitioners of the Americanas company. In the following, we outline what was incorporated into _PerSpecML v2_ from this dynamic validation in industry, which led to the final version of _PerSpecML_.

1. We added '**financial cost**' as a new concern of the infrastructure perspective, since ML systems typically demand implementing several services that impact the project budget
2. We added '**versioning**' as a new concern of the model perspective, since this is essential for reproducibility, compatibility, and long-term maintainability of ML models
3. We added '**performance degradation**' as a new concern of the model perspective, since it can lead to inaccurate predictions, which can cause problems for businesses and organizations (a small illustrative sketch of this kind of check is shown after this list)
4. Based on a meta-review of the validations, we included '**education & training**' in the user experience perspective, and '**anonymization**' in the data perspective. The first new concern will help users have a clear understanding of the ML model's capabilities and potential inaccuracies, ensuring the system's credibility and user satisfaction, and the second one will help to protect sensitive data when required while still maintaining the utility of the data for ML purposes
5. We refined the _PerSpecML v2_ logical flow to explicitly classify the relevance of the concerns as desirable, important or essential. This could help to prioritize the requirements of ML-enabled systems
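To make the newly added 'performance degradation' concern (and the earlier 'monitorability' concern) more concrete, the minimal sketch below shows one common way to flag drift between training-time data and recent production data using the Population Stability Index. The threshold, feature values, and alerting hook are illustrative assumptions made for this document, not part of _PerSpecML_ itself.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10, eps=1e-6):
    """Rough drift signal between a training-time sample and recent production data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    p, _ = np.histogram(expected, bins=edges)
    q, _ = np.histogram(actual, bins=edges)
    p = p / max(p.sum(), 1) + eps  # proportions, with a small epsilon to avoid log(0)
    q = q / max(q.sum(), 1) + eps
    return float(np.sum((p - q) * np.log(p / q)))

# Illustrative data only: one feature's distribution at training time vs. in production.
train_feature = np.random.normal(loc=0.0, scale=1.0, size=5000)
prod_feature = np.random.normal(loc=0.4, scale=1.2, size=5000)  # simulated drift

psi = population_stability_index(train_feature, prod_feature)
if psi > 0.2:  # commonly cited alert threshold, but an assumption here
    print(f"ALERT: possible data drift, PSI={psi:.3f}")  # hook for real-time logging/alerts
```

In _PerSpecML_ terms, a check like this would sit under the infrastructure perspective's monitorability concern and would feed the model perspective's performance degradation concern.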
## 8 Threats to Validity

Assessing the validity of study results is particularly important for ensuring the accuracy, reliability, and generalization of findings. In this study, we empirically evaluated _PerSpecML_ by analyzing human factors, such as practitioners' perceptions and experiences. In the following, we critically examine potential limitations and challenges that could impact the trustworthiness and applicability of our research outcomes. To this end, we followed the categories suggested by Wohlin _et al._ [50].

**Construct validity:** For our quantitative and qualitative analyses, we used a mix of data collection methods, such as the TAM questionnaire, focus groups, and interviews. These choices were based on the well-established theoretical foundation of such methods. For instance, the TAM model has been widely used in technology acceptance research [44], and its questions were carefully designed to measure specific constructs related to the users' attitudes and intentions towards adopting our approach.

**Internal validity:** In the static validation, the practitioners' familiarity with the ML projects that were retroactively specified may have influenced their perception and performance during the validation process, leading to potential bias in the results. To mitigate this threat, we decided to retroactively specify the ML projects with the support of the product owner of each project, but without involving the practitioners. In this case, we wanted to take advantage of this situation since, by knowing the ML projects, the practitioners could more easily evaluate the resulting specifications, _e.g._, whether important aspects were missing.

**External validity:** We are aware that the generalization of the findings from the academic and static validation to real-world industrial scenarios may be limited. For instance, the toy scenario used in the academic setting and the specifications built retroactively in the static validation may not fully capture the complexity and challenges faced in actual industrial projects. Our intention with these artifacts was to use them to iteratively improve _PerSpecML_ until it was mature and could be evaluated in a more realistic setting. Regarding subject representativeness, we believe that the validation conducted in academia with students, and in industry with novice and experienced practitioners, constitutes a diverse setting that allowed for the examination of _PerSpecML_ across different scenarios, thereby strengthening the generalization of the findings.

**Conclusion validity:** During the data collection and analysis procedures of the three evaluations, we used a single researcher for open coding. To mitigate this threat, we peer-reviewed the list of codes attached to the transcriptions, and validated our findings with the participants of the academic, static and dynamic validations. Therefore, as suggested by [29], we presented our conclusions to the involved participants to validate their agreement. Moreover, triangulating both qualitative and quantitative data helped provide a more robust understanding of _PerSpecML_'s usability and effectiveness, supporting well-informed conclusions.

## 9 Discussion

In this section, we reflect on the outcomes of the validations and how they contribute to the understanding and improvement of _PerSpecML_, our perspective-based approach for specifying ML-enabled systems.
We explore the broader implications of the findings, other areas of study, and how our approach can positively impact the development of ML-enabled systems. In terms of **rigor**, _PerSpecML_ is the result of a series of validations that were conducted in different contexts, each contributing valuable insights and refining our approach to meet the diverse needs of practitioners involved in ML projects. Through careful evaluations encompassing academia and industry, _PerSpecML_ has undergone iterative enhancements, ensuring its effectiveness and adaptability in guiding the specification of ML-enabled systems across various scenarios and project complexities. The combination of student validation, real-world discussions with experienced data scientists, and collaborative evaluations with both novice and experienced practitioners has culminated in a robust and user-friendly approach that empowers teams to collaboratively and comprehensively define ML-enabled systems from inception to completion. In terms of **scope and coverage**, _PerSpecML_ was designed with the underlying assumption that the problem to be solved can benefit from ML, which is not always the case. Guidance to assess this assumption is out of our scope. While the focus of _PerSpecML_ are requirements engineers, the specialists who provide a clear understanding of what needs to be built, other stakeholders such as project leaders can preside the application of _PerSpecML_. In addition, we are aware that not every ML-enabled system needs to address all the concerns we proposed and not every ML-enabled system needs to implement them to the same degree. Beyond qualities of ML components, of course, we also care about qualities of the system as a whole, including response time, safety, security, and usability. That is, traditional RE for the entire system and its non-ML components is just as important. Note that when considering the overall system, general quality characteristics of software products such as the ones mentioned in the ISO/IEC 25010 standard [24], should also be analyzed. In terms of **expected benefits**, the main purpose of _PerSpecML_ is to support the specification of ML-enabled systems by analyzing the ML perspective-based diagram and filling out the ML specification template. Nevertheless, we believe _PerSpecML_ may eventually be useful in various situations. First, to validate an already specified ML-enabled system. In this case, the concerns would be a reference since they came from diverse source of knowledge (literature review, practical experiences and an external industrial experience on building ML-enabled systems [22]). Second, _PerSpecML_ may help design ML-enabled systems, since it includes (i) different components, including functional and non-functional properties, (ii) how they interact with each other, (iii) how they are deployed, and (iv) how they contribute with business requirements. Third, _PerSpecML_ is applicable to the most common ML approaches from typical ML domains, such as classification or regression problems, to more complex domains, such as computer vision and natural language processing. In fact, in the validations we conducted, we used different type of ML domains. ## 10 Concluding Remarks In this paper we presented _PerSpecML_, a perspective-based approach for specifying ML-enabled systems, designed to identify which attributes, including ML and non-ML, are important to contribute to the overall system's quality. 
The approach empowers requirements engineers to analyze, with the support of business owners, domain experts, designers, software and ML engineers, and data scientists, 59 concerns related to typical tasks that such practitioners face, grouping them into five perspectives: system objectives, user experience, infrastructure, model, and data. We introduced two main artifacts of _PerSpecML_: (i) the perspective-based ML tasks and concern diagram that provides a holistic view of ML-enabled systems, and (ii) its corresponding specification template that provides a standardized format for documenting and organizing the applicable concerns. Together, these artifacts serve to guide practitioners in collaboratively and comprehensively designing ML-enabled systems, enhancing their clarity, exploring trade-offs between conflicting requirements, uncovering hidden or overlooked requirements, and improving decision-making. The creation of _PerSpecML_ involved a series of validations conducted in diverse contexts, encompassing both academic and real-world scenarios as suggested in [20] for scaling proposals up to practice. The evaluation process began with a validation in academia, where students from two courses of SE for data science participated in specifying an ML-enabled system for a toy problem. This initial validation mainly showcased the promise of the approach and its potential for improvement in terms of ease of use. The static validation in an industry setting involved discussions with practitioners of a R&D initiative, analyzing specifications retroactively for two ready-made ML projects. This validation highlighted _PerSpecML_'s role as a roadmap for identifying key components that could be missed without using the approach, but also identified opportunities for improvements related to usability. Lastly, the dynamic validation engaged both novice and experienced practitioners of a Brazilian large e-commerce company, who specified two real ML-enabled systems from scratch using _PerSpecML_. The feedback from previous validations allowed the practitioners to focus on improvements related to the completeness of the concerns and how to use the approach. As a result of the diverse evaluations and continuous improvements, _PerSpecML_ stands as a promising approach, poised to positively impact the specification of ML-enabled systems. While the validations of _PerSpecML_ have yielded promising results and provided valuable insights, there remain several avenues for future work and enhancements to further enrich the approach and its applications in the field. For instance, investigating ways to automatically generate detailed documentation from the specifications provided in _PerSpecML_ artifacts could significantly streamline project management and maintainability. This would further bridge the gap between specification and implementation phases. In addition, conducting other studies and soliciting continuous feedback from practitioners who actively use _PerSpecML_ in real projects would offer valuable insights into its long-term benefits. By last, given the potentially conflicting nature of the concerns highlighted in _PerSpecML_, delving into the study of trade-offs becomes even more promising, as it offers a pathway to address the complex particularities of ML-enabled systems. ## Acknowledgment We would like to thank the employees of Loggi, of the ExACTa Initiative at PUC-Rio and of Americans S.A. Thanks also for the financial support of the Brazilian CAPES and CNPq agencies (grant 312827/2020-2).
2309.09775
ArxNet Model and Data: Building Social Networks from Image Archives
A corresponding explosion in digital images has accompanied the rapid adoption of mobile technology around the world. People and their activities are routinely captured in digital image and video files. By their very nature, these images and videos often portray social and professional connections. Individuals in the same picture are often connected in some meaningful way. Our research seeks to identify and model social connections found in images using modern face detection technology and social network analysis. The proposed methods are then demonstrated on the public image repository associated with the 2022 Emmy's Award Presentation.
Haley Seaward, Jasmine Talley, David Beskow
2023-09-18T13:57:24Z
http://arxiv.org/abs/2309.09775v1
# ArxNet Model and Data: Building Social Networks from Image Archives ###### Abstract A corresponding explosion in digital images has accompanied the rapid adoption of mobile technology around the world. People and their activities are routinely captured in digital image and video files. By their very nature, these images and videos often portray social and professional connections. Individuals in the same picture are often connected in some meaningful way. Our research seeks to identify and model social connections found in images using modern face detection technology and social network analysis. The proposed methods are then demonstrated on the public image repository associated with the 2022 Emmy's Award Presentation. Keywords: face detection, network science, social network

## 1 Introduction and Background

The increased use of photography in the 20th century led to the phrase "a picture is worth a thousand words," first recorded in use circa 1911. Images contain a wealth of information. Images of people portray emotion, personality, economic status, and other information about the people and events that are etched into pixels. Pictures also record human relationships. Images that contain more than one person often indicate some level of connection or relationship among the individuals captured in a still photo or in moving images. These could be family, business, friendship, acquaintance, or other types of social ties. These connections can be captured if we have a method to identify the individuals in the image. This capability came in the form of Face Detection. While face detection research had an early start in US intelligence organizations in the 1960s [11], it became usable in the late 1990s, giving scientists a method to identify unique individuals in images. With face detection technology in hand, some past research has focused on building social networks from a family picture archive [16] and from movies [14]. Our research will focus on building these networks on event data using modern open-source facial detection software. Additionally, our algorithms become computationally scalable by incorporating efficient search algorithms. This algorithm is demonstrated on a specific public event, the 2022 Emmy's Awards held on 12 September 2022. The algorithm as well as the face detection embeddings and resulting social network will be made public once this paper is published.

## 2 Previous Work

Face detection is one of the most studied aspects of computer vision. Face detection is used for a variety of different use cases and is the first step in most human-computer and human-robot interaction [15]. Computer vision-based face detection began in the 1960s [4] but wasn't practical on normal everyday photos (photos "in the wild") until the Viola-Jones algorithm using boosting methods was developed in the late 1990s and early 2000s [13]. Face recognition involves several steps, each of which can be addressed by a number of different algorithms. Facial recognition generally involves the following steps:

1. Face detection (finding and segmenting the face(s) in the image)
2. Face warping (given that faces can be at an infinite number of angles, this step identifies facial landmarks and 'warps' the head so that the eyes, nose, and mouth are centered)
3. Encode the faces (create a mathematical representation of the face)
4. Compare faces (recognize the person by comparing encodings)

With the growth of mobile technology today, image archives can easily contain millions of images.
Using a brute force method for searching face embeddings becomes computationally intractable, as we demonstrate below. Several methods have been developed to improve search. These include the k-dimensional tree (KD-tree) [3], which is effective for low-dimensional embeddings. Ball-tree [12] and Locality Sensitive Hashing (LSH) [7] both work effectively at higher dimensions. Other methods include Approximate Nearest Neighbors (ANN) [7], which sacrifices some accuracy for speed [2]. We used the CPU version of FAISS [8] and explored the use of both the default Flat L2 Search (also a brute-force method) and the Inverted File with Product Quantization (IVFPQ) indexing. The inverted file indexing creates inverted lists where each list contains vectors that are close to a centroid and are efficient to search. This also uses product quantization as a compression technique that reduces the size of the index in memory, increasing the allowable index size. Relatively few authors have looked at building relationship networks from images. In 2003, Zhang et al. used a Bayesian framework to annotate faces and create a network from a family photo album by estimating face similarity, working with missing features, and generating names for unlabeled faces by comparing their similarities to a list of labeled faces [16]. Later, Weng, Chu, and Wu created 'RoleNet', an algorithm that would automatically identify movie scenes and create connections between the actors/characters that were represented in that scene [14]. Zhang, Luo, and Loy expanded on this to describe interpersonal relationships from the facial expressions of face images in the wild [17]. This research was further extended with network clustering in 2020 [10]. We used the Python face-recognition package created by Adam Geitgey to conduct facial recognition [6]. Our pipeline used Histogram of Oriented Gradients (HOG) [5] for face detection, face landmark orientation for face warping [9], and OpenFace [1] for facial encoding. We then used Euclidean distance for face recognition given the face embedding. Our research aims to build a relationship network of faces from a collection of photos from a specific event. From this relationship network, network science techniques such as centrality and clustering are used to understand the strength of the resulting relationship network.

## 3 Data

Our research uses the pictures from the Emmy Award Show hosted in downtown Los Angeles and held on Sept 12, 2022. These images were publicly available on Getty Images ([https://www.gettyimages.com/](https://www.gettyimages.com/)). There are a total of 2,828 images from this event with 1,072 unique faces. A sample of these images is displayed in Figure 1. The Emmy Awards provide a well-known event with known celebrity faces and known celebrity links or connections. These connections are often generated by co-starring in past or present television shows. The resulting social network will be released as open source once this paper is accepted/published.

## 4 Methods

Given an archive of photos associated with a specific time, place, and event, the ArxNet approach identifies unique faces in the images and then constructs a network of co-occurring faces. This approach is illustrated in Figure 2.
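Before the step-by-step description of the approach that follows, a minimal sketch of the embedding-extraction and comparison stage may help make the pipeline concrete. It uses the `face_recognition` package named above; the image file names and the helper function names are illustrative assumptions rather than the authors' code, and the 0.5 distance threshold is the value reported later in this entry.

```python
import face_recognition
import numpy as np

def extract_embeddings(image_path):
    """Detect faces with the HOG detector and return one 128-d embedding per face."""
    image = face_recognition.load_image_file(image_path)
    locations = face_recognition.face_locations(image, model="hog")
    return face_recognition.face_encodings(image, known_face_locations=locations)

def same_face(embedding_a, embedding_b, threshold=0.5):
    """Treat two embeddings as the same person if their Euclidean distance is below the threshold."""
    return np.linalg.norm(embedding_a - embedding_b) < threshold

# Hypothetical file names, for illustration only.
embeddings_by_image = {
    path: extract_embeddings(path)
    for path in ["emmy_001.jpg", "emmy_002.jpg"]
}
```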
Figure 1: Example Getty images from the 2022 Emmy Award Show

To conduct this approach, we 1) conduct face recognition to extract all face embeddings for each image in the archive, then 2) build an index for the unique faces that we find, 3) build the edge list for unique co-occurring faces, and finally 4) construct the graph from the edge list. The example relationship network in Figure 2 begins with a photo of Ben Stiller, his daughter, Ella Stiller, and Shawn Levy. The next photo includes both Ben Stiller and his daughter again and introduces Laura Linney. The third photo once again includes Ben Stiller and Ella Stiller, and introduces Martin Short. The relationship diagram then follows Martin Short to the next photo, which includes Selena Gomez, Steve Martin, and of course Martin Short. The last photo in this relationship diagram includes the same three faces, but then adds a large crowd in the background. The identifiable faces in the background of the last photo are then also included in the relationship network. This example relationship network is simply a digestible snapshot of the larger Gephi relationship diagram, which covers all the Getty Images from the 2022 Emmy Award Show.

Figure 2: Illustrating the ArxNet Approach

Face recognition was conducted with Adam Geitgey's pipeline [6]. We modified his Face Recognition Docker Image to add compatibility with a Jupyter environment. This pipeline uses Histogram of Oriented Gradients (HOG) [5] for face detection, face landmark orientation for face warping [9], and OpenFace [1] for facial encoding. At the end of this pipeline, each face is represented with an embedding of length 128 that we store in a data structure where each image name is associated with a list of the face embeddings found in it. Face recognition was conducted using the Euclidean distance between two face embeddings. If the distance between the embeddings was less than 0.5, then the embeddings were deemed to be the same face. While we conducted some exploration of the right distance threshold, we found the default setting of 0.5 to be adequate for our use case. Using the resulting data structure, we developed an algorithm that builds the face co-occurrence network. This basic algorithmic approach is provided in Algorithm 1. The algorithm takes each face in each picture and first searches an index to see if we have already seen the face before. If found, we retrieve the face index and associate it with the picture. If the face is not found, we add the face to our index. Once we have identified all of the faces in the image, we can associate them with a co-occurrence link. To do this, we identify all combinations of two faces from the set of faces in the picture. For each combination, we add a link or edge to an edge list.

```
initialize index
for each picture do
  for each face do
    search index
    if found then
      get key
    else
      add face to index
    endif
  endfor
  if faces in picture > 1 then
    calculate all combinations of 2
    for each combination do
      append to edgelist
    endfor
  endif
endfor
```
**Algorithm 1** Basic ArxNet Algorithmic Approach

For our initial algorithm for the Emmy Award data, we used a brute force search algorithm in base Python with time complexity of \(O(n^{2})\). For the Getty Images dataset this ran in approximately 40 seconds on a standard laptop. This method did not scale, as illustrated in Figure 3. In order to improve the search speed, we leveraged the FAISS package created by the Facebook Team [8].
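A compact Python rendering of Algorithm 1 is sketched below. It assumes a dictionary mapping each image name to its list of 128-dimensional face embeddings (as produced in the earlier sketch), uses the 0.5 Euclidean-distance threshold discussed above, and performs the brute-force linear scan that the FAISS indexes described next are meant to replace; all function and variable names are illustrative rather than taken from the authors' implementation.

```python
from itertools import combinations
import numpy as np

def build_cooccurrence_edges(embeddings_by_image, threshold=0.5):
    """Assign an id to each unique face and link faces that co-occur in the same image."""
    index = []      # representative embeddings; list position acts as the face id
    edgelist = []   # (face_id_a, face_id_b, image_name) tuples

    for image_name, embeddings in embeddings_by_image.items():
        face_ids = []
        for emb in embeddings:
            # Brute-force search of the index; this scan is what the FAISS indexes accelerate.
            distances = [np.linalg.norm(emb - known) for known in index]
            if distances and min(distances) < threshold:
                face_ids.append(int(np.argmin(distances)))
            else:
                index.append(emb)
                face_ids.append(len(index) - 1)

        # Every pair of distinct faces appearing in the same picture becomes an edge.
        for a, b in combinations(sorted(set(face_ids)), 2):
            edgelist.append((a, b, image_name))

    return index, edgelist
```

Feeding the resulting edge list into a graph library then yields the co-occurrence network analyzed in the Results section.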
We used the CPU version of FAISS and explored the use of both the default Flat L2 Search (also a brute-force method) and the Inverted File with Product Quantization (IVFPQ) indexing. The inverted file indexing creates inverted lists where each list contains vectors that are close to a centroid and are efficient to search. The increased speed of these indexes is clear in Figure 3. Using this method we were able to run the ArxNet Algorithm on 200K images in approximately 50 seconds.

Figure 3: Demonstrating computational time complexity of different search indexes

## 5 Results

The algorithm generates an edgelist and network representing co-occurring faces. The resulting network reveals that there are 1072 unique faces, of which 941 become nodes since they have relationships with at least one other face. The 941 nodes are connected by a total of 3726 edges, representing the Sept 12, 2022 Emmy Award show hosted in downtown Los Angeles. This network contains 88 unique communities, of which 55 only contain two nodes (or faces). The largest community contains 10% of the nodes (98/941 nodes). Another way to look at the diagram is by partitioning the diagram by edges and then specifically through the attribute 'images'. When the diagram is partitioned this way, the image with the largest number of edges is Figure 2. This image also comes from community 41 and is responsible for 2.07% of the total edges in the relationship network, or 77/3726 edges. Figure 5 depicts actress Zendaya accepting the Emmy for Outstanding Lead Actress in a Drama Series for the television series "Euphoria". The most likely reason this photo is responsible for so many edges is its location. There are numerous photos taken in this spot because this is the spot in which Emmy award winners address the audience after winning an award. Therefore, the people in the background of this photo are also in the background of many other photos taken in this same location. Although these people are in the background of numerous photos, it is still noteworthy that Zendaya is the fifth person in this photo. Out of all the photos with these people in the background, the photo with Zendaya is responsible for the most edges. This reveals that Zendaya, along with the four people in the background, shares the most relationships with other faces in other photos.

## 6 Conclusion and Future Work

In this paper, we have reviewed the previous work in the fields of face recognition, search, and building community networks from image archives. We have presented a technique that identifies faces via face embeddings, creates relationships, and then groups relationships into communities. We validated the Euclidean distance threshold used to determine whether two embeddings correspond to the same face. After validating the threshold, we built the relationship network. The network connected 941 attendees of the 2022 Emmy Award Show. The output also revealed the strongest communities and the single image that is responsible for the most edges in the relationship network. The limiting factor to this approach is that the algorithm creates relationships between people in the foreground and the background of the photos. This limitation is prevalent in the photo with the strongest edges. This photo has four people in the background and one person in the foreground. This skews the results because the algorithm gives equal weight to relationships that consist of both faces in the foreground and relationships that consist of both foreground and background faces.
It is possible that the people in the background and the foreground do have relationships, but it is not fair to assume that these relationships are as strong as those between people who all appear in the foreground. Social connections provide the foundation of social and organizational interactions and evolution. Mapping these connections through facial co-location in image archives can enable the study, understanding, and mapping of these critical connections.
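As a closing illustration for this entry, the two FAISS indexing options discussed in the Methods section can be sketched as follows for 128-dimensional face embeddings. The index parameters (number of inverted lists, sub-quantizers, and bits per code) and the random stand-in data are illustrative assumptions, not the settings or data used by the authors.

```python
import faiss
import numpy as np

d = 128                                                   # length of each face embedding
embeddings = np.random.rand(10000, d).astype("float32")   # stand-in for real encodings

# Exact brute-force search over L2 distance.
flat_index = faiss.IndexFlatL2(d)
flat_index.add(embeddings)

# Inverted file with product quantization: coarse centroids plus compressed codes.
nlist, m, nbits = 100, 16, 8                               # illustrative parameters
quantizer = faiss.IndexFlatL2(d)
ivfpq_index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)
ivfpq_index.train(embeddings)                              # learn centroids and codebooks
ivfpq_index.add(embeddings)

# Nearest neighbour of a query face under each index.
query = embeddings[:1]
dist_flat, ids_flat = flat_index.search(query, 1)
dist_pq, ids_pq = ivfpq_index.search(query, 1)
```

The flat index returns exact neighbours, while the IVFPQ index trades a small amount of accuracy for the large speedups reported in Figure 3.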
2309.04251
Toward Certifying Maps for Safe Registration-based Localization Under Adverse Conditions
In this paper, we propose a way to model the resilience of the Iterative Closest Point (ICP) algorithm in the presence of corrupted measurements. In the context of autonomous vehicles, certifying the safety of the localization process poses a significant challenge. As robots evolve in a complex world, various types of noise can impact the measurements. Conventionally, this noise has been assumed to be distributed according to a zero-mean Gaussian distribution. However, this assumption does not hold in numerous scenarios, including adverse weather conditions, occlusions caused by dynamic obstacles, or long-term changes in the map. In these cases, the measurements are instead affected by large and deterministic faults. This paper introduces a closed-form formula approximating the pose error resulting from an ICP algorithm when subjected to the most detrimental adverse measurements. Using this formula, we develop a metric to certify and pinpoint specific regions within the environment where the robot is more vulnerable to localization failures in the presence of faults in the measurements.
Johann Laconte, Daniil Lisus, Timothy D. Barfoot
2023-09-08T10:29:52Z
http://arxiv.org/abs/2309.04251v2
# Toward Certifying Maps for Safe Localization Under Adversarial Corruption ###### Abstract In this paper, we propose a way to model the resilience of the Iterative Closest Point (ICP) algorithm in the presence of corrupted measurements. In the context of autonomous vehicles, certifying the safety of the localization process poses a significant challenge. As robots evolve in a complex world, various types of noise can impact the measurements. Conventionally, this noise has been assumed to be distributed according to a zero-mean Gaussian distribution. However, this assumption does not hold in numerous scenarios, including adverse weather conditions, occlusions caused by dynamic obstacles, or long-term changes in the map. In these cases, the measurements are instead affected by a large, deterministic fault. This paper introduces a closed-form formula approximating the highest pose error caused by corrupted measurements using the ICP algorithm. Using this formula, we develop a metric to certify and pinpoint specific regions within the environment where the robot is more vulnerable to localization failures in the presence of faults in the measurements. ## I Introduction Reliable localization is a vital task for self-driving robots, as it plays a key role in ensuring their safety and effective operation. However, sensors are susceptible to making errors, necessitating the implementation of robust safeguards to mitigate potential risks. In this paper, we focus on the domain of range-based localization, a technique relying on range sensors such as lidar or radar. In addition to being inherently noisy, these sensors can also encounter large, deterministic errors that heavily deviate from the common Gaussian noise assumption, posing significant challenges to accurate localization. Measurement corruptions can be caused by diverse sources. For example, an occlusion may cause a portion of the environment to be completely blocked from the sensor's view, leading to missing or inaccurate range readings. Similarly, adverse weather conditions, such as heavy snowstorms or fog, can distort the sensor measurements in a nonrandom manner [1]. Consistent measurement errors can also arise from long-term changes in an outdated map, including the presence of new buildings or parked cars. These factors result in systematic and deterministic errors that are significantly different from the statistical properties of Gaussian noise. The ICP algorithm is established as one of the most popular approaches for estimating a robot's pose using range sensors [2]. Depending on the map against which ICP is trying to localize, a corruption of the pointcloud can be more or less impactful. Figure 1 depicts some examples from the self-driving application. In a large intersection, few structures are close enough to provide good constraints for ICP. As such, any faulty measurement on the structures would have a great impact on the localization output. On the contrary, smaller streets with lots of houses and other obstacles provide a much safer environment, as ICP can now rely on a variety of landmarks to localize. Even though ICP is frequently equipped with robust outlier rejection features, some faults can still be considered as inliers and impact the accuracy of the localization process. 
While not necessarily large enough to break the localization process, these inlier faults can degrade the localization accuracy to a degree that may easily result in a crash in autonomous driving scenarios. Our paper aims to quantify the map-dependent resilience of the ICP localization algorithm. Our contributions are 1) a closed-form formula for the worst possible error on the pose for a given number of corrupted points; 2) a visualization of the worst corruption by applying the faults to the measured pointcloud; and 3) a quantification of the resilience of ICP for a specific map against corrupted measurements. We provide a quantitative analysis of our framework, as well as the evaluation of both structured and unstructured environments, showing that our framework can pinpoint dangerous locations in the event of corrupted measurements.

Fig. 1: In the context of pose estimation, specific environments entail higher risks compared to others. For instance, a large intersection (left) offers limited landmarks for localization, whereas a suburban area (right) presents numerous houses and landmarks. Consequently, occlusions or map alterations at the intersection may lead to a significantly larger pose estimation error compared to the suburban area.

## II Related Work

Extensive research has been conducted in the field of aerospace engineering to address safety considerations. In this context, the majority of safety analyses are bound to the estimation of position using Global Navigation Satellite System (GNSS) measurements. The safety of this estimation is based on the Hazardously Misleading Information (HMI) metric, which looks at the probability that the position estimate is sufficiently erroneous to be considered hazardous while no fault detectors have been triggered. The detectors can take on various forms, but two are most prevalent: residuals and solution-separation methods [3, 4]. The residuals-based methods look at the measurement residuals coming from the estimation algorithm [5], such as the innovation in a Kalman filter [6]. In the event such residuals are higher than a threshold, a fault in the measurements is likely and an alarm is triggered. On the other hand, solution-separation-based methods try to isolate and discard a potentially faulty measurement from all measurements received at a given time; this method quickly becomes computationally expensive as every combination of measurements needs to be tested [7]. Recently, Arana _et al._[8] proposed to adapt these methods to robotics. They used the HMI framework to monitor the safety of a landmark-based localization pipeline. Their work was adapted to batch estimation by Hafez _et al._[9]. Finally, Chen _et al._[10] used this framework to propose a way to enhance the safety of maps by adding well-placed landmarks in the environment. All of these previous efforts aim at certifying that the system will be able to detect faults, which is fundamentally different from certifying the well-being of the system. Indeed, a very poor map can be classified as safe as long as it is easy for the system to detect that its estimated pose is not right. As such, we propose a novel method to directly monitor the reliability of the localization process, and not its capability to detect hazardous events. In this context, the notion of _localizability_ is the standard for range-based localization. Localizability is defined as the capability of the localization system to produce a good estimate of the robot's pose.
This information is paramount in under-constrained environments, such as tunnel-like surroundings [11], where the localization process is prone to yielding bad estimates. Nubert _et al._[12] proposed a way to learn the localizability in underground environments. Ebadi _et al._[13] developed a method to monitor the degeneracy of ICP throughout the localization process by looking at the condition of the measurement matrix after linearizing the system. Similarly, Zhang _et al._[14] proposed to model the safety of the localization in an environment using Fisher information. Finally, Aldera _et al._[15] proposed to train a classifier to automatically label odometry estimates as good or bad, showing that rejecting poor solutions leads to a better overall estimate. Carson _et al._[16] used a similar idea to classify the integrity in visual localization, removing inaccurate estimates. However, the localizability analysis does not encompass the possibility of corrupted measurements that can arise in numerous situations, such as adverse weather or occlusion from other dynamic obstacles. Indeed, the analysis relies solely on the measured pointcloud, without taking into account the possibility that large errors can add malicious information. We propose a way to not only examine the safety of the system under nominal conditions, but also its performance in situations where certain measurements may provide adversarial information to the robot. Within the area of compromised data, different works were conducted to explicitly assess the resistance of algorithms to corruption by generating, altering, or removing some measurements. Kong _et al._[17] designed a benchmark that directly takes into account the corruption of the data coming from diverse sources, such as the weather or sensor failures. This benchmark was used to extensively test the algorithms and their resilience to different types of noise. Xiang _et al._[18] proposed a method to generate and perturb pointclouds that lead a classification network to mislabel the scanned object. Cao _et al._[19] developed a method to craft 3D obstacles that were not detected by deep learning algorithms working with lidar data. Yi _et al._[20] proposed novel metrics for the safety of robotics algorithms, such as the robustness of a bad prior and the absence of updates. Delecki _et al._[21] designed a method to characterize failures of lidar-based perception systems in adverse weather conditions. Using physics-based disturbances and reinforcement learning, they were able to find high-likelihood failures with relatively small input disturbances. Finally, Yoshida _et al._[22] directly attacked a pointcloud to force the ICP algorithm to move to a desired target position instead of the ground truth. Our work is similar in spirit to these methods, but we propose a closed-form algebraic method to analyze the resilience of the ICP algorithm through corruption of the measurements. ## III Preliminaries First, we offer a brief illustration of the process to transform pointcloud alignment into an approximated linear problem. Next, we introduce the methodology for modeling corruption in conventional estimation problems. Using these formulations, we derive a closed-form formula representing the maximum possible error in a pose estimate. ### _ICP Formulation_ We briefly show a linear simplification of ICP. 
For one iteration, the measurement model is written as \[\mathbf{q}_{i}=\mathbf{R}(\mathbf{p}_{i}+\mathbf{w}_{i})+\mathbf{t}, \tag{1}\] where \(\mathbf{p}_{i}\) are the points in the sensor frame, \(\mathbf{q}_{i}\) are the points in the map frame, and \(\mathbf{w}_{i}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\) is the noise on the measured point \(\mathbf{p}_{i}\), \(\mathbf{I}\) being the identity matrix. The matrix \(\mathbf{R}\) and vector \(\mathbf{t}\), respectively, denote the rotation and translation components of the pose. Assuming the rotation \(\mathbf{R}\) between the scan and map pointclouds is small, we can use the small-angle approximation \(\mathbf{R}\approx\mathbf{I}+\mathbf{\phi}^{\wedge}\) and linearize the relation as \[\mathbf{q}_{i} \approx(\mathbf{I}+\mathbf{\phi}^{\wedge})\mathbf{p}_{i}+\mathbf{R}\mathbf{w}_{i}+\bm {t} \tag{2}\] \[=\mathbf{p}_{i}-\mathbf{p}_{i}^{\wedge}\mathbf{\phi}+\mathbf{R}\mathbf{w}_{i}+\mathbf{t}\] \[=\begin{bmatrix}\mathbf{I}&-\mathbf{p}_{i}^{\wedge}\end{bmatrix}\begin{bmatrix} \mathbf{t}\\ \mathbf{\phi}\end{bmatrix}+\mathbf{p}_{i}+\mathbf{w}_{i}^{\prime},\] where \((\cdot)^{\wedge}\) is a cross-product operator transforming the vector to a 3\(\times\)3 skew-symmetric matrix [23]. In the case of point-to-plane ICP with unit normals \(\mathbf{n}_{i}\) in the map frame, we project the measurement points \(\mathbf{p}_{i}\) onto the associated normals \(\mathbf{n}_{i}\). Rewriting (2) and stacking the \(N\) measurements into one vector \(\mathbf{y}\), we have the linear system \[\underbrace{\begin{bmatrix}\mathbf{n}_{N}^{T}(\mathbf{q}_{1}-\mathbf{p}_{1})\\ \vdots\\ \mathbf{n}_{N}^{T}(\mathbf{q}_{N}-\mathbf{p}_{N})\end{bmatrix}}_{\mathbf{y}}=\underbrace{ \begin{bmatrix}\mathbf{n}_{1}^{T}&-\mathbf{n}_{1}^{T}\mathbf{p}_{1}^{\wedge}\\ \vdots&\vdots\\ \mathbf{n}_{N}^{T}&-\mathbf{n}_{N}^{T}\mathbf{p}_{N}^{\wedge}\end{bmatrix}}_{\mathbf{A}} \underbrace{\begin{bmatrix}\mathbf{t}\\ \mathbf{\phi}\end{bmatrix}}_{\mathbf{x}}+\mathbf{w}, \tag{3}\] with \(\mathbf{w}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\). This formulation has two main assumptions: known data association and a noniterative approach. While these assumptions may appear restrictive, we demonstrate in Subsection V-A that they actually establish a suitably conservative approximation of the error of the ICP algorithm. ### _Problem Statement_ In robotics, many problems can be formed as a least-squares minimization. In particular, we are interested in standard linear measurement problems of the form \[\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{w}, \tag{4}\] where \(\mathbf{y}\in\mathbb{R}^{m}\) is the measurement, \(\mathbf{x}\in\mathbb{R}^{n}\) is the state to estimate, \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is the matrix linking the state to the measurement, and \(\mathbf{w}\in\mathbb{R}^{m}\) is random Gaussian noise with covariance \(\mathbf{\Sigma}\). This formulation assumes that the error on the measurement can be modelled as a zero-mean, Gaussian random variable. Perturbations resulting from increment weather or occlusions from other dynamic obstacles can contradict this assumption. 
Therefore, as is done in [9], we assume a measurement can be subject to both noise and a deterministic, possibly large, fault (corruption): \[\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{w}+\mathbf{Q}\mathbf{f}, \tag{5}\] where \(\mathbf{f}\in\mathbb{R}^{n_{f}}\) are the faults, and \(\mathbf{Q}\in\mathbb{R}^{m\times n_{f}}\) is a sparse matrix of zeros and ones applying the faults to the associated measurements. As such, a part of the measurement is subject to a possibly large fault that can hinder the estimation. In the context of linear estimation, the solution to a least-squares problem can be written as \[\hat{\mathbf{x}} =\operatorname*{arg\,min}_{\mathbf{x}}\sum_{i=1}^{m}\alpha_{i}\cdot (\mathbf{y}_{i}-\mathbf{A}_{i}\mathbf{x})^{T}\mathbf{\Sigma}_{i}^{-1}(\mathbf{y}_{i}-\mathbf{A}_{i} \mathbf{x}) \tag{6}\] \[=\operatorname*{arg\,min}_{\mathbf{x}}(\mathbf{y}-\mathbf{A}\mathbf{x})^{T}\mathbf{ \Sigma}^{-1}(\mathbf{y}-\mathbf{A}\mathbf{x}),\] where \(\mathbf{y}_{i}\) is the \(i^{\text{th}}\) measurement with its associated covariance \(\mathbf{\Sigma}_{i}\), and \(\mathbf{x}\) is the pose to estimate. Note that each measurement is weighted by a factor \(\alpha_{i}\), which is used to control the impact of each measurement on the estimate and is incorporated into the lifted matrix \(\mathbf{\Sigma}\). Robust filters have the task of discriminating between inlier and outlier measurements. Among them, the trimmed distance filter is one of the most popular [2]. Given an initial guess \(\mathbf{x}_{0}\), the trimmed distance filter removes any measurement that has a corresponding error that is larger than a fixed distance \(d\): \[\alpha_{i}=\begin{cases}1&\text{if }\left\lVert\mathbf{y}_{i}-\mathbf{A}_{i}\mathbf{x}_{0} \right\rVert_{\infty}\leq d,\\ 0&\text{otherwise.}\end{cases} \tag{7}\] As such, any fault trying to corrupt a system making use of a trimmed distance filter has to be small enough to be deemed an inlier. In the case of systems of the form defined in (6), the solution is given by \[\hat{\mathbf{x}}=\mathbf{H}\mathbf{y},\quad\text{with }\mathbf{H}=\left(\mathbf{A}^{T}\mathbf{ \Sigma}^{-1}\mathbf{A}\right)^{-1}\mathbf{A}^{T}\mathbf{\Sigma}^{-1}. \tag{8}\] The corresponding state-estimation error can be computed as \[\mathbf{e} =\hat{\mathbf{x}}-\mathbf{x} \tag{9}\] \[=\mathbf{H}\mathbf{y}-\mathbf{x}\] \[=\mathbf{H}\left(\mathbf{A}\mathbf{x}+\mathbf{w}+\mathbf{Q}\mathbf{f}\right)-\mathbf{x}\] \[=\mathbf{H}\left(\mathbf{w}+\mathbf{Q}\mathbf{f}\right).\] The faults \(\mathbf{Q}\mathbf{f}\), unlike the noise \(\mathbf{w}\), are not zero-mean random variables, but are unknown, possibly large, deterministic quantities. As such, the faults generate a bias in the estimation that could be harmful. In the following, we provide a method to compute the worst error that can occur given corrupted measurements from a defined set. ### _Safety Metric_ Aiming at certifying a localization algorithm, we define a safety metric as the error on the pose being below a certain threshold: \[|e_{j}|=|\mathbf{g}_{j}^{T}(\hat{\mathbf{x}}-\mathbf{x})|\leq r_{j}, \tag{10}\] where \(\hat{\mathbf{x}}\) is the estimated state, \(\mathbf{x}\) is the ground truth, \(\mathbf{g}_{j}^{T}\) is a row matrix that extracts the \(j^{\text{th}}\) coordinate from the state, and \(r_{j}\) is the error-specific threshold. As such, an estimate is considered safe if each of its components has an error below a certain threshold \(r_{j}\). 
Note that we choose to certify each component of the estimated state independently instead of looking at the \(L_{2}\) norm of the error. This is particularly useful in self-driving applications where, for instance, the lateral error is often more important than the longitudinal one. Using this definition, we seek to compute the probability that a pose estimate is safe: \[p\left(|e_{j}|\leq r_{j}\right)\geq p_{\text{safe}}. \tag{11}\] In other words, a given estimate is said to be certified if its probability to be in the safe zone of radius \(r_{j}\) is above \(p_{\text{safe}}\). In the presence of faults, we are interested in the maximum number of faulted measurements that can happen at the same time before (11) becomes false. This quantity is defined as the _resilience_ of the system. ## IV Corrupted Measurements In this section, we provide a closed-form formula for the worst pose estimate given corrupted measurements from a defined set. This closed form is used to compute the probability that a pose estimate is hazardous, as defined in (11). Then, we define a metric to certify the resilience of ICP in different scenarios. Finally, we show how to link the faults back to a meaningful perturbation in the measured pointcloud. ### _Worst Pose Estimate_ This section proposes a closed-form formula to compute the probability of a pose estimate being hazardous, given corrupted measurements from a defined set. First, we define the constraint that the faults need to satisfy so that they are not trimmed by the outlier filter. As defined in (5), the measurement \(\mathbf{y}\) contains both a probabilistic zero-mean noise \(\mathbf{w}\) and deterministic faults from a defined set \(\mathbf{Qf}\). Faults that maximize the pose error will be such that they are considered inliers according to (7), but are otherwise maximally detrimental to the accuracy of the pose estimate. Using (5) and (7), assuming the initial guess is close to the ground truth, we have the following constraint on the faults: \[\begin{split}\left\|\mathbf{y}-\mathbf{A}\mathbf{x}_{0}\right\|_{\infty}& \leq d\\ \Leftrightarrow\left\|\mathbf{w}+\mathbf{Qf}\right\|_{\infty}& \leq d,\end{split} \tag{12}\] meaning that all errors on the measurements are below the outlier detection threshold \(d\). However, this constraint acts on the whole measurement vector and not only on the corrupted subset. As such, we reduce the constraint to only act on the subset of faulted measurements, as \[\left\|\mathbf{Q}^{T}\mathbf{w}+\mathbf{f}\right\|_{\infty}\leq d, \tag{13}\] where \(\mathbf{Q}^{T}\in\mathbb{R}^{n_{f}\times m}\) extracts the noise components of the faulted measurements from the noise vector \(\mathbf{w}\in\mathbb{R}^{m}\). As such, the constraint (13) forces the faults to be small enough that the faulted measurements are still considered inliers. Using this constraint, we now seek to find the worst pose error that a defined set of faulted measurements could cause. 
Following (9) and (10), the maximum error that a set of faulted measurements can induce on the pose while being undetected by the outlier detector is defined by \[\max_{\left\|\mathbf{Q}^{T}\mathbf{w}+\mathbf{f}\right\|_{\infty}\leq d}\left|e_{j} \right|=\max_{\left\|\mathbf{Q}^{T}\mathbf{w}+\mathbf{f}\right\|_{\infty}\leq d}\left|\bm {g}_{j}^{T}\mathbf{H}\left(\mathbf{w}+\mathbf{Qf}\right)\right| \tag{14}\] \[=\max_{\left\|\mathbf{Q}^{T}\mathbf{w}+\mathbf{f}\right\|_{\infty}\leq d} \left|\mathbf{g}_{j}^{T}\mathbf{H}\bar{\mathbf{Q}}\mathbf{w}+\mathbf{g}_{j}^{T}\mathbf{H}\mathbf{Q}\left( \mathbf{Q}^{T}\mathbf{w}+\mathbf{f}\right)\right|,\] where \(\bar{\mathbf{Q}}=\mathbf{I}-\mathbf{QQ}^{T}\). In order to find a closed form, we recall that for all \(a\in\mathbb{R}\) and \(x\in\left[-m,m\right]\) we have \[\max_{x\in\left[-m,m\right]}\left|a+x\right|=\max\left\{\left|a\pm m\right| \right\}. \tag{15}\] As such, using (15), the worst error on the pose (14) can be rewritten as \[\max_{\left\|\mathbf{Q}^{T}\mathbf{w}+\mathbf{f}\right\|_{\infty}\leq d}\left|e_{j} \right|=\max\{\left|\mathbf{g}_{j}^{T}\mathbf{H}\bar{\mathbf{Q}}\mathbf{w}\pm m\right|\}, \tag{16}\] with \[\begin{split} m&=\max_{\left\|\mathbf{Q}^{T}\mathbf{w}+\mathbf{f }\right\|_{\infty}\leq d}\mathbf{g}_{j}^{T}\mathbf{H}\mathbf{Q}\left(\mathbf{Q}^{T}\mathbf{w}+\mathbf{ f}\right)\\ &=d\left\|\mathbf{g}_{j}^{T}\mathbf{H}\mathbf{Q}\right\|_{\infty},\end{split} \tag{17}\] where we used the fact that for any vectors \(\mathbf{a},\mathbf{b}\), we have \[\max_{\left\|\mathbf{b}\right\|_{\infty}\leq 1}\mathbf{a}^{T}\mathbf{b}=\sum_{k}\left|a_{k} \right|=\left\|\mathbf{a}^{T}\right\|_{\infty}, \tag{18}\] with \(\mathbf{a}=\left[a_{1}\cdots a_{N}\right]^{T}\). Plugging this back into (16), we finally find that the worst error on the pose is defined by \[\begin{split}\max_{\left\|\mathbf{Q}^{T}\mathbf{w}+\mathbf{f}\right\|_{ \infty}\leq d}\left|e_{j}\right|&=\max\left\{\left|\mathbf{g}_{j}^{T} \mathbf{H}\bar{\mathbf{Q}}\mathbf{w}\pm d\left\|\mathbf{g}_{j}^{T}\mathbf{H}\mathbf{Q}\right\|_{\infty }\right|\right\}\\ &=\mathbf{g}_{j}^{T}\mathbf{H}\bar{\mathbf{Q}}\mathbf{w}+s\cdot d\left\|\mathbf{g}_{j} ^{T}\mathbf{H}\mathbf{Q}\right\|_{\infty},\end{split} \tag{19}\] with \(s=\operatorname{sgn}\left(\mathbf{g}_{j}^{T}\mathbf{H}\bar{\mathbf{Q}}\mathbf{w}\right)\). Therefore, for a given noise and set of corrupted measurements, the maximum error on the pose can be found in closed-form. Note, this equation yields the maximum error on the pose and not the fault vector that induces it. In order to find the probability distribution of the worst error \(\left|e_{j}\right|\), we use the notation \(v=\mathbf{g}_{j}^{T}\mathbf{H}\bar{\mathbf{Q}}\mathbf{w}\) and do the following manipulation: \[\begin{split} p\left(\left|e_{j}\right|>r_{j}\right)& =p\left(e_{j}>r_{j},v\geq 0\right)+p\left(e_{j}<-r_{j},v<0\right)\\ &=2p\left(e_{j}>r_{j},v\geq 0\right),\end{split} \tag{20}\] where we group the \(v\geq 0\) and \(v<0\) cases as \(v\) is a linear function of the noise \(\mathbf{w}\) and thus a zero-mean Gaussian random variable. 
Expanding the error \(\left|e_{j}\right|\), we find \[\begin{split} p\left(\left|e_{j}\right|>r_{j}\right)& =2\,p\left(v+d\left\|\mathbf{g}_{j}^{T}\mathbf{H}\mathbf{Q}\right\|_{\infty}>r_{j},v \geq 0\right)\\ &=2\,p\left(v>\max\left\{r_{j}-d\left\|\mathbf{g}_{j}^{T}\mathbf{H}\mathbf{Q} \right\|_{\infty},0\right\}\right)\\ &=\min\left\{2\left(1-\Phi_{\mu,\sigma}(r_{j})\right),1\right\}, \end{split} \tag{21}\] where \(\Phi_{\mu,\sigma}(\cdot)\) is the standard cumulative distribution function of the normal distribution, with \[\begin{split}\mu&=d\left\|\mathbf{g}_{j}^{T}\mathbf{H}\mathbf{Q} \right\|_{\infty}\\ \sigma^{2}&=\mathbf{g}_{j}^{T}\mathbf{H}\bar{\mathbf{Q}}\mathbf{\Sigma }_{\mathbf{w}}\left(\mathbf{g}_{j}^{T}\mathbf{H}\bar{\mathbf{Q}}\right)^{T}.\end{split} \tag{22}\] In conclusion, this section provides a way to compute efficiently the probability of having a hazardous estimate of the pose given a malicious set of corrupted measurements, using (21). We show in the next section how to certify a map using this metric, before demonstrating how the faults on the point-to-plane measurements can be linked back to the measured lidar points. ### _Map Certification_ In the preceding section, we demonstrated the ability to determine efficiently the maximum localization error for a specific set of corrupted measurements. Nevertheless, due to the large number of points in a lidar scan, exhaustively testing every combination of corrupted points becomes impractical. To address this challenge, we propose to model the lidar field of view as a collection of angular sectors. All points within each sector are then either corrupted or unaltered. Although this assumption may appear restrictive, numerous events are inherently tied to a sector of corrupted points. For instance, cars parked on the side of the street will result in a coherent shift of the measured pointcloud compared to a map that was constructed without them present. Occlusions can also lead to a sector-wide pointcloud alteration, thus appearing as fully faulted sectors. Using this modeling approach, we define the resilience \(R\) at a specific position on the map as the maximum proportion of the lidar scan that can be corrupted before the validity of the certificate stated in (11) is compromised. ### _Fault Visualization_ Although computing the explicit faults is not required for the safety assessment, it can be useful for visualization and evaluation purposes. As the faults are applied on the measurement equation defined in (5), the corruption is not linked to the measured points but rather the projected distance between the lidar points and the map. As such, in this section, we show how to recover a corruption on the pointcloud from a set of faults originally applied to the point-to-plane measurements. First, we recall that the vector that maximizes (18) is given by \[\mathbf{b}=\operatorname{sgn}\mathbf{a}, \tag{23}\] where the sign function is extended to a vector, applying the sign function component-wise. As such, from (17) and (19), we can deduce that the fault vector \(\mathbf{f}\) that maximizes the error on the pose is \[\mathbf{f}=s\cdot d\cdot\operatorname{sgn}\left(\mathbf{g}_{j}^{T}\mathbf{H}\mathbf{Q}\right) ^{T}-\mathbf{Q}^{T}\mathbf{w}. \tag{24}\] Using (24), we are able to construct a fault vector on the point-to-plane measurements. However, this fault vector does not directly corrupt the measured points, which are 3D quantities, but rather the projected distances. 
To model point-wise corruption, we seek to find a perturbation \(\Delta\mathbf{p}_{k}\) for each measured point \(\mathbf{p}_{k}\) subject to corruption, with a perturbed point \(\mathbf{p}_{k}^{\prime}\) defined as \[\mathbf{p}_{k}^{\prime}=\mathbf{p}_{k}+\Delta\mathbf{p}_{k}. \tag{25}\] Rewriting (3) for one corrupted point, we have \[\mathbf{n}_{k}^{T}(\mathbf{q}_{k}-\mathbf{p}_{k}^{\prime}) =-\mathbf{n}_{k}^{T}\mathbf{p}_{k}^{\prime}\wedge\mathbf{\phi}+\mathbf{n}_{k}^{ T}\mathbf{t}+\mathbf{w}_{k}\] \[\Leftrightarrow \mathbf{n}_{k}^{T}(\mathbf{q}_{k}-\mathbf{p}_{k}) =-\mathbf{n}_{k}^{T}\mathbf{p}_{k}^{\wedge}\mathbf{\phi}+\mathbf{n}_{k}^{T}\mathbf{t}+ \mathbf{w}_{k} \tag{26}\] \[\qquad+\mathbf{n}_{k}^{T}\Delta\mathbf{p}_{k}-\mathbf{n}_{k}^{T}\Delta\mathbf{p} _{k}^{\wedge}\mathbf{\phi}.\] Therefore, the fault on the point-to-plane measurement \(f_{k}\) is linked to the corrupted point by the relation \[f_{k}=\mathbf{n}_{k}^{T}\Delta\mathbf{p}_{k}-\mathbf{n}_{k}^{T}\Delta\mathbf{p}_{k}^{\wedge} \mathbf{\phi}, \tag{27}\] from which we extract a viable solution \[\Delta\mathbf{p}_{k}=f_{k}\mathbf{n}_{k}, \tag{28}\] since \(\mathbf{n}_{k}^{T}\mathbf{n}_{k}^{\wedge}=\mathbf{0}\). Note that (27) possesses many solutions as we retrieve a 3D quantity from a 1D fault. From the perspective of our framework, all solutions are equally good, as all possible corruptions lead to the same corrupted point-to-plane measurements. However, (28) is particularly interesting as it is independent of the state variable \(\mathbf{\phi}\). As such, a fault on the point-to-plane measurement can be seen as a shift of the measured point along the associated map's normal. We use this fault modeling in Section V to show that the simplified ICP formulation used in the derivation of the framework is similar in behavior to a real ICP algorithm. As examples, Figure 2 depicts two submaps of the DARPA Subterranean challenge finals [24], where a sector of the lidar scan is corrupted in a way to maximize the error in the \(y\) component of the pose. These maps depict underground environments with clear features that ICP is using to localize. The inlier threshold distance has been set to \(d=1\,\mathrm{m}\). Using (24), we compute the worst faults on the \(y\) component for the given corrupted sector. The faults are then transformed back into a corruption on the pointcloud via (28). In the first example (top), the corrupted points shift and slightly rotate the pose estimate to achieve the worst \(y\) component estimate while evading an outlier detector. In the second example (bottom), the corruption takes a slightly more complex appearance. One might intuitively think that the worst pose estimate would come from shifting all the points in the same direction. However, our framework finds a more complex modification of the pointcloud that results in an even worse estimate. Indeed, a coherent shift of all points in a translation-only alignment problem would result in the worst estimate. However, as ICP has to estimate both the translation and the rotation components of the pose, shifting the whole wall in the same direction would make the ICP solution rotate instead of translate, as most of the pointcloud is left uncorrupted. Accordingly, the worst corruption modifies the pointcloud in such a way as to prevent rotation, resulting in a pure shift along the \(y\) axis. ## V Experiments First, we evaluate the quality of our assumptions in the context of a real, iterative ICP. 
Then, we show the applicability of our method on real-world maps using the Boreas [25] and Montmoreny Forest Wintertime [26] datasets. Fig. 2: Examples of faulted pointclouds from underground passages of the DARPA Subterranean challenge finals [24]. The faults try to corrupt the \(y\) component of the pose, with the corrupted pose estimated by ICP in red, and the ground truth pose in black. Black points correspond to the reference map. The lidar point cloud is illustrated in green, with a sector (shaded blue) being corrupted. The blue and red points correspond to the sector before and after corruption, with gray arrows showing the shift induced by the corruption. Points corresponding to the ceiling and floor are omitted for better visualization. ### _Quality of Approximations_ As is done in HMI-based safety analysis (e.g., [8, 9]), our method reduces ICP to a linear, noniterative problem. This framework assumes a known data association between the map and the scan, which is impossible to have in practice. We show in this section that our framework yields an approximate upper-bound to a real iterative ICP algorithm, in which data association must be solved at each iteration. To do this, we sample \(250\) submaps from the Boreas dataset [25], in which we simulate a lidar scan from real lidar maps by subsampling the pointcloud. In this evaluation, \(25\,\%\) of the lidar scan is corrupted and perturbed using our framework, as this value is the highest resilience recorded in the experiments presented in Subsection V-B. Then, we feed this corrupted pointcloud to a vanilla ICP and compare the produced error with the theoretical one given by (19). Vanilla ICP is equipped with a trimmed distance filter, set to the same value \(d\) as in our framework. Figure 3 depicts the signed difference between the theoretical errors predicted by our framework, and the real ICP errors generated from the corrupted scans, for different inlier threshold distances \(d\). Overall, the difference in error exhibits a linear increase as the threshold distance \(d\) increases. It can be seen that our framework yields higher errors compared to those obtained by the ICP algorithm in more than \(98\,\%\) of cases. In cases where our framework predicts a lower error than the real ICP algorithm produces, the magnitude of error underestimation is on the order of centimeters and milliradians. As shown in the next section, our framework provides an estimate that is tight enough to pinpoint safe and hazardous regions within the maps. Future work will focus on providing a provable, tight upper bound. For large inlier threshold distances, our framework overestimates the linear error up to \(80\,\mathrm{cm}\) in rare cases. We theorize this discrepancy can be attributed to the fact that our framework assumes known data association and optimizes the perturbations accordingly. As the inlier threshold distance increases, the corrupted points deviate further from their true associated points in the map. Consequently, the likelihood of incorrect data associations also rises. In some instances where wrong data association occurs, it unexpectedly favours the ICP algorithm, as the perturbations applied were not optimized for that particular association. As a result, an overall reduction in the pose error may occur. On the other hand, the difference in rotational errors remains small. Owing to the nature of the environment, the considered scans contain discernible features far away from the robot. 
This makes it harder to corrupt the rotational component as compared to the translational one. As such, assuming a known data association yields a reasonably conservative estimation on the error of ICP. This assumption is sensible as long as the environment features clear planes, as incorrectly associated points with the same plane will not affect the error terms. However, in the case of noisy normal estimates or unstructured objects, an incorrect data association can match points associated with different planes and thus returns a different error term. Such behavior is seen when our framework corrupts sectors containing bushes on the side of the road, or any other highly unstructured obstacles. These cases are responsible for the long tails of the distributions seen in Figure 3. ### _Map Certification_ To demonstrate the applicability of our framework for a given lidar map, we use the Boreas dataset [25] and certify the Glenshield trajectory. We sample poses along the trajectory and simulate a live scan by subsampling the map for each pose. The scan is then used to build the matrices in (3). For this experiment, we certify the map for both the longitudinal and lateral directions, ensuring that the probability of deviating from the true position by more than \(20\,\mathrm{cm}\) is less than \(1\,\%\). The outlier filter distance is set to \(d=30\,\mathrm{cm}\) and the noise \(\mathbf{w}\) on the lidar measurements is equal to \(\sigma=10\,\mathrm{cm}\). The lidar field of view is split into \(30\) sectors of approximately \(0.1\,\mathrm{rad}\). We compute the degree of resilience \(R\) for each pose and depict it in Figure 4. The vehicle starts in a parking lot before turning onto a main street (Figure 4.a). Then, the vehicle drives on smaller streets with better structural definition (Figure 4.b). Overall, the map is robust to an average corruption of around \(15\,\%\) of the pointcloud, with some exceptions. When turning onto the main road, the vehicle has to cross a large crossroad with almost no structure present in the pointcloud. As such, a small amount of faults is enough to push the estimate outside the safe bounds. While on the main street, few structures are close enough to help constrain ICP in the longitudinal direction, resulting in a drop of resilience to around \(12\,\%\). In contrast, once the vehicle is on smaller streets with a lot of structural definition, ICP becomes resilient to a greater amount of corruption. Even if \(20\,\%\) of the scan is corrupted, there is still enough structure left untouched in the pointcloud to constrain ICP. Note that the two pointclouds depicted in Figure 4 correspond the same places pictured in Figure 1. Fig. 3: Signed difference of ICP errors between our theoretical estimate and real ICP for different inlier threshold distances. The violins depict the overall distribution of the difference, whereas the white dots and black boxes stand for the median and quartiles. Our estimate typically yields larger errors compared to real ICP. We additionally analyze a trajectory of the Montmorery Forest Wintertime dataset [26]. This dataset was taken in subarctic environments in Northern Quebec, Canada, and consists of both semi-structured surroundings and unstructured trails in wintertime. Figure 5 depicts the resulting resilience analysis on run A of the dataset. The robot starts at the bottom of the map inside a garage. It then drives next to a building before accessing a ski trail surrounded by tall pine trees. 
Overall, the resilience to faults is on the same order as in urban environments, as long as the robot remains close to structures. Once the robot reaches the ski trails and loses sight of the building, the resilience drops. Intuitively, the resilience of ICP is high when the robot is inside the garage, as there are good, clear structures against which to match. As the robot leaves the garage, the only structured obstacle is one wall of the building, as all other surrounding obstacles are snowbanks. Since ICP relies heavily on one locally concentrated obstacle, a fault in this part of the map leads to a large error in the pose estimate. As noted in [26], such faulted measurements were in fact observed during the recording of the dataset: a truck parked next to the wall resulted in the measurements being offset by a small value, yielding the same type of corruption as the one theorized in this paper. Harsh weather can also create similar effects, such as snow accumulating next to a building after an overnight snowstorm. As the robot drives toward the buildings on the top part of the figure, the resilience increases. The ICP algorithm can indeed rely on a diverse source of information to estimate the pose. As the robot continues driving, enough buildings are visible in the distance to maintain reasonable resilience even after the garage becomes out of view. The resilience goes down once again when the robot enters the ski trails, as it again only sees one part of a building. Once the robot drives along the trails, the resilience tends to drop as it loses sight of the buildings. In conclusion, our resilience analysis is able to highlight dangerous locations where a small number of faults, coming from occlusion, noise, or changes in the environment, could drastically hinder the localization process and lead the robot to hazardous behaviors. These situations can either come from a clear lack of good features in the environment, such as in the case of the large crossroad, or result from ICP relying too much on a single environmental feature. This second case can happen in semi-structured environments, as seen in the Montmorency Forest Wintertime dataset analysis.

Fig. 4: Overall resilience of the Glenshield trajectory. The vehicle starts on the top right of the map, before driving on a large main road. It then leaves the road for smaller streets with better features. Left: overview of the trajectory, coloured by its resilience \(R\). Right: Example of submaps with the colours representing the \(z\) component of the pointcloud for better visualization.

Fig. 5: Path A of the Montmorency Forest Wintertime dataset, coloured by its resilience to corruption \(R\). The robot starts in a garage at the bottom of the map, proceeds to drive by several buildings, and finally enters a narrow ski trail surrounded by tall pine trees.

## VI Conclusion

In this paper, we present a novel way to analyze the resilience of the ICP algorithm. Resilience is defined as the maximum number of faults that can be injected into the measurements before the localization estimate becomes dangerous for the robot. We model faults as the most severe modifications to the measurements that can go undetected by an outlier filter. Through this framework, we verify the quality of maps in both structured and unstructured environments, and demonstrate that environments lacking distinct, evenly distributed structures are more susceptible to inaccurate estimates in the event of measurement corruption.
Future research will focus on addressing the assumptions made in this paper. We will account for the iterative nature of the ICP algorithm and explore a broader range of robust cost functions. Also, this paper assumes that the robot has a good initial guess. However, hazardous behavior can also occur during a sequence of bad ICP solutions, each driving the robot farther from the ground truth and feeding worse and worse initial guesses to the next iteration. As such, taking into consideration a possibly wrong initial guess will also help certify against a broader range of failures. Finally, future work will involve expanding the framework to include other types of sensor failures, such as scenarios where a portion of the lidar becomes completely unavailable. ## Acknowledgment We would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Ontario Research Fund: Research Excellence (ORF-RE) program for supporting this work. We also thank the Northern Robotics Laboratory (Norlab) for their help with the datasets.
2310.00154
Primal Dual Continual Learning: Balancing Stability and Plasticity through Adaptive Memory Allocation
Continual learning is inherently a constrained learning problem. The goal is to learn a predictor under a no-forgetting requirement. Although several prior studies formulate it as such, they do not solve the constrained problem explicitly. In this work, we show that it is both possible and beneficial to undertake the constrained optimization problem directly. To do this, we leverage recent results in constrained learning through Lagrangian duality. We focus on memory-based methods, where a small subset of samples from previous tasks can be stored in a replay buffer. In this setting, we analyze two versions of the continual learning problem: a coarse approach with constraints at the task level and a fine approach with constraints at the sample level. We show that dual variables indicate the sensitivity of the optimal value of the continual learning problem with respect to constraint perturbations. We then leverage this result to partition the buffer in the coarse approach, allocating more resources to harder tasks, and to populate the buffer in the fine approach, including only impactful samples. We derive a deviation bound on dual variables as sensitivity indicators, and empirically corroborate this result in diverse continual learning benchmarks. We also discuss the limitations of these methods with respect to the amount of memory available and the expressiveness of the parametrization.
Juan Elenter, Navid NaderiAlizadeh, Tara Javidi, Alejandro Ribeiro
2023-09-29T21:23:27Z
http://arxiv.org/abs/2310.00154v2
# Primal-Dual Continual Learning: Stability and Plasticity through Lagrange Multipliers ###### Abstract Continual learning is inherently a constrained learning problem. The goal is to learn a predictor under a _no-forgetting_ requirement. Although several prior studies formulate it as such, they do not solve the constrained problem explicitly. In this work, we show that it is both possible and beneficial to undertake the constrained optimization problem directly. To do this, we leverage recent results in constrained learning through Lagrangian duality. We focus on memory-based methods, where a small subset of samples from previous tasks can be stored in a replay buffer. In this setting, we analyze two versions of the continual learning problem: a coarse approach with constraints at the task level and a fine approach with constraints at the sample level. We show that dual variables indicate the sensitivity of the optimal value with respect to constraint perturbations. We then leverage this result to partition the buffer in the coarse approach, allocating more resources to harder tasks, and to populate the buffer in the fine approach, including only impactful samples. We derive sub-optimality bounds, and empirically corroborate our theoretical results in various continual learning benchmarks. We also discuss the limitations of these methods with respect to the amount of memory available and the number of constraints involved in the optimization problem. ## 1 Introduction In real-world settings, agents need to adapt to a dynamic stream of observations they receive from the environment. This has led to a plethora of research in _continual learning_, where the goal is to train agents to solve a set of diverse tasks presented sequentially (Thrun Mitchell, 1995). Since the capacity of machine learning models is limited, the challenge in continual learning is balancing the acquisition of new knowledge (plasticity) and the consolidation of previously integrated knowledge (stability). A potential consequence of poorly handling the so-called stability-plasticity dilemma is severe performance degradation in past tasks. Avoiding this phenomenon--referred to as _catastrophic forgetting_(McCloskey Cohen, 1989; French, 1999)--naturally leads to constrained optimization formulations, which have appeared extensively in the continual learning literature (Aljundi et al., 2019; Chaudhry et al., 2018; Lopez-Paz and Ranzato, 2017; Peng et al., 2023). Most approaches do not solve this constrained optimization problem explicitly. Instead, they use gradient projections (Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2018) or promote proximity in the parameter space (Wang et al., 2021; Kirkpatrick et al., 2017). This work shows that it is both possible and beneficial to undertake the constrained learning problem directly (_Contribution 1_). To do this, we leverage recent advances in constrained learning through Lagrangian duality (Chamon and Ribeiro, 2020) and build a framework that contemplates both task-level and sample-level forgetting. State-of-the-art continual learning methods tend to include replay buffers, in which agents store a small subset of the previously seen instances. These methods have become ubiquitous, since they generally outperform their memoryless counterparts (Masana et al., 2022; Zhou et al., 2023; De Lange et al., 2021). The _principled_ constrained learning framework proposed in this paper enables an adaptive and efficient management of the memory buffer. 
Specifically, we first show that Lagrangian dual variables resulting from the proposed primal-dual method capture the stability-plasticity trade-off, since they indicate the sensitivity of the optimal value with respect to constraint perturbations (_Contribution 2_). At the task level, we leverage this result to partition the buffer, allocating more resources to harder tasks; and at the sample level, we use it to populate the buffer, including only impactful samples (_Contribution 3_). These techniques give us a direct handle on the stability-plasticity trade-off incurred in the learning process. We showcase the benefits of the proposed method for several memory budgets in a diverse set of continual learning benchmarks, including image, audio, and medical datasets. We also study the sensitivity to the forgetting tolerance allowed and discuss the limitations of the proposed primal-dual method with respect to the number of constraints, the sampling of outliers, and the underestimation of task difficulties. ## 2 Continual Learning is a Constrained Learning Problem In continual learning, the goal is to learn a predictor that minimizes the expected risk over a set of _tasks_, \[f^{\star}=\operatorname*{arg\,min}_{f\in\mathcal{F}}\sum_{t=1}^{T}\mathbb{E} _{\mathfrak{D}_{t}}\left[\ell(f(x),y)\right],\] where \(T\) is the number of tasks, \(\mathfrak{D}_{t}\) is the data distribution associated to task \(t\) and \(\mathcal{F}\) is a functional space. The tasks and their corresponding data distributions are observed sequentially. That is, at time \(t\), data from previous tasks (i.e., \(\mathfrak{D}_{1},\cdots,\mathfrak{D}_{t-1}\)) and from future tasks (i.e., \(\mathfrak{D}_{t+1},\cdots,\mathfrak{D}_{T}\)) are not available. In this setting, the main issue that arises is catastrophic forgetting: if we sequentially fine-tune \(f\) on each incoming distribution, the performance on previous tasks could drop severely. A continual learner is one that is stable enough to retain acquired knowledge and malleable enough to gain new knowledge. If the no-forgetting requirement is enforced at the task level, we can formulate the continual learning problem as minimizing the statistical risk on the current task without harming the performance of the model on previous tasks, i.e., \[P_{t}^{\star}= \operatorname*{arg\,min}_{f\in\mathcal{F}} \mathbb{E}_{\mathfrak{D}_{t}}[\ell(f(x),y)],\] ( \[P_{t}\] ) s.t. \[\mathbb{E}_{\mathfrak{D}_{k}}[\ell(f(x),y)]\leq\epsilon_{k}, \quad\forall\,k\in\{1,\dots,t-1\},\] where \(\epsilon_{k}\in\mathbb{R}\) is the _forgetting tolerance_ of task \(k\), i.e., the worse average loss that is admissible in a certain task. In many cases, this is a _design_ requirement, and not a tunable parameter. For instance, in medical applications, \(\epsilon_{k}\) can be tied to regulatory constraints. If the upper bound is set to the unconstrained minimum (i.e., \(\epsilon_{k}=\min_{f\in\mathcal{F}}\mathbb{E}_{\mathfrak{D}_{k}}[\ell(f(x),y)]\)), then we are implementing an _ideal continual learner_(Peng et al., 2023). However, we do not have access to \(\mathfrak{D}_{k}\) for \(k\neq t\), but only to a _memory buffer_\(\mathcal{B}_{t}=\cup_{k=1}^{t-1}\mathcal{B}_{t}^{k}\), where \(\mathcal{B}_{t}^{k}\) denotes the subset of the buffer allocated to task \(k\) while observing task \(t\). When possible, we will obviate the dependence on the index \(t\) to ease the notation. In this setting, the main questions that arise are: (i) When is the constrained learning problem (\(P_{t}\)) solvable? 
(ii) How to solve it? (iii) How to partition the buffer \(\mathcal{B}\) across the different tasks? (iv) Which samples from each task should be stored in the buffer? This paper is structured as follows: in Section 3, we present the Lagrangian duality framework used to undertake the constrained learning problem. In Section 4, we turn our attention to the buffer partition strategy, and in Section 5 we discuss the dual variable-based approach to sample selection. ### Setting For continual learning to be justified, tasks need to be similar. The following assumptions characterize this similarity in terms of the distance between sets of optimal predictors. **Assumption 2.1**: _(Task Similarity): Let \(\mathcal{F}_{t}^{\star}=\{f\in\mathcal{F}:\mathbb{E}_{\mathfrak{D}_{t}}[\ell( f(x),y)]=\min_{f}\mathbb{E}_{\mathfrak{D}_{t}}[\ell(f(x),y)]\}\) be the set of optimal predictors associated to task \(t\). The pairwise distance between these sets across different tasks is upper-bounded by a constant \(\delta>0\), i.e.,_ \[d(\mathcal{F}_{i}^{\star},\mathcal{F}_{j}^{\star})\leq\delta,\quad\forall i,j \in\{1,\cdots,T\}.\] Several task similarity assumptions have been proposed in the literature, most of which can be formulated as Assumption 2.1 with an appropriate choice of \(d(\cdot,\cdot)\) and \(\delta\). In this work, we use the standard (Haussdorf) distance between non-empty sets: \(d(X,Y)=\max\left\{\sup_{x\in X}d(x,Y),\sup_{y\in Y}d(X,y)\right\}\). In over-parameterized settings, deep neural networks attain near-interpolation regimes and this assumption is not strict (Liu et al., 2021). **Assumption 2.2**: _(Constraint Qualification): The loss \(\ell\) and functional space \(\mathcal{F}\) are convex, and there exists a strictly feasible solution (i.e., \(\exists\ f\in\mathcal{F}\) such that \(\mathbb{E}_{\mathbb{Q}_{k}}[\ell(f(x),y)]<\epsilon_{k},\ \forall k\))._ Note that convexity is assumed with respect to function \(f\), not model parameters, and is satisfied by typical losses, such as mean-squared error and cross-entropy loss. We will consider that the functional space \(\mathcal{F}\) is endowed with the \(L_{2}\) norm, and we also consider a parameterization (e.g., a neural network) \(\mathbf{\Theta}\), such that \(\mathcal{F}_{\mathbf{\Theta}}=\{f_{\mathbf{\theta}}\ :\ \mathbf{\theta}\in\mathbf{\Theta}\} \subseteq\mathcal{F}\). **Assumption 2.3**: _(Near-Universality of the Parameterization): For all \(f\in\mathcal{F}\), there exists \(\mathbf{\theta}\in\mathbf{\Theta}\) such that for a constant \(\nu>0\), we have \(\|f-f_{\mathbf{\theta}}\|_{L_{2}}\leq\nu\)._ The near-universality assumption is directly related to the richness of the parameterization (or model capacity). In over-parameterized models, such as deep neural networks, this assumption is expected to hold with a small \(\nu\). **Assumption 2.4**: _(Uniform Convergence): There exists \(R>0\) such that \(\|f\|_{L_{2}}\leq R\) for every \(f\in\mathcal{F}\) and the loss \(\ell(f(x),y)\) is \(M\)-Lipschitz in \(f\)._ This assumption is standard in statistical learning (Shalev-Shwartz et al., 2009) and guarantees uniform convergence. ## 3 Continual Learning in the Dual Domain The following proposition sheds light on the dependence between the forgetting tolerance \(\epsilon_{k}\) and the task similarity magnitude \(\delta\). **Proposition 3.1**: _Let \(m_{k}\) be the unconstrained minimum associated to task \(k\). 
Under Assumptions 2.1 and 2.4, \(\exists\ f\in\mathcal{F}\) such that,_ \[\mathbb{E}_{\mathcal{D}_{k}}[\ell(f(x),y)]\leq m_{k}+\frac{T-1}{T}M\delta,\quad \forall k\in\{1,\cdots,T\}. \tag{1}\] Proposition 3.1 suggests that for Problem (\(P_{t}\)) to be solvable, the forgetting tolerances \(\{\epsilon_{k}\}\) need to match the task similarity \(\delta\). For instance, if \(\epsilon_{k}=m_{k}+M\delta\) for all \(k\), then problem (\(P_{t}\)) is feasible at all iterations, and its solution is \(M\delta\)-close to the optimum in terms of the expected loss on the current task. In what follows, we explain how to undertake this constrained learning problem once the forgetting tolerances are set. As done in standard supervised learning, to solve problem (\(P_{t}\)), the function class \(\mathcal{F}\) is parameterized (e.g., by a neural network), and expectations are approximated by sample means. (\(P_{t}\)) is a statistical constrained optimization problem, whose Lagrangian empirical dual can be written as \[\hat{D}_{t}^{\star}=\max_{\mathbf{\lambda}\in\mathbb{R}_{+}^{t}}\min_{\mathbf{ \theta}\in\mathbf{\Theta}}\hat{\mathcal{L}}(\mathbf{\theta},\mathbf{\lambda}):=\frac{1}{ n_{t}}\sum_{i=1}^{n_{t}}[\ell(f_{\mathbf{\theta}}(x_{i}),y_{i})]+\sum_{k=1}^{t} \lambda_{k}\left(\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}[\ell(f_{\mathbf{\theta}}(x_{i} ),y_{i})]-\epsilon_{k}\right),\] ( \[\hat{D}_{t}\] ) where \(\hat{\mathcal{L}}(\mathbf{\theta},\mathbf{\lambda})\) denotes the empirical Lagrangian, \(n_{k}\) denotes the number of samples from task \(k\) available at iteration \(t\), and \(\mathbf{\lambda}=[\lambda_{1}\ \dots\ \lambda_{t}]^{T}\) denotes the set of dual variables corresponding to the task-level constraints. For a fixed \(\mathbf{\lambda}\), the Lagrangian \(\hat{\mathcal{L}}(\mathbf{\theta},\mathbf{\lambda})\) is a regularized objective, where the losses on previous tasks act as regularizing functionals. Thus, the saddle point problem in (\(\hat{D}_{t}\)) can be viewed as a two-player game, or as a regularized minimization, where the regularization weight \(\mathbf{\lambda}\) is updated during the training procedure according to the degree of constraint satisfaction or violation. This contrasts with several replay and knowledge distillation approaches that augment the loss using manually-tuned hyperparameters (Buzzega et al., 2020; Michieli and Zanuttigh, 2021). In general, the dual problem yields a lower bound on \(P_{t}^{\star}\), which is known as _weak duality_. However, under certain conditions, \(D_{t}^{\star}\) attains \(P_{t}^{\star}\) and the optimal dual variable \(\mathbf{\lambda}^{\star}\) indicates the sensitivity of \(P_{t}^{\star}\) with respect to constraint perturbations (Rockafellar, 1997). More precisely, we state the following theorem, whose proof can be found in Appendix A.2, which characterizes the variations of \(P_{t}^{\star}\) as a function of the constraint levels \(\{\epsilon_{k}\}_{k=1}^{t}\), and serves as a motivation for the proposed memory partition and sample selection methods.
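As a concrete illustration of the alternating scheme just described, the following is a minimal sketch of one primal-dual iteration for (\(\hat{D}_{t}\)): a gradient step on \(\mathbf{\theta}\) for the empirical Lagrangian, followed by projected dual ascent on \(\mathbf{\lambda}\). Variable names, the replay-buffer layout, and the use of PyTorch are our assumptions, not the authors' reference implementation.

```python
import torch

def primal_dual_step(model, loss_fn, current_batch, buffers, eps, lam, opt,
                     eta_dual=0.05):
    """One Arrow-Uzawa style update for the empirical Lagrangian of (D_t).

    buffers: list of (x_k, y_k) tensors, one entry per past task
    eps:     forgetting tolerances, one per past task
    lam:     1-D tensor of nonnegative dual variables (no gradient needed)"""
    x, y = current_batch
    lagrangian = loss_fn(model(x), y)              # objective on the current task
    slacks = []
    for k, (x_k, y_k) in enumerate(buffers):       # one constraint per past task
        slack = loss_fn(model(x_k), y_k) - eps[k]
        lagrangian = lagrangian + lam[k] * slack
        slacks.append(slack.detach())
    opt.zero_grad()
    lagrangian.backward()                          # primal (minimization) step on theta
    opt.step()
    with torch.no_grad():                          # dual ascent, projected onto lambda >= 0
        for k, slack in enumerate(slacks):
            lam[k] = torch.clamp(lam[k] + eta_dual * slack, min=0.0)
    return lam
```

In this sketch the replay buffer enters only through the per-task constraint losses, which is precisely how the dual variables end up encoding how hard each past task is to retain.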
**Theorem 3.2**: _Under Assumption 2.2, we have_ \[-\lambda_{k}^{\star}\;\in\;\partial P_{t}^{\star}(\epsilon_{k}),\;\forall k\in \{1,\ldots,t\}, \tag{2}\] _where \(\partial P_{t}^{\star}(\epsilon_{k})\) denotes the sub-differential of \(P_{t}^{\star}\) with respect to \(\epsilon_{k}\), and \(\lambda_{k}^{\star}\) is the optimal dual variable associated to the constraint on task \(k\)._ Provided we have enough samples per task and the parameterization is rich enough, (\(\hat{D}_{t}\)) can approximate the constrained statistical problem (\(P_{t}\)). More precisely, the empirical duality gap, defined as the difference between the optimal value of the empirical dual and the statistical primal, is bounded (Chamon & Ribeiro, 2020). Furthermore, the dual function \(g_{p}(\lambda)=\min_{\mathbf{\theta}\in\mathbf{\Theta}}\hat{\mathcal{L}}(\mathbf{\theta}, \mathbf{\lambda})\) is the minimum of a family of affine functions on \(\mathbf{\lambda}\), and thus is concave. Consequently, the outer problem corresponds to the maximization of a concave function and can be solved via sub-gradient ascent (Nedic & Ozdaglar, 2009). The inner minimization, however, is generally non-convex, but there is ample empirical evidence that deep neural networks can attain _good_ local minima when trained with stochastic gradient descent (Zhang et al., 2016). The max-min problem (\(\hat{D}_{t}\)) can be undertaken by alternating the minimization with respect to \(\mathbf{\theta}\) and the maximization with respect to \(\mathbf{\lambda}\)(K. J. Arrow & Uzawa, 1960). We elaborate on this when we present the algorithm in Section 4. ## 4 Optimal Buffer Partition ### Dual Variables Capture the Stability-Plasticity Trade-off In Section 3, we argued that continual learning can be tackled in the dual domain, resulting in primal and dual variables \(f^{\star}\) and \(\mathbf{\lambda}^{\star}\). Throughout the analysis, we treated the number of samples per task in the buffer \(\{n_{k}\}_{k=1}^{t}\) as fixed parameters. However, we can also treat them as optimization variables, leading to a memory partition strategy. Different tasks have different intrinsic difficulties and sample complexities. Thus, random or uniform partitions are typically sub-optimal. Theorem 3.2 implies that for any task \(k\), \(-\lambda_{k}^{\star}\) yields a global linear under-estimator of \(P_{t}^{\star}\) at \(\epsilon_{k}\), i.e., for any \(\gamma\in\mathbb{R}\), \[P_{t}^{\star}(\epsilon_{k}+\gamma)-P_{t}^{\star}(\epsilon_{k})\geq\langle- \lambda_{k}^{\star},\;\gamma\,\rangle. \tag{3}\] This means that the optimal dual variable \(\lambda_{k}^{\star}\) carries information about the difficulty of task \(k\). Specifically, tightening the constraint associated to task \(k\) (\(\gamma<0\)) would restrict the feasible set, causing a degradation of the optimal value of (\(P_{t}\)) at a rate larger than \(\lambda_{k}^{\star}\). That is, optimal dual variables reflect how hard it is to achieve good performance in the current task (_plasticity_), while maintaining the performance on a previous task (_stability_). Therefore, \(\lambda_{k}^{\star}\) captures the stability-plasticity trade-off associated to task \(k\). In light of this result, it is sensible to partition the buffer across different tasks as an increasing function of \(\mathbf{\lambda}^{\star}\), allocating more resources to tasks with higher associated dual variables. 
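One simple way to instantiate this idea is to hand out memory slots roughly in proportion to the dual variables, with a small per-task floor. The sketch below implements only this naive heuristic (names are ours); the criterion actually used in the paper additionally weighs per-task sample complexities through the generalization bound developed next.

```python
import numpy as np

def partition_buffer(lam, buffer_size, n_min=5):
    """Allocate buffer slots as an increasing function of the dual variables."""
    lam = np.asarray(lam, dtype=float)
    t = len(lam)
    free = buffer_size - n_min * t                     # slots left after the per-task floor
    shares = lam / lam.sum() if lam.sum() > 0 else np.full(t, 1.0 / t)
    n = n_min + np.floor(free * shares).astype(int)
    for k in np.argsort(-lam)[: buffer_size - n.sum()]:
        n[k] += 1                                      # give rounding leftovers to hard tasks
    return n

# e.g. partition_buffer([0.9, 0.1, 0.4], buffer_size=200) -> most slots go to the first task
```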
In what follows, we propose an approach that leverages the information provided by \(\mathbf{\lambda}^{\star}\) and also contemplates the Lagrangian generalization gap. ### Memory Partition and Generalization Assumption 2.4 implies that for any \(\delta\in(0,1)\) and \(f\in\mathcal{F}\), with probability at least \(1-\delta\), we have \[\left|\mathbb{E}_{\mathcal{D}_{k}}[\ell(f(x),y)]-\frac{1}{n_{k}}\sum_{i=1}^{n_ {k}}\ell(f(x_{i}),y_{i})\right|\leq\zeta\left(n_{k}\right),\;\forall k\in\{1, \ldots,t\}, \tag{4}\] where \(\zeta\left(n_{k},\delta\right)=O\left(\frac{RM\sqrt{d\log(n_{k})\log(d/\delta )}}{\sqrt{n_{k}}}\right)\) approaches zero as the sample size \(n_{k}\) goes to infinity (Shalev-Shwartz et al., 2009, Theorem 5). Applying this bound, the generalization gap associated with the Lagrangian can be written as: \[\left|\sum_{k=1}^{t}\lambda_{k}\mathbb{E}_{\mathfrak{D}_{k}}[\ell(f(x),y)]-\sum_{ k=1}^{t}\frac{\lambda_{k}}{n_{k}}\sum_{i=1}^{n_{k}}\ell(f(x_{i}),y_{i})\right|\leq \sum_{k=1}^{t}\lambda_{k}\zeta(n_{k}). \tag{5}\] where for task \(t\), we replace \(\lambda_{t}\leftarrow\lambda_{t}+1\), as its loss appears in both the objective and the constraints. Therefore, we propose to find the buffer partition that minimizes this generalization gap by solving the following non-linear constrained optimization problem, \[n_{1}^{\star},\ldots,n_{t}^{\star}=\operatorname*{arg\,min}_{n_{1},\ldots,n_{ t}\geq n_{\text{min}}} \sum_{k=1}^{t}\lambda_{k}\;\zeta(n_{k}),\] (BP) s.t. \[\sum_{k=1}^{t}n_{k}=|\mathcal{B}|.\] As explained in Section 4.1, the difficulty of a task can be assessed through its corresponding dual variable, since it captures the stability-plasticity tradeoff. In (BP), we minimize the sum of task sample complexities, weighting each one by its corresponding dual variable and restricting the total number of samples to the memory budget. We elaborate on how to solve this optimization problem and the role of \(n_{\text{min}}\) in Appendix A.8. An overview of the proposed primal-dual continual learning method (PDCL) is provided in Algorithm 1, where \(FB\) represents a generic mechanism for populating the buffer with samples from the previously-observed tasks given a specific memory partition \(\{n_{1},\cdots,n_{t}\}\). As shown in Figure 1, when isolating the effect of the buffer partition, allocating more resources to tasks with higher dual variables is beneficial in terms of final mean error. In this experiment, we isolate the effect of buffer partition by comparing to Experience Replay (Rolnick et al., 2018) with Ring and Reservoir sampling (see Appendix A.1). Figure 1: TIL performance of PDCL vs. two baseline memory partition methods on two image and audio datasets. Ring leads to a uniform partition and Reservoir approximates \(\mathfrak{B}(x,y)\) to \(\mathfrak{D}(x,y)\). ### Empirical Optimal Dual Variables The sensitivity result in Theorem 3.2 holds for the optimal _statistical_ dual variables \(\lambda_{u}^{\star}\) of problem (\(P_{l}\)). However, in practice, we have access to the _empirical_ parameterized dual variables \(\hat{\lambda}_{p}^{\star}\) of problem (\(\hat{D}_{t}\)). In this section, we characterize the distance between these two quantities, showing that, under mild assumptions, \(\hat{\lambda}_{p}^{\star}\) is not far from \(\lambda_{u}^{\star}\). Let \(g_{p}(\lambda):=\min_{\theta\in\Theta}\mathcal{L}(\theta,\lambda)\) and \(g_{u}(\lambda):=\min_{f\in\mathcal{F}}\mathcal{L}(f,\lambda)\) denote the parameterized and unparameterized dual functions. 
**Proposition 4.1**: _Under Assumptions 2.3 and 2.2, the pointwise distance between the parameterized and unparameterized dual functions is bounded by an affine function on \(\|\lambda\|_{1}\),_ \[g_{p}(\lambda)-g_{u}(\lambda)\leq M\nu(1+\|\lambda\|_{1}),\qquad\forall\, \lambda\succeq 0.\] Optimal dual variables indicate the sensitivity of the optimal value with respect to constraint perturbations (see Section 3.2). Thus, the term \((1+\|\lambda\|_{1})\) can be seen as an indicator of the sensitivity of the optimization problem. Let \(\mathcal{B}_{\lambda}\) denote the segment connecting \(\lambda_{u}^{\star}\) and \(\hat{\lambda}_{p}^{\star}\). The following theorem, whose proof can be found in Appendix A.3, captures the impact on optimal dual variables of approximating the expected values over \(\mathfrak{D}_{k}\) by sample means. **Theorem 4.2**: _Let \(c\) denote the strong concavity constant of \(g_{u}(\lambda)\) in \(\mathcal{B}_{\lambda}\). Under Assumptions 2.2, 2.3, and 2.4, with probability at least \(1-t\delta\), we have:_ \[\|\hat{\lambda}_{p}^{\star}-\lambda_{u}^{\star}\|_{2}^{2}\leq\frac{2}{c}\left[ M\nu(1+\|\hat{\lambda}_{p}^{\star}\|_{1})+6\zeta(\tilde{n},\delta)(1+\|\lambda^{ \prime}\|_{1})\right],\] _where \(\|\lambda^{\prime}\|_{1}=\max\{\|\lambda_{p}^{\star}\|,\|\hat{\lambda}_{p}^{ \star}\|\}\) and \(\tilde{n}=\min_{i=1,\ldots,t}n_{i}\)_ The first term in this bound reflects the sub-optimality of \(\hat{\lambda}_{p}^{\star}\) with respect to \(\lambda_{u}^{\star}\), while the second term captures the effect of estimating expectations with sample means. We analyze the concavity constant \(c\) in detail in Appendix A.5. Theorem 4.2 implies that as the number of samples grows, and the capacity of the model increases (i.e., \(\nu\) decreases), \(\hat{\lambda}_{p}^{\star}\) approaches \(\lambda_{u}^{\star}\). Thus, provided our model has enough capacity and the number of samples per task is large enough, \(\hat{\lambda}_{p}^{\star}\) can be used as a sensitivity indicator of \(P_{t}^{\star}\). A weak aspect of the bound in Theorem 4.2 is that the sample complexity that dominates it is the one associated with the task with the least number of samples. This can be fixed by replacing the minimum with the average sample complexity, but we pay the price of having the bound grow linearly with the number of tasks. ## 5 Impactful Sample Selection When filling the buffer with random samples from each distribution, there is no sampling bias (i.e., \(\mathfrak{B}_{k}(x,y)=\mathfrak{D}_{k}(x,y)\)), and the solution of (\(P_{t}\)) has the no-forgetting guarantees from statistical constrained learning (Peng et al., 2023). However, performing sample selection can be beneficial due to the following reasons: * The _i.i.d._ assumption may not hold, in which case sample selection has theoretical and empirical benefits, particularly as an outlier detection mechanism (Sun et al., 2022; Peng et al., 2023; Borsoso et al., 2020). * Random sampling is not optimal in terms of expected risk decrease rate, which is the main property exploited in active and curriculum learning (Bengio et al., 2009; Gentile et al., 2022; Elenter et al., 2022). ### Identifying Impactful Samples Instead of task-level constraints, one could enforce a no-forgetting requirement at the sample level. For a fixed tightness \(\epsilon\), this constraint is stricter than the task-level constraint and will enable sample selection. Figure 2: Evolution of buffer partition in SpeechCommands with \(|\mathcal{B}|=2000\). 
The no-forgetting requirement can be written as: \[\ell(f(x),y)\leq\epsilon(x,y),\quad\mathfrak{B}_{t}^{k}\text{-a.e.}\quad\forall \;k=1,\cdots,t, \tag{6}\] where \(\mathfrak{B}_{t}^{k}\) is the distribution induced by sampling \(\mathfrak{D}_{k}\) to fill the memory buffer at iteration \(t\). As explained in the beginning of Section 5, sampling non-randomly induces a bias in the buffer distribution: \(\mathfrak{B}_{t}(x,y)\neq D_{t}(x,y)\). In what follows, we explore a dual variable-based sampling strategy that leverages the sensitivity of the optimization problem. In this case, the update rule for the dual variables is given by \[\lambda^{i+1}(x,y)=\left[\lambda^{i}(x,y)+\eta_{d}(\ell(f_{\lambda^{i}}(x),y)- \epsilon(x,y))\right]_{+},\] where \(f_{\lambda}\) is the Lagrangian minimizer associated to \(\lambda\), i.e: \(f_{\lambda}=\operatorname*{arg\,min}_{f\in\mathcal{F}}\mathcal{L}(f,\lambda)\). Thus, in this formulation, dual variables accumulate the individual slacks over the entire learning procedure. This allows dual variables to be used as a measure of informativeness, while at the same time affecting the local optimum to which the algorithm converges. Similar ideas on monitoring the evolution of the loss--or training dynamics--for specific training samples in order to recognize impactful instances have been used in generalization analyses (Toneva et al., 2019; Katharopoulos and Fleuret, 2018) and active learning methods (Wang et al., 2021; Elenter et al., 2022). In this case, a similar sensitivity analysis as in Section 3 holds at the sample level: **Proposition 5.1**: _Under Assumption 2.2, for all \((x,y)\):_ \[-\lambda_{t}^{\star}(x,y)\in\partial P_{t}^{\star}(\epsilon(x,y)), \tag{7}\] _where \(\partial P_{t}^{\star}(\epsilon(x,y))\) denotes the Frechet subdifferential of \(P_{t}^{\star}\) at \(\epsilon(x,y)\)._ Proposition 5.1 implies that the constraint whose perturbation has the most potential impact on \(P_{t}^{\star}\) is the constraint with the highest associated optimal dual variable. As in the task-level constraints, infinitesimally tightening the constraint in a neighborhood \((x,y)\) would restrict the feasible set, causing an increase of the optimal value of \(P_{t}^{\star}\) at a rate larger than \(\lambda(x,y)\). In that sense, the magnitude of the dual variables can be used as a measure of informativeness of a training sample. Similarly to non-support vectors in SVMs, samples associated to inactive constraints (i.e., \(\{(x,y):\lambda_{t}^{\star}(x,y)=0\}\)), are considered uninformative. This notion of informativeness is illustrated in Figure 3. As shown in the figure, large dual variables correspond to both outliers and inliers. Indeed, informative samples and outliers (such as mislabeled samples) may be hard to distinguish. Recent empirical findings indicate that many active and continual learning algorithms consistently prefer to acquire samples that traditional models fail to learn (Karamcheti et al., 2021). Figure 3: Informativeness of dual variables for sample selection. In (b), samples with large associated dual variables tend to accumulate in the task decision boundary and edges of the class cluster, while (a) shows that storing these samples, as opposed to storing those in the center of the cluster, is beneficial in terms of forgetting. In Primal-Dual Continual Learning with Sample selection (PDCL-S), the buffer is filled by leveraging the per-sample dual variables \(\lambda(x,y)\). 
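A compact sketch of the two mechanisms above, with array layouts and thresholds that are our own choices rather than the paper's: per-sample dual variables are accumulated with the projected update, and the buffer is then filled per class with the largest dual variables after trimming the extreme upper tail as likely outliers.

```python
import numpy as np

def update_sample_duals(lam, per_sample_loss, eps, eta_d=0.5):
    """lam_i <- [ lam_i + eta_d * (loss_i - eps_i) ]_+  (accumulated over training)."""
    return np.maximum(lam + eta_d * (per_sample_loss - eps), 0.0)

def select_for_buffer(lam, labels, n_per_class, outlier_quantile=0.99):
    """Keep, per class, the samples with the highest dual variables, after
    discarding the extreme upper tail as a crude outlier guard."""
    cutoff = np.quantile(lam, outlier_quantile)
    keep = []
    for c in np.unique(labels):
        idx = np.where((labels == c) & (lam <= cutoff))[0]
        keep.extend(idx[np.argsort(-lam[idx])][:n_per_class].tolist())
    return np.asarray(keep)
```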
Specifically, given a buffer partition \(n_{1},\cdots,n_{t}\), the generic mechanism \(FB\) for populating the buffer in Algorithm 1 is particularized to filling the buffer with the samples with the highest associated dual variable from each class. Thus, this method can be interpreted as a near-matching between the buffer-induced distribution \(\mathcal{B}_{t}(x,y)\) and the optimal dual variable function \(\lambda_{t}^{*}(x,y)\). In order to avoid sampling outliers, we discard samples with extremely high dual variables before sampling. ## 6 Experimental validation To highlight the versatility of the proposed approach, we evaluate it in four continual learning benchmarks, two image classification tasks (MNIST LeCun and Cortes (2010) and Tiny-ImageNet Le and Yang (2015)), one speech classification task (SpeechCommands Warden (2018)) and one medical (Abdominal CT Scan) dataset (OrganA Yang et al. (2021)). Each dataset is split into disjoint sets, each containing a subset of the classes. MNIST, SpeechCommands, and OrganA are split into 5 tasks with 2 classes each. The more challenging dataset, Tiny-ImageNet, is split into 10 tasks, each with 20 classes. We adopt standard neural network architectures and match the model complexity to the difficulty of the problem at hand. In MNIST, we use a three-layer MLP with ReLU activations. In Seq. SpeechCommands and OrganA, we use four- and five-layer CNNs, respectively, with ReLU activations, Batch Normalization and MaxPooling. In Tiny-ImageNet, we use a ResNet-18 architecture (He et al., 2016). At each iteration \(t\), models are trained using \(f_{t-1}\) as initialization with a primal learning rate of \(\eta_{p}=0.001\) in all datasets except Tiny-ImageNet, where \(\eta_{p}=0.01\) is used. The dual learning rate is set to \(\eta_{d}=0.05\) or \(\eta_{d}=0.5\), respectively. We adopt the baseline implementations of Mammoth1, and use their reported hyperparameters for the baselines. We measure final average accuracy both in the Class Incremental Learning (CIL) and Task Incremental Learning (TIL) settings across 5 random seeds. More details about the forgetting tolerance parameter \(\epsilon_{k}\) are presented in Section 6.1. Footnote 1: [https://github.com/aimagelab/mammoth](https://github.com/aimagelab/mammoth) We evaluate both the memory partition (PDCL) and memory partition with sample selection (PDCL-S) methods, and compare their performance with the baseline approaches presented in Appendix A.1 that are most related to our work, namely Experience Replay (Rolnick et al., 2018) with Reservoir sampling, X-DER (Boschini et al., 2022), GSS (Aljundi et al., 2019) and iCaRL (Rebuffi et al., 2016). Additional experimental details and results can be found in Appendix A.7. Figure 4: Error in the Class Incremental Learning (**top row**) and Task Incremental Learning (**bottom row**) settings for two different buffer sizes across four benchmarks (lower is better). Results for additional buffer sizes are presented in Appendix A.7. Figure 4 compares the performance of these continual learning methods in the CIL and TIL settings. We can observe that undertaking the continual learning problem with a primal-dual algorithm, and leveraging the information provided by dual variables, leads to comparatively low forgetting in almost all buffer sizes and benchmarks. It is important to note that sample selection does not always improve the performance of the method.
This is consistent with previous works on the effectiveness of sample selection (Araujo et al., 2022), and with the fact that the datasets used do not have many outliers. Moreover, in settings such as CIL Tiny Imagenet, no method outperforms Reservoir by a significant margin, which is consistent with recent surveys (Zhou et al., 2023). ### Ablation on the forgetting tolerance As explained in Section 2, the forgetting tolerances \(\{\epsilon_{k}\}_{k=1}^{T}\) correspond to the worst average loss that one requires of a past task. In many cases, this is a _design_ requirement, and not a tunable parameter. For extremely large values of epsilon, the constraint slacks are always negative and dual variables quickly go to zero (analogous to an unconstrained problem), which makes them uniformative. On the other hand, extremely low values of \(\epsilon_{k}\) might also be inadequate, since the tightness of these constraints can make the problem infeasible and make dual variables diverge. As shown in Figure 5, the method is not extremely sensitive to \(\epsilon_{k}\) in the range \([0.15,0.45]\). Our ablations suggest that values in the range \([1.05m_{k},1.25m_{k}]\), where \(m_{k}\) is the average loss observed when training the model without constraints, work well in practice. ## 7 Discussion In this work we presented a principled primal dual approach to continual learning that explicitly tackles learning under a no-forgetting requirement. We showed that dual variables play a key role in this framework, since they give us a handle on the stability-plasticity tradeoff by assessing the relative difficulty of a task and the impactfulness of a given sample. One of the drawbacks exhibited by the proposed method is dual variable underestimation. It is possible that the difficulty of a task \(k\) at a certain iteration \(t_{0}\) is underestimated, and that the corresponding dual variable \(\lambda_{k}\) re-grows at a future iteration \(t_{1}\). This is an issue since we have already discarded the non-selected samples from task \(k\), meaning that a portion of the buffer--characterized by \(\lambda_{k}(t_{1})-\lambda_{k}(t_{0})\)--would remain empty. To deal with this issue, one can fill the empty portion of the buffer with either: augmented samples from the previously selected ones or samples from the current task, whose dataset is entirely available. Another downside of our approach is that the number of constraints involved in the optimization problem can be very large, particularly when doing sample selection. This can increase the sensitivity of the optimization process to the learning rates and forgetting tolerances. In this work, we have uniformly set the forgetting tolerances for all tasks or samples. However, a pretraining method that yields non-uniform, feasible, and informative constraint upper bounds could improve the performance of the proposed approach. Moreover, understanding the conditions under which sample selection is provably beneficial is also a promising direction for future work. Figure 5: Ablation on the forgetting tolerance \(\epsilon_{k}\) in Seq-MNIST.
2309.13739
Vacuum polarization correction to atomic energy levels in the path integral formalism
Vacuum polarization corrections to the energy levels of bound electrons are calculated using a perturbative path integral formalism. We apply quantum electrodynamics in a framework which treats the strong binding nuclear field to all orders. The effective potential, derived from the Dyson-Schwinger equation for the photon propagator, is then considered perturbatively. Expressions for the vacuum polarization shift of binding energies are obtained from the poles of the spectral function up to second order. Numerical results are provided to select candidates for novel tests of strong-field quantum electrodynamics by means of precision mass spectrometry.
Sreya Banerjee, Zoltán Harman
2023-09-24T19:56:45Z
http://arxiv.org/abs/2309.13739v1
# Vacuum polarization correction to atomic energy levels in the path integral formalism ###### Abstract Vacuum polarization corrections to the energy levels of bound electrons are calculated using a perturbative path integral formalism. We apply quantum electrodynamics in a framework which treats the strong binding nuclear field to all orders. The effective potential, derived from the Dyson-Schwinger equation for the photon propagator, is then considered perturbatively. Expressions for the vacuum polarization shift of binding energies are obtained from the poles of the spectral function up to second order. Numerical results are provided to select candidates for novel tests of strong-field quantum electrodynamics by means of precision mass spectrometry. ## I Introduction Radiative corrections to energy levels of atoms, and, more recently, highly charged ions (HCI), have been at the focal point of theoretical and experimental studies over the past decades. At the one-loop level, there are two types of contributions, the vacuum polarization (VP) and the self-energy effect, which both scale, to leading order, with \(Z^{4}\), i.e. with the fourth power of the atomic number \(Z\), rendering heavy HCI an ideal test bench for the study of radiative effects. Experimental advances in the production, trapping and storing of HCI and the investigation of their properties with unprecedented accuracy [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13] call for versatile theoretical frameworks for the description of such systems. Very recently, precision mass spectrometry with Penning-trap setups has reached the 1-eV level of accuracy [14; 15; 16; 17], allowing the study of QED effects in heavy HCI by means of electron binding energy determinations through mass measurements and the mass-energy equivalence relation. In this article, we develop a functional integral formalism for the calculation of VP corrections for bound atomic states in the Furry picture [18]. The usual perturbative framework of quantum field theory overlooks the intricacies of the non-perturbative effects that show up in the study of bound systems. We overcome this existing disagreement between the intrinsically nonperturbative bound states and the perturbative nature of quantum electrodynamics (QED). We use Feynman's path integral formalism [19] in the relativistic regime [20; 21; 22; 23; 24; 25; 26; 27], wherein the time-sliced formulation of path integrals ensures Lorentz invariance. We treat the path integrals perturbatively and we proceed by summing the perturbative expansion to all orders [28; 29]. The introduction of functional integral methods to atomic systems is also motivated by the prospects of incorporating non-electromagnetic interactions into the precision theory of atoms and ions. As an example, hadronic vacuum polarization effects have also been computed by means of an ab initio quantum chromodynamics (QCD) Schwinger-Dyson approach [30; 31]. The continuing increase in experimental accuracy may in the future necessitate the inclusion of such QCD corrections in atomic spectra [32; 33; 34; 35]. Furthermore, prospects of new physics searches with low-energy atomic precision experiments (see e.g. [36; 37; 38; 39]) also suggest employing a versatile field theoretical formalism enabling the inclusion of various types of gauge-boson propagators. This article is organized as follows.
In Section II, we derive the effective potential describing the VP correction to the nuclear potential in the framework of path integrals using the Schwinger-Dyson equations. In Section III, we discuss the general perturbative formulation of path integrals. In Section IV, we describe our computations of the VP contributions to the Lamb shift of energy levels using the formalism introduced in Section III. In Section V, we tabulate and discuss our numerical results of the Uehling contribution to the energy shift and provide concluding remarks. Through the article, we use natural units, unless stated otherwise. ## II Schwinger-Dyson equations for the photon self-energy We begin by summarizing the derivation of the complete expression for the VP correction to the photon propagator, also known as the photon self-energy. This is performed by defining the Schwinger-Dyson equation for the photon propagator using path integrals in analogy to [40]. The gauge-fixed Lagrangian of quantum electrodynamics (QED) is given as \[\mathcal{L}_{\text{QED}}(x)= A_{\mu}(x)\left[g^{\mu\nu}\partial^{2}+\left(\frac{1}{\xi}-1 \right)\partial^{\mu}\partial^{\nu}\right]A_{\nu}(x) \tag{1}\] \[+\bar{\psi}(x)(i\not{D}-m)\psi(x)\,,\] where \(A_{\mu}\) is the gauge field operator for the photon field, \(\psi(x)\) is the Dirac field of the electron (or other bound lepton) in coordinate space, \(m\) is its bare mass, \(g^{\mu\nu}\) is the metric tensor with the \(\mu,\nu\in\{0,1,2,3\}\) being Lorentz indices, \(\not{D}=\partial_{\mu}-ieA_{\mu}\), and \(\xi\) the gauge fixing parameter. The generating functional \(Z\) is constructed using this Lagrangian, and the external source \(J_{\mu}\) of the photon field, and the sources \(\eta\) and \(\bar{\eta}\) of the fermion fields \(\bar{\psi}\) and \(\psi\), respectively, as the functional integral \[Z[\eta,\bar{\eta},J_{\mu}] =\int{\cal D}\bar{\psi}{\cal D}\psi{\cal D}A\exp\biggl{\{}i\int d^{4 }x[{\cal L}_{\rm QED}\] \[+J_{\mu}(x)A^{\mu}(x)+\bar{\psi}(x)\eta(x)+\bar{\eta}(x)\psi(x)] \biggr{\}}\,.\] The fermion fields and their sources are anti-commuting Grassmann variables. To arrive at the Schwinger-Dyson equation, we consider that the functional integral of a total derivative is zero, \[\int{\cal D}[\phi]\frac{\delta}{\delta\phi}=0\,, \tag{3}\] where \(\phi\) is an arbitrary field variable. For the photon propagator, the derivative is taken with respect to the gauge field \(A_{\mu}(x)\): \[\int{\cal D}[\bar{\psi}\psi A]\frac{\delta}{\delta A_{\mu}(x)} \exp\{i[S(\bar{\psi},\psi,A)\] \[+\int d^{4}xJ_{\mu}(x)A^{\mu}(x)+\bar{\psi}(x)\eta(x)+\bar{\eta}(x) \psi(x)]\}=0\,.\] Eq. (II) can be written as a differential equation for the generating functional \(Z\): \[\Biggl{[}\frac{\delta S}{\delta A_{\mu}(x)} \left(-i\frac{\delta}{\delta J_{\mu}},i\frac{\delta}{\delta\eta},-i \frac{\delta}{\delta\bar{\eta}}\right)\] \[+J^{\mu}(x)\Biggr{]}Z[\eta,\bar{\eta},J_{\mu}]=0\,,\] where we have established a correspondence between fields and their source terms through functional derivatives \[\psi(x)\leftrightarrow-\frac{i\delta}{\delta\bar{\eta}(x)}\,,\quad\bar{\psi}( x)\leftrightarrow\frac{i\delta}{\delta\eta(x)}\,,\quad A^{\mu}(x)\leftrightarrow- \frac{i\delta}{\delta J_{\mu}(x)}\,.\] The first term on the left-hand side of Eq. 
(II) is solved by implementing the Gateux derivative method: \[\frac{{\rm d}S(\phi+\epsilon\tau)}{{\rm d}\epsilon}|_{\epsilon=0}=\int{\rm d }^{4}x\,\frac{\partial S}{\partial\phi}\tau\,\,, \tag{6}\] yielding \[\frac{\delta S}{\delta A_{\mu}(x)}\] \[=\left\{\left[g^{\mu\nu}\partial^{2}+\left(\frac{1}{\xi}-1\right) \partial^{\mu}\partial^{\nu}\right]A_{\nu}(x)+e\psi(x)\gamma^{\mu}\bar{\psi}( x)\right\}\,.\] Thus, Eq. (II), written in terms of the source terms, becomes \[\left\{\left[g^{\mu\nu}\partial^{2}+\left(\frac{1}{\xi}-1\right) \partial^{\mu}\partial^{\nu}\right]\left(\frac{-i\delta}{\delta J^{\nu}(x)}\right)\right.\] \[\left.+e\frac{-i\delta}{\delta\bar{\eta}(x)}\gamma^{\mu}\frac{i \delta}{\delta\eta(x)}+J^{\mu}(x)\right\}\!Z[\eta,\bar{\eta},J_{\mu}]=0\,.\] In terms of the generating functional \(W\) for the Green's functions of the connected Feynman diagrams, \(W[\eta,\bar{\eta},J_{\mu}]=-i\ln Z[\eta,\bar{\eta},J_{\mu}]\), Eq. (II) has the form \[\left\{\left[g^{\mu\nu}\partial^{2}+\left(\frac{1}{\xi}-1\right) \partial^{\mu}\partial^{\nu}\right]\left(\frac{\delta}{\delta J^{\nu}(x)}\right)\right. \tag{9}\] \[\left.+ie\frac{-i\delta}{\delta\bar{\eta}(x)}\gamma^{\mu}\frac{i \delta}{\delta\eta(x)}\right\}\!W-ie\frac{-i\delta W}{\delta\bar{\eta}(x)} \gamma^{\mu}\,\frac{i\delta W}{\delta\eta(x)}=-J^{\mu}(x)\,.\] We now introduce the effective action \(\Gamma\), defining the one-particle-irreducible (1PI) Green's functions, in terms of the generating functional \(W\) through a Legendre transformation, following [40]: \[\Gamma[\bar{\psi},\psi,A_{\mu}]=W[\eta,\bar{\eta},J_{\mu}]-\int{\rm d}^{4}x \left(\bar{\psi}\eta+\psi\bar{\eta}+A^{\mu}J_{\mu}\right). \tag{10}\] Owing to the above transformation, we can define the field terms \((\bar{\psi},\psi,A_{\mu})\) in terms of the source terms \((\eta,\bar{\eta},J_{\mu})\): \[A_{\mu}=-i\frac{\delta W[J]}{\partial J^{\mu}}\,,\quad\psi=-i \frac{\delta W[\eta]}{\delta\bar{\eta}}\,,\quad\bar{\psi}=i\frac{\delta W[\bar{ \eta}]}{\delta\eta}\,.\] \[J_{\mu}=-\frac{\delta\Gamma[A]}{\partial A^{\mu}}\,,\quad\eta=- \frac{\delta\Gamma[\bar{\psi}]}{\delta\bar{\psi}}\,,\quad\bar{\eta}=\frac{ \delta\Gamma[\psi]}{\delta\bar{\psi}}\,.\] Setting the fermion sources to zero and using the above expressions, Eq. (II) becomes \[\frac{\delta\Gamma[A]}{\partial A_{\mu}(x)}=i\left[g^{\mu\nu} \partial^{2}+\left(\frac{1}{\xi}-1\right)\partial^{\mu}\partial^{\nu}\right]A_ {\mu}(x)\] \[+ie\frac{-i\delta}{\delta\bar{\eta}(x)}\gamma^{\mu}\frac{i\delta} {\delta\eta(x)}W\,. \tag{11}\] From the above expression, the connected two-point function - or, in this case, the complete electron propagator - in an external field \(A_{\mu}\) can be identified easily as \[S(x,y)=i\frac{\delta^{2}W[\eta,\bar{\eta},J_{\mu}]}{\delta\eta(y)\delta\bar{ \eta}(x)}|_{\psi=\bar{\psi}=0}=\left(\frac{\delta^{2}\Gamma}{\delta\psi(x) \delta\bar{\psi}(y)}\right)^{-1}\,. \tag{12}\] Using the identity as derived in Ref. [41], \[\frac{-i\delta}{\delta\bar{\eta}(x)}\gamma^{\mu}\frac{i\delta W}{ \delta\eta(y)} ={\rm Tr}\biggl{\{}\frac{-i\delta}{\delta\bar{\eta}(x)}\gamma^{ \mu}\frac{i\delta}{\delta\eta(y)}W[\eta,\bar{\eta},J^{\mu}]\biggr{\}} \tag{13}\] \[=-{\rm Tr}\{\gamma^{\mu}S(x,y)\}\,,\] (with \({\rm Tr}\{\ldots\}\) representing the trace of the matrix in the curly brackets) and Eq. (12), Eq. 
(II) reduces to \[\frac{\delta\Gamma[A]}{\partial A_{\mu}(x)}\] \[=i\left[g^{\mu\nu}\partial^{2}+\left(\frac{1}{\xi}-1\right) \partial^{\mu}\partial^{\nu}\right]A_{\mu}(x)-ie\,{\rm Tr}\{\gamma^{\mu}S(x,x )\}\,.\] Since our aim is to determine the photon propagator, we take the second derivative of the effective action \(\Gamma\) with respect to the photon field, and thus Eq. (14) changes to \[\frac{\delta^{2}\Gamma[A]}{\partial A_{\mu}(x)\delta A_{\nu}(y)}=i \left[g^{\mu\nu}\partial^{2}+\left(\frac{1}{\xi}-1\right)\partial^{\mu}\partial^ {\nu}\right]\delta(x-y)\] \[-ie\,\mathrm{Tr}\Bigg{\{}\gamma^{\mu}\frac{\delta}{\delta A_{\nu} (y)}\left(\frac{\delta^{2}\Gamma}{\delta\bar{\psi}(x)\delta\bar{\psi}(x)} \right)^{-1}\Bigg{\}}\,. \tag{15}\] Here, \(\delta(x-y)\) represents a four-dimensional Dirac delta. The derivative of an inverse matrix is given as \[\frac{\mathrm{d}B^{-1}}{\mathrm{d}t}=-B^{-1}\frac{\mathrm{d}B}{\mathrm{d}t}B^ {-1}\,,\] where \[B=\frac{\delta^{2}\Gamma}{\delta\bar{\psi}(x)\delta\bar{\psi}(x)}\,.\] Applying this to the functional derivative in Eq. (15), referring to Eq. (12), and considering that similar to the fermionic case, \[\frac{\delta^{2}\Gamma[A]}{\partial A_{\mu}(x)\delta A_{\nu}(y)}=D^{-1}_{\mu \nu}(x-y)\,,\] we obtain a modified version of Eq. (15) for the inverse of the photon propagator: \[D^{-1}_{\mu\nu}(x-y)=i\left[g_{\mu\nu}\partial^{2}+\left(\frac{ 1}{\xi}-1\right)\partial_{\mu}\partial_{\nu}\right]\delta(x-y) \tag{16}\] \[-ie\int\mathrm{d}^{4}x_{1}\,\mathrm{d}^{4}x_{2}\,\mathrm{Tr} \Bigg{\{}\gamma_{\mu}\left(\frac{\delta^{2}\Gamma}{\delta\bar{\psi}(x)\delta \psi(x_{1})}\right)^{-1}\] \[\times\left(\frac{\delta^{3}\Gamma}{\delta A_{\nu}(y)\delta\bar{ \psi}(x_{1})\delta\psi(x_{2})}\right)\left(\frac{\delta^{2}\Gamma}{\delta\bar {\psi}(x)\delta\bar{\psi}(x_{2})}\right)^{-1}\Bigg{\}}\,.\] The third derivative of \(\Gamma\) gives us the connected three-point Green's function, or the electron-photon vertex function \[\left(\frac{\delta^{3}\Gamma}{\delta A_{\nu}(y)\delta\bar{\psi}(x_{1})\delta \bar{\psi}(x_{2})}\right)=e\Gamma_{\nu}(y;x_{1},x_{2})\,, \tag{17}\] and using the expressions for the complete electron propagator from Eq. (12), we can write Eq. (17) as \[D^{-1}_{\mu\nu}(x-y)=i\left[g_{\mu\nu}\partial^{2}+\left(\frac{ 1}{\xi}-1\right)\partial_{\mu}\partial_{\nu}\right]\delta(x-y) \tag{18}\] \[-ie^{2}\int\mathrm{d}^{4}x_{1}\,\mathrm{d}^{4}x_{2}\,\mathrm{Tr} \Bigg{\{}S(x_{1},x)\gamma_{\mu}S(x,x_{2})\Gamma_{\nu}(x_{2},x_{1};x)\Bigg{\}}\,.\] The second term on the right-hand side is the VP tensor in coordinate space: \[\Pi_{\mu\nu}(x,y) \tag{19}\] \[\equiv-ie^{2}\int\mathrm{d}^{4}x_{1}\,\mathrm{d}^{4}x_{2}\,\mathrm{ Tr}\Bigg{\{}S(x_{1},x)\gamma_{\mu}S(x,x_{2})\Gamma_{\nu}(x_{2},x_{1};x)\Bigg{\}}\,.\] Fourier transforming to momentum space, we obtain [42] the polarization tensor as a function of the photon four-momentum \(k\), \[\Pi_{\mu\nu}(k)=-ie^{2}\int\frac{\mathrm{d}^{4}p}{(2\pi)^{4}}\,\mathrm{Tr} \Bigg{\{}S(p)\gamma_{\mu}S(p-k)\Gamma_{\nu}(p,k;p-k)\Bigg{\}}\,, \tag{20}\] where \(p\) is the four-momentum of one of the virtual leptons. The inverse photon propagator in momentum space thus becomes \[D^{-1}_{\mu\nu}(k)=i\left[g_{\mu\nu}k^{2}+\left(\frac{1}{\xi}-1\right)k_{\mu}k_ {\nu}\right]+\Pi_{\mu\nu}(k)\,. 
\tag{21}\] This gives us the Schwinger-Dyson equation for the unquenched (dressed) photon propagator in momentum space, where the second term on the right-hand side describes, to lowest order, the photon propagator perturbed by the creation and annihilation of a virtual lepton-antilepton pair. Considering that the photon self-energy or the VP is necessarily transverse [43; 44], we obtain for the full photon propagator, as illustrated in Fig. 1, the equation \[D_{\mu\nu}(k)=-i\left[\frac{g_{\mu\nu}}{k^{2}}-\frac{k_{\mu}k_{\nu}}{k^{4}} \right]\frac{1}{1+\Pi(k^{2})}-i\xi\frac{k_{\mu}k_{\nu}}{k^{4}}\,. \tag{22}\] The gauge fixing parameter is set to \(\xi=0,1\) in the Landau and Feynman gauges, respectively. \(\Pi(k^{2})\) is the polarization function, defined as in e.g. [42] as \(\Pi_{\mu\nu}(k)=(k_{\mu}k_{\nu}-g_{\mu\nu}k^{2})\Pi(k^{2})\). The well-known free photon propagator can be reproduced ed by setting \(\Pi(k^{2})=0\). One can now calculate the four-potential [42; 43] induced by a charge density \(j^{\nu}(q)\), \[A^{{}^{\prime}}_{\mu}(x)=\int\frac{\mathrm{d}^{4}k}{(2\pi)^{4}}e^{-ik\cdot x}D_{ \mu\nu}(k)j^{\nu}(k)\,, \tag{23}\] with \(D_{\mu\nu}(k)\) being the complete, dressed photon propagator. In our special case of a static nucleus, \(j^{\nu}(q)=-Ze\delta_{\nu 0}\). Taking the VP-perturbed propagator to _leading order_, the interaction potential in Eq. (23) is reduced following [42] to the well-known expression with an integration variable \(y\) \[A^{{}^{\prime}}_{0}(r) = -\frac{Ze}{r}\bigg{[}1+\frac{2\alpha}{3\pi}\int_{1}^{\infty}dy \left(1+\frac{1}{2y^{2}}\right)\frac{\sqrt{y^{2}-1}}{y^{2}}e^{-2myr}\bigg{]} \tag{24}\] \[\equiv -\frac{Ze}{r}+U^{\mathrm{U}\mathrm{e}h}_{\mathrm{VP}}(r)\,,\] with \(\alpha\) being the fine-structure constant. This interaction potential is a sum of two distinct terms, with the second defining the Uehling potential \(U^{\mathrm{U}\mathrm{e}h}_{\mathrm{VP}}\). Following the work of [45], the integral in Eq. (24) is reduced using the modified Bessel functions (or Bickley-Naylor functions), \(Ki\) Figure 1: Diagrammatic representation of the Schwinger-Dyson equation for the photon propagator. and defining integrals to the form \[I_{U}(a)=\int_{1}^{\infty}d\xi\left(1+\frac{1}{2\xi^{2}}\right) \frac{\sqrt{\xi^{2}-1}}{\xi^{2}}e^{-a\xi} \tag{25}\] \[=\int_{0}^{\infty}dx\,e^{-a\cosh(x)}\left(1-\frac{1}{2\cosh^{2}(x) }-\frac{1}{2\cosh^{4}(x)}\right)\,,\] (26) \[=Ki_{0}(a)-\frac{1}{2}Ki_{2}(a)-\frac{1}{2}Ki_{4}(a)\,, \tag{27}\] where \(a=2\alpha^{-1}r,\text{and}\,Ki_{n}(z)=\int_{0}^{\infty}\frac{e^{-z\,\cosh(x)}}{ \cosh^{n}x}dx\,,n\geq 1\). Using these functions, the closed-form expression of the Uehling potential is obtained as in [45] \[U_{\text{VP}}^{\text{Ueh}}(a)=-\frac{2Ze\alpha}{3\pi r}\bigg{[} \bigg{(}1+\frac{a^{2}}{12}\bigg{)}\,K_{0}(a)\\ -\frac{a}{12}Ki_{1}(a)-\left(\frac{5}{6}+\frac{a^{2}}{12}\right) Ki_{2}(a)\bigg{]}\,. \tag{28}\] We work with this closed expression for the vacuum polarization potential to obtain the corresponding energy shift using path integral formalism, as outlined in the following Section. We note that higher-order terms in the dressed photon propagator would lead to the Kallen-Sabry type corrections [46] to the potential. ## III Perturbative path integrals for bound states This Section lays the groundwork for the _perturbative approach_ to the path integral formalism, following the articles [28; 29; 47]. 
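As an aside, the Bickley-Naylor reduction in Eqs. (25)-(27) above is easy to verify numerically before it is used in what follows. The short check below (our own helper names, SciPy quadrature, no claim of production-grade accuracy) evaluates \(Ki_{n}(a)\) directly and compares Eq. (27) with the original integral in Eq. (25).

```python
import numpy as np
from scipy.integrate import quad

def Ki(n, a):
    """Bickley-Naylor function Ki_n(a): integral over x in [0, inf) of exp(-a cosh x)/cosh^n x."""
    val, _ = quad(lambda x: np.exp(-a * np.cosh(x)) / np.cosh(x) ** n, 0.0, 30.0)
    return val

def I_U_direct(a):
    """Eq. (25): integral over the spectral variable from 1 to infinity."""
    f = lambda y: (1.0 + 0.5 / y ** 2) * np.sqrt(y ** 2 - 1.0) / y ** 2 * np.exp(-a * y)
    val, _ = quad(f, 1.0, np.inf)
    return val

def I_U_bickley(a):
    """Eq. (27): Ki_0(a) - Ki_2(a)/2 - Ki_4(a)/2."""
    return Ki(0, a) - 0.5 * Ki(2, a) - 0.5 * Ki(4, a)

if __name__ == "__main__":
    for a in (0.1, 1.0, 5.0):
        print(f"a={a}:  direct={I_U_direct(a):.8f}  bickley={I_U_bickley(a):.8f}")
```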
We begin with the modification of the relativistic action integral, wherein the action \(S\) is perturbed by a potential \(\Delta V\), and the perturbation expansion is summed to all orders: \[S[\mathbf{r}(t)](\mathbf{r}_{b},\mathbf{r}_{a},t_{b},t_{a}) \tag{29}\] \[=S^{(0)}[\mathbf{r}(t)](\mathbf{r}_{b},\mathbf{r}_{a},t_{b},t_{a })-\int_{t_{a}}^{t_{b}}\,\Delta V(\mathbf{r}(t),t)\,dt\,.\] \(S[\mathbf{r}(t)]\) is the classical action functional defined for the path \(\mathbf{r}(t)\), connecting the starting point \(\mathbf{r}_{a}=\mathbf{r}(t_{a})\) to the end point \(\mathbf{r}_{b}=\mathbf{r}(t_{b})\). In order to establish gauge invariance and to ensure operator ordering, we proceed with the discretization of the time variable. The time interval \(T=t_{b}-t_{a}\) is divided into \(N\) small discrete intervals. The time lattice is divided into \(N+1\) equidistant points denoted by \(u_{j}\), as in Fig. 2, such that \[u_{j}=t_{a}+jt_{j}\,,\quad\text{where},\quad t_{j}=\frac{t_{b}- t_{a}}{N}=u_{j}-u_{j-1}\,,\] \[u=\sum_{j=1}t_{j}\,.\] With this discretization of the time interval, the lattice action reads \[S^{(N)}[\mathbf{r}_{1},\ldots,\mathbf{r}_{N-1}](\mathbf{r}_{b},\mathbf{r}_{a},t_{b},t_{a})\] \[=S^{(0;N)}[\mathbf{r}_{1},\ldots,\mathbf{r}_{N-1}](\mathbf{r}_{b},\mathbf{r}_{a},t_{b},t_{a})\] \[\qquad\qquad\qquad\qquad+\Delta S^{(N)}[\mathbf{r}_{1},\ldots, \mathbf{r}_{N-1}](\mathbf{r}_{b},\mathbf{r}_{a},t_{b},t_{a})\,,\] \[=S^{(0;N)}[\mathbf{r}_{1},\ldots,\mathbf{r}_{N-1}](\mathbf{r}_{b},\mathbf{r}_{a},t_{b},t_{a})-t_{j}\sum_{j=0}^{N-1}\,\Delta V(\mathbf{r}_{j},u _{j})\,, \tag{30}\] where \(S^{(0;N)}\) is the unperturbed action in the time-lattice definition. Using this time-lattice version of the action, if we construct the action integral for the action perturbed by an arbitrary potential, we obtain \[e^{\frac{i}{\hbar}S^{(N)}}=e^{\frac{i}{\hbar}S^{(0;N)}}+e^{\frac{i}{\hbar} \Delta S^{(N)}}\,. \tag{31}\] We expand the perturbation exponential up to all orders as \[e^{\frac{i}{\hbar}\Delta S^{(N)}}\] \[=\sum_{n=0}^{\infty}\frac{t_{j}^{n}}{n!}\left[\sum_{j=0}^{N-1} \frac{\Delta V(\mathbf{r}_{j},u_{j})}{i\hbar}\right]^{n}\] \[=\sum_{n=0}^{\infty}\frac{t_{j}^{n}}{n!}\left[\sum_{j_{n}=0}^{N-1 }\cdots\sum_{j_{1}=0}^{N-1}\frac{\Delta V(\mathbf{r}_{j_{n}},u_{j_{n}})}{i \hbar}\ldots\frac{\Delta V(\mathbf{r}_{j_{1}},u_{j_{1}})}{i\hbar}\right]\] \[=\sum_{n=0}^{\infty}t_{j}^{n}\left[\sum_{j_{n}=0}^{N-1}\cdots \sum_{j_{1}=0}^{N-1}\frac{\Delta V(\mathbf{r}_{j_{n}},u_{j_{n}})}{i\hbar} \ldots\frac{\Delta V(\mathbf{r}_{j_{1}},u_{j_{1}})}{i\hbar}\right]\] \[\qquad\times\left[1+\mathcal{O}\left(\frac{1}{N}\right)\right]\,, \tag{32}\] where the correction terms of order \(O\left(\frac{1}{N}\right)\) are to take care of the repeated indices. In the limit \(N\to\infty\), Eq. (32) becomes \[\sum_{n=0}^{\infty}t_{j}^{n}\left[\sum_{j_{n}=0}^{N-1}\cdots\sum_{ j_{1}=0}^{N-1}\frac{\Delta V(\mathbf{r}_{j_{n}},u_{j_{n}})}{i\hbar}\ldots \frac{\Delta V(\mathbf{r}_{j_{1}},u_{j_{1}})}{i\hbar}\right]\] \[\qquad\qquad\times\left[1+\mathcal{O}\left(\frac{1}{N}\right)\right]\] \[\approx_{N\to\infty}\int_{t_{a}}^{t_{b}}du_{n}\int_{t_{a}}^{u_{n}}du _{n-1}\ldots\] \[\qquad\qquad\times\int_{t_{a}}^{u_{2}}du_{1}\frac{\Delta V( \mathbf{r}_{n},u_{n})}{i\hbar}\ldots\frac{\Delta V(\mathbf{r}_{1},u_{1})}{i \hbar}\,. \tag{33}\] In order to simplify the notation in Eq. 
(III), in the context of perturbation theory, we redefine the time intervals as \[u_{(0)}=u_{0}=t_{a}\,,\qquad u_{(n+1)}=u_{N}=t_{b}\,,\] \[u_{(k)}=u_{j_{k}}\,,\quad\mathbf{r}_{(k)}=\mathbf{r}_{j_{k}}\,, \qquad k=1,\ldots,n\,.\] At each order of perturbation, the time interval \(T=t_{b}-t_{a}\) is divided into \(n+1\) subintervals. The subscript \(j\) defines the discretization of the time lattice into \(N\) discrete intervals, and the subscript \(k\) denotes the order of perturbation. Thus, we discretize time, and calculate the path integrals, for each order of perturbation theory. As such, the lattice unperturbed action at each order of perturbation theory is given as \[S^{(0;N)}[\mathbf{r}_{1},\ldots,\mathbf{r}_{N-1}](\mathbf{r}_{b}, \mathbf{r}_{a},t_{b},t_{a}) \tag{34}\] \[\qquad=\sum_{k=0}^{n}S^{(0;N)}[\mathbf{r}_{1},\ldots,\mathbf{r}_{ N-1}](\mathbf{r}_{(k+1)},\mathbf{r}_{(k)};u_{(k+1)},u_{(k)})\,.\] We can define the time-lattice division as per the following equation: \[T^{N}(t_{b},t_{a})= \tag{35}\] \[\qquad\{T^{(N_{0})}(t_{a},u_{(k)});T^{(N_{1})}(u_{1},u_{2}); \ldots;T^{(N_{n})}(u_{n},t_{b})\}\,,\] where \[T^{(N_{k})}(u_{(k+1)},u_{(k)}) \tag{36}\] \[=(u_{k,j_{0}}\equiv u_{(k)},u_{k,j_{1}},\ldots,u_{k,j_{N_{k-1}}}u _{k,j_{N_{k}}}\equiv u_{(k+1)})\,.\] the entire interval has been divided into \(n+1\) subintervals and each of these subintervals, \([u_{(k+1)},u_{(k)}]\) have been discretized into the usual \(N_{k}\) parts, at each order of perturbation theory. Owing to this time-lattice divisions, the integration measure for the path integrals can be given in terms of a partial measure for each order of perturbation \[\int_{\mathbf{r}_{a}}^{\mathbf{r}_{b}}\mathcal{D}^{N}\mathbf{r}(\mathbf{u})= \prod_{k=1}^{n}\left[\int d\mathbf{r}_{k}\right]\prod_{i=0}^{n}\left[\int_{ \mathbf{r}_{i}}^{\mathbf{r}_{i+1}}\mathcal{D}^{(N_{i})}\mathbf{r}(u)\right]\,. \tag{37}\] Knowing that the Feynman path integral kernel has the form \[K(\mathbf{r}_{b},\mathbf{r}_{a};t_{b},t_{a}) \tag{38}\] \[=\lim_{N\to\infty}\int_{\mathbf{r}_{a}}^{\mathbf{r}_{b}}\mathcal{ D}\mathbf{r}(t)\exp\left\{\frac{i}{\hbar}S[\mathbf{r}(t)](\mathbf{r}_{b}, \mathbf{r}_{a},t_{b},t_{a})\right\}\,,\] with the integration measure of Eq. (37) and with the help of Eq. (30) and Eq. (33), the Feynman kernel can be rewritten as \[K^{(n)}(\mathbf{r}_{b},\mathbf{r}_{a};t_{b},t_{a})=\int_{t_{a}} ^{t_{b}}du_{n}\int_{t_{a}}^{u_{n}}du_{n-1}\cdots\int_{t_{a}}^{u_{2}}du_{1}\] \[\qquad\qquad\qquad\times\prod_{k=1}^{n}\left[\int d\mathbf{r}_{k} \frac{\Delta V(\mathbf{r}_{k},u_{k})}{i\hbar}\right]\] \[\qquad\qquad\qquad\times\prod_{i=0}^{n}\left[K^{(0)}(\mathbf{r}_{ i+1},\mathbf{r}_{i};u_{i+1},u_{i})\right] \tag{39}\] where the kernel \(K^{(0)}\) corresponds to the unperturbed action \(S^{(N,0)}[\mathbf{r}(u)]\), defined in the limit \(N\to\infty\). In case of systems with a time-independent perturbing potential, the Feynman kernel in Eq. 
(39) can be expressed as \[K^{(n)}(\mathbf{r}_{b},\mathbf{r}_{a};T)=\prod_{k=1}^{n}\left[ \int d\mathbf{r}_{k}\frac{\Delta V(\mathbf{r}_{k})}{i\hbar}\right] \tag{40}\] \[\qquad\qquad\times\prod_{i=0}^{n}\left[\int_{0}^{\infty}dT_{i}\, K^{(0)}(\mathbf{r}_{i+1},\mathbf{r}_{i};T_{i})\right]\delta\left(T-\sum_{ \gamma=0}^{n}T_{\gamma}\right)\,.\] Correspondingly, the \(n\)-th order energy-dependent Green's function is obtained by taking the Fourier transform of the propagator in the above equation with respect to time, yielding \[G^{(n)}(\mathbf{r}_{b},\mathbf{r}_{a};E) \tag{41}\] \[\qquad=\prod_{k=1}^{n}\left[\int_{0}^{\infty}d\mathbf{r}_{k}\, \Delta V(\mathbf{r}_{k})\right]\prod_{i=1}^{n}\left[G^{(0)}(\mathbf{r}_{i+1}, \mathbf{r}_{i};E)\right]\,.\] ## IV Uehling contribution to bound-state energy shifts The infinite summation of a perturbation expansion using path integrals developed in the last Section is now implemented to study the correction to the binding energy of an electron due to the perturbing Uehling potential. As in the Coulomb case, we start from the Dirac equation with a given potential \(V(\mathbf{r})\). The time-independent Dirac equation is customarily expressed, in natural units (\(c=1\), \(\hbar=1\)), as \[(E-\mathbf{\alpha}\cdot\hat{\mathbf{p}}-\beta m-V(\mathbf{r}))\Psi=0\,, \tag{42}\] Figure 3: Time-lattice division for first-order perturbation. Figure 2: Time-lattice division for a given path. The lattice is divided into \(N+1\) equidistant points, with the width given by \(t_{j}\). where \(\Psi\) is the bispinor wave function of the Dirac particle, \(m\) is the mass, \(\mathbf{\alpha}\) and \(\beta\) are the usual \(4\times 4\) Dirac matrices, and \(E\) is the energy. In order to calculate the contribution of the Uehling correction to the binding energy, we treat the Uehling potential defined in Eq. (28) as the perturbation. As in Eq. (30), the lattice-perturbed action is given as a sum of the unperturbed action and the action due to the perturbing potential. For our case, i.e., in the Furry picture, we consider the action due to the nuclear Coulomb potential as the unperturbed action \(S^{(0)}\)[48; 49]. The unperturbed action of the bound electron is \[S^{(0)}(t_{j})=\frac{m(\Delta r_{j})^{2}}{2t_{j}}-\frac{\lambda(\lambda+1)t_{ j}}{2mr_{j}r_{j-1}}+\frac{Ze^{2}Et_{j}}{mr_{j}}+\frac{(E^{2}-m^{2})t_{j}}{2m}\,, \tag{43}\] where, \(\lambda\equiv|\gamma|+\frac{1}{2}(\text{sign}\,\gamma-1)\), \(\gamma\) being the eigenvalue of the Martin-Glauber operator [50], which is defined as \(\mathcal{L}=-(\beta K+iZe^{2}\alpha_{r})\), with \(K\) being the Dirac operator and \(\alpha_{r}=\mathbf{\alpha}\cdot\mathbf{r}/r\)[49]. Coupled with the perturbing potential this yields the lattice-perturbed action. From Eq. (30) we obtain \[S[\mathbf{r}(t)](\mathbf{r}_{b},\mathbf{r}_{a},t_{b},t_{a})= \frac{m(\Delta r_{j})^{2}}{2t_{j}}-\frac{\lambda(\lambda+1)t_{j}}{2mr_{j}r_{j- 1}}+\frac{Ze^{2}Et_{j}}{mr_{j}}\] \[+\frac{(E^{2}-m^{2})t_{j}}{2m}-\int_{t_{a}}^{t_{b}}\,\Delta V( \mathbf{r}(t))\,dt\,, \tag{44}\] where \[\Delta V(\mathbf{r})=U_{\rm VP}^{\rm Veh} \tag{45}\] is the perturbing Uehling potential. The corresponding Feynman kernel can be derived using Eq. (40) and eventually the energy-dependent Green's function is given by Eq. (41). 
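Eq. (41) has the structure of a Dyson-type series: each order inserts one more factor of the perturbing potential between unperturbed Green's functions, and the full Green's function is the sum over orders. A minimal finite-dimensional sketch of this structure (a 1D grid Hamiltonian standing in for the Dirac-Coulomb problem; the grid, potentials, and probe energy are illustrative assumptions) is:

```python
import numpy as np

# Finite-dimensional analogue of Eq. (41): an unperturbed resolvent G0 = (E - H0)^{-1}
# and a diagonal perturbation dV on a 1D grid.
n, L = 400, 40.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
kin = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * dx**2)
H0 = kin + np.diag(-1.0 / np.sqrt(x**2 + 1.0))   # unperturbed ("Coulomb-like") problem
dV = np.diag(0.02 * np.exp(-(x**2)))              # perturbing potential Delta V (illustrative)

E = -0.8 + 0.1j                                   # probe energy, kept off the real axis
G0 = np.linalg.inv(E * np.eye(n) - H0)            # G^(0)

# Perturbation series: the n-th order term is G0 (dV G0)^n, mirroring the alternating
# "unperturbed propagator / insertion of Delta V" structure of Eq. (41).
G_series, term = np.zeros_like(G0), G0.copy()
for _ in range(15):
    G_series += term
    term = term @ dV @ G0

G_exact = np.linalg.inv(E * np.eye(n) - H0 - dV)  # resolvent of the perturbed problem
print(np.max(np.abs(G_series - G_exact)))         # small once the series has converged
```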
Thus, the \(n\)-th order Green's function is obtained in terms of the Dirac-Coulomb propagator, which has been determined using the path integral formalism [49], as \[G^{(n)}(\mathbf{r}_{b},\mathbf{r}_{a};E)\] \[=\prod_{k=1}^{n}\left[\int_{0}^{\infty}d\mathbf{r}_{k}\,\Delta V (\mathbf{r}_{k})\right]\prod_{i=0}^{n}\left[G^{(0)}(\mathbf{r}_{i+1},\mathbf{ r}_{i};E)\right]\] \[=\prod_{k=1}^{n}\Biggl{[}\int_{0}^{\infty}d\mathbf{r}_{k}\left\{- \frac{2\alpha Ze}{3\pi r_{k}}\Biggl{[}\left(1+\frac{r_{k}^{2}}{3\alpha^{2}} \right)K_{0}-\frac{r_{k}}{6\alpha}Ki_{1}\right.\right.\] \[\left.\left.\left.-\left(\frac{5}{6}+\frac{r_{k}^{2}}{3\alpha^{2} }\right)Ki_{2}\right]\right\}\Biggr{]}\] \[\times\prod_{i=0}^{n}\Biggl{[}\sum_{j,\kappa}\frac{\Gamma(p+ \lambda+1)}{2\iota kr_{1}r_{2}\Gamma(2\lambda+2)}W_{-p,\lambda+1/2}(-2\iota kr _{1})\] \[\times\Biggl{\{}\left[m-\frac{\kappa E}{\gamma}\right]M_{-p, \lambda+1/2}(-2\iota kr_{2})\Omega_{\kappa,\kappa}^{j}(\theta_{2}\phi_{2}| \theta_{1}\phi_{1})\beta^{2}\] \[-k\,\tilde{\gamma}M_{-p,\tilde{\lambda}+1/2}(-2\iota kr_{2}) \Omega_{\kappa,-\kappa}^{j}(\theta_{2}\phi_{2}|\theta_{1}\phi_{1})\beta^{2} \gamma_{5}\Biggr{\}}\Biggr{]} \tag{46}\] The energy-dependent Green's function is then \[G(\mathbf{r}_{b},\mathbf{r}_{a};E)=\sum_{n=0}^{\infty}G^{(n)}(\mathbf{r}_{b}, \mathbf{r}_{a};E)\,. \tag{47}\] Considering up to second order, the Green's function in Eq. (47) can be written as \[G(\mathbf{r}_{b},\mathbf{r}_{a};E) =G^{(0)}(\mathbf{r}_{b},\mathbf{r}_{a};E) \tag{48}\] \[+G^{(1)}(\mathbf{r}_{b},\mathbf{r}_{a};E)+G^{(2)}(\mathbf{r}_{b}, \mathbf{r}_{a};E)\,,\] where \[G^{(1)}(\mathbf{r}_{b},\mathbf{r}_{a};E)=\left[\int_{0}^{\infty}d\mathbf{r}_{1 }\,\Delta V(\mathbf{r}_{1})\right]\prod_{i=0}^{1}\left[G^{(0)}(\mathbf{r}_{i+ 1},\mathbf{r}_{i};E)\right]\,, \tag{49}\] and \[G^{(2)}(\mathbf{r}_{b},\mathbf{r}_{a};E) =\prod_{k=1}^{2}\left[\int_{0}^{\infty}d\mathbf{r}_{k}\,\Delta V( \mathbf{r}_{k})\right]\] \[\times\prod_{i=0}^{2}\left[G^{(0)}(\mathbf{r}_{i+1},\mathbf{r}_{ i};E)\right]\,.\] In analogy to the H-atom problem [49], the discrete energy spectrum can be obtained from the poles of the spectral function, which is defined as \[G(E)=\int G(\mathbf{r},\mathbf{r};E)\,d\mathbf{r} \tag{51}\] with the Green's function [e.g. in Eq. 47] in coordinate representation. The form of the Green's function in Eq. (46) does not allow a direct evaluation, therefore, we apply the basis-state decomposition of the spectral function in terms of the perturbed states \(\phi_{n}\) and the corresponding eigenenergies \(E_{n}\), given as \[G(E)=\int\sum_{n}\frac{\phi_{n}(\mathbf{r})\phi_{n}^{\dagger}(\mathbf{r})}{E-E _{n}(1-i0^{+})}\,d\mathbf{r}\,. \tag{52}\] In analogy to Refs. [51; 52; 53; 54], we introduce the spectral function projected to a single reference state \(\left|a\right\rangle\), \(G_{a}(E)=\left\langle a\right|G\left|a\right\rangle\). This function \(G(E)\) possesses a pole at the eigenenergy \(E_{a}\) of the reference state: \[G_{a}(E)\approx\frac{C_{a}}{E-E_{a}}\,, \tag{53}\] where the constant \(C_{a}\) is the residue term. The energies are now determined using complex contour integration by considering a small contour \(\Gamma\) which encircles an isolated pole at the perturbed bound-state energy \(E_{a}\). From the formulas \[\frac{1}{2\pi i}\oint_{\Gamma}dE\,E\,G_{a}(E)=E_{a}C_{a}\,, \tag{54}\] and \[\frac{1}{2\pi i}\oint_{\Gamma}dE\,G_{a}(E)=C_{a}\,. 
\tag{55}\] one can extract the level energy as \[E_{a}=\frac{\frac{1}{2\pi i}\oint_{\Gamma}dE\,E\,G_{a}(E)}{\frac{1}{2\pi i}\oint_{ \Gamma}dE\,G_{a}(E)}\,. \tag{56}\] It is more practical to directly adopt the expression for the perturbative energy shift \[\Delta E_{a}=\frac{\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta E\,\Delta G_{a}(E)} {1+\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta G_{a}(E)}\,, \tag{57}\] where \[\Delta E_{a} =\Delta E_{a}^{(1)}+\Delta E_{a}^{(2)}+\ldots\,,\] \[\Delta G_{a} =\Delta G_{a}^{(1)}+\Delta G_{a}^{(2)}+\ldots\,,\] are expansions in terms of the perturbing potential. We expand Eq. (57) in a geometric series and obtain, to first order, \[\Delta E_{a}^{(1)}=\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta E\,\Delta G_{a}^{( 1)}(E)\,. \tag{58}\] Thus the perturbative change of the spectral function is \[\Delta G_{a}^{(1)}(E)\sim\frac{\left\langle a\right|V\left|a\right\rangle}{(E- E_{a})^{2}}\,, \tag{59}\] and from Eq. (58), one obtains the perturbative energy shift \[\Delta E_{a}^{(1)}=\left\langle a\right|V\left|a\right\rangle\,, \tag{60}\] with \(\Delta E_{a}=E-E_{a}^{(0)}\). The second-order energy shift can be obtained from the next terms of the geometric expansion of Eq. (57) as \[\Delta E_{a}^{(2)}=\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta E\, \Delta G_{a}^{(2)}(E) \tag{61}\] \[-\left(\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta E\,\Delta G_{a}^{ (1)}(E)\right)\left(\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta G_{a}^{(1)}(E) \right)\,.\] The first term in Eq. (61) is the irreducible (non-degenerate) part and the second term is the reducible (degenerate) part. For the irreducible part, we obtain \[\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta E\,\Delta G_{a}^{(2)}(E)=\] \[\sum_{i\neq a}\frac{\left\langle a\right|U_{\rm VP}^{\rm Ueh} \left|i\right\rangle\left\langle i\right|U_{\rm VP}^{\rm Ueh}\left|a\right\rangle }{E_{a}-E_{i}}\,, \tag{62}\] and the reducible part is given as \[\left(\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta E\,\Delta G_{a}^{( 1)}(E)\right)\left(\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta G_{a}^{(1)}(E)\right)\] \[\qquad=\Delta E_{a}^{(1)}\left(\frac{1}{2\pi i}\oint_{\Gamma}dE \,\frac{\left\langle a\right|U_{\rm VP}^{\rm Ueh}\left|a\right\rangle}{(E-E_ {a})^{2}}\right)\] \[\qquad=\left\langle a\right|U_{\rm VP}^{\rm Ueh}\left|a\right\rangle \left\langle a\right|(dU_{\rm VP}^{\rm Ueh}/dE)E_{E=E_{a}}\left|a\right\rangle\,. \tag{63}\] The Uehling potential, as defined in Eq. (45), is independent of the bound-state energy and as such the derivative term in Eq. (63) vanishes, and the second-order correction is then given by the usual Rayleigh-Schrodinger perturbative expression \[\Delta E_{a}^{(2)}=\sum_{i\neq a}\frac{\left\langle a\right|U_{\rm VP}^{\rm Ueh }\left|i\right\rangle\left\langle i\right|U_{\rm VP}^{\rm Ueh}\left|a\right\rangle }{E_{a}-E_{i}}\,, \tag{64}\] in agreement with Ref. [55]. ## V Numerical results To confirm the validity of our approach, in Table V, we present numerical results of first-order perturbative Uehling shifts [Eq. (60)] for heavy hydrogenlike ions and certain alkali ions, and compare them earlier evaluations of this correction (see e.g. Ref. [56]). A good agreement can be stated. We assume for simplicity a point-like nuclear charge distribution. The results in Table V reiterate the well-known fact that with the increase in the atomic number, the VP effects become more and more pronounced; VP effects scale as \(\sim\left(Z\alpha\right)^{4}\) to leading order. 
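The pole-extraction formulas (54)-(56) and the perturbative shifts (60) and (64) can be checked against each other on a small matrix model: project the resolvent of a perturbed Hermitian matrix onto an unperturbed eigenvector, integrate \(E\,G_{a}(E)\) and \(G_{a}(E)\) around the shifted pole, and compare the ratio with exact diagonalization and with first- plus second-order Rayleigh-Schrodinger theory. The matrix, the perturbation strength, and the contour radius below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
H0 = np.diag(np.arange(1.0, n + 1.0))        # unperturbed levels 1, 2, ..., 6
U = 0.05 * rng.standard_normal((n, n))
U = (U + U.T) / 2.0                           # Hermitian perturbation, standing in for U_VP^Ueh
H = H0 + U

a = 0                                         # reference state |a>: the lowest unperturbed level
E0a = H0[a, a]

def G_a(E):
    """Projected resolvent <a|(E - H)^{-1}|a>, the analogue of G_a(E) in Eq. (53)."""
    return np.linalg.inv(E * np.eye(n) - H)[a, a]

# Contour integrals of Eqs. (54)-(56) around the single pole enclosed near E0a.
npts, radius = 4000, 0.2
theta = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
Es = E0a + radius * np.exp(1j * theta)
dE = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / npts)   # dE along the circle
G_vals = np.array([G_a(E) for E in Es])
E_a = (np.sum(Es * G_vals * dE) / np.sum(G_vals * dE)).real    # Eq. (56)

# Cross-checks: exact diagonalization, and Eqs. (60) + (64).
dE1 = U[a, a]
dE2 = sum(abs(U[a, i]) ** 2 / (E0a - H0[i, i]) for i in range(n) if i != a)
print(E_a, np.linalg.eigvalsh(H)[0], E0a + dE1 + dE2)
```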
For most elements and charge states, the VP shift is experimentally discernible with modern Penning-trap mass spectrometric methods with experimental uncertainties on the 1-eV level or below [14; 15; 16; 17]. We thus identify ions which allow, for the first time, a test of QED via measuring the electronic binding energy of a valence electron by determining the mass difference of two ions, one with a single valence electron and one without. The values in the Table also show that it is not only H-like ions in the \(1s\) ground state which feature VP effects well observable by these experimental methods, but also excited states possess observable radiative shifts. E.g. the VP shift given for the \(2s\) state approximates the VP correction to the (negative of the) binding energy of a Li-like ion, which can be spectromrically determined by measuring the mass difference of the Li- and the He-like ions in their ground states. In our approach presented here, we fully neglect many-electron (screening) effects, which, for the high-\(Z\) ions presented here, is a justified first approximation, as they contribute with a relative order of \(\frac{1}{Z}\). Our path integral formalism however may be extended in future to many-electron systems, by allowing the exchange of photons between different electrons. Similarly, the VP results given for the \(3s\) state approximate the radiative shift of the binding energy of the valence electron in the Na-like charge state, and the values given for the \(2p_{1/2}\) and \(2p_{3/2}\) orbitals give the first approximation for B- and N-like ions, respectively. Ions in these charge states are easier to produce experimentally, and Na- and B-like very heavy species feature observable VP contributions, enabling a proof-of-the-principle demonstration of QED tests via mass measurements. ## VI Summary We have calculated the VP correction to energy levels of highly charged ions employing a path integral formalism. First, the evaluation of the leptonic loop correction to the photon propagator by means of the Dyson-Schwinger equation is summarized. The effective potential describing the correction to the nuclear potential in the lowest order in \(Z\alpha\), i.e. the Uehling potential, is used in an analytical form given by Frolov and Wardlow [45]. The contribution to the energy level of a hydrogenlike ion due to the perturbing Uehling potential is derived from the perturbed action using a relativistic path integral method. We show that the VP level shifts - or any energy shift induced by a local potential - can be extracted from the poles of the Green's function. The VP correction is given numerically for a range of heavy ions, concluding that in sufficiently highly charged ions, the VP effect is observable with state-of-the-art mass spectrometric methods. ## Acknowledgements Supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 273811115 - SFB 1225.
2309.12427
Crossing singularities in the saddle point approximation
We describe a new phenomenon in the study of the real-time path integral, where complex classical paths hit singularities of the potential and need to be analytically continued beyond the space for which they solve the boundary value problem. We show that the behavior is universal and central to the problem of quantum tunneling. These analytically continued complex classical paths enrich the study of real-time Feynman path integrals.
Job Feldbrugge, Dylan L. Jow, Ue-Li Pen
2023-09-21T18:50:12Z
http://arxiv.org/abs/2309.12427v1
# Crossing singularities in the saddle point approximation ###### Abstract We describe a new phenomenon in the study of the real-time path integral, where complex classical paths hit singularities of the potential and need to be analytically continued beyond the space for which they solve the boundary value problem. We show that the behavior is universal and central to the problem of quantum tunneling. These analytically continued complex classical paths enrich the study of real-time Feynman path integrals. **Introduction:** In the sum over histories formulation of quantum physics, evolution arises through the constructive interference of histories around classical paths. Classically allowed transitions are governed by real classical paths. Classically forbidden transitions - such as quantum tunneling - are often associated with complex solutions of the equations of motion interpolating between boundary conditions. Indeed, complex classical paths - sometimes known as instantons - are frequently studied in quantum mechanics, quantum field theory, and theories of quantum gravity. However, it is often an open question whether a complex classical solution is relevant to the real-time Feynman path integral and contributes to the corresponding amplitude. As we show in this letter, the situation is even more intricate, as the main contribution to the path integral may come from analytically continued complex classical paths that do not solve the classical boundary value problem. In this letter, we demonstrate this phenomenon for several quantum systems and discuss its various implications. We consider tunneling of a wave packet through a barrier centered at \(x=0\). This can be considered an initial value problem of the time-dependent Schrodinger equation, with an initial Gaussian, minimum-uncertainty wave packet on the left with positive momentum. For high-momentum wave packets, most of the probability passes over the barrier, similar to the classical equation of motion. For small momenta, with energies less than the height of the barrier, most of the wave packet reflects back, and a small amount tunnels. The time-dependent Schrodinger equation, \[i\hbar\frac{\partial\psi_{t}(x)}{\partial t}=\hat{H}\psi_{t}(x), \tag{1}\] is solved by the propagator \[\psi_{T}(x_{1}) =\int\psi_{0}(x_{0})G[x_{0},x_{1};T]\mathrm{d}x_{0}\,, \tag{2}\] \[G[x_{0},x_{1};T] =\int_{x(0)=x_{0}}^{x(T)=x_{1}}e^{iS[x]/\hbar}\mathcal{D}x\,. \tag{3}\] The second equality (3) is the Feynman path integral formulation, which for small \(\hbar\) is solved by stationary points of the action \(\delta S/\delta x=0\) resulting in Euler-Lagrange (EL) equations of motion \[m\ddot{x}=-V^{\prime}(x)\,, \tag{4}\] with the mass \(m\), the Heaviside theta function \(\Theta_{H}\) and the dots denoting time derivatives, satisfying boundary conditions \(x(0)=x_{0},\ x(T)=x_{1}\)[1; 2; 3]. In the Picard-Lefshetz (PL) picture, one deforms the real space path integral (3) contour into the sum of complex path segments (thimbles), each of which is non-oscillatory [4; 5]. For small \(\hbar\), each thimble is dominated by its saddle point. This saddle may be complex, even though the original integral was defined over real paths. It is tempting to consider complex solutions (saddles) of (4) satisfying the real boundary conditions for complex energies and evaluate the propagator (3) using the corresponding action [6]. Two technical complications arise: 1. 
while PL saddles are solutions to the EL equations, the converse is generally not true, and 2., no constructive example has been found where the propagator from the Schrodinger equation agrees with the complex saddle. In this paper we revisit this problem, systematically examining the real and complex paths for fixed \(x_{0}\) while varying \(x_{1}\). The Rosen-Morse barrier (see below) has a particularly simple exact solution (saddle) set, and corresponding exact actions at those saddles. This allows us to examine the propagator under these continuous deformations. Several conceptual issues arise: 1. complex saddles depend on variable rescalings, and 2. the evaluation of the action at a complex saddle gives a different value from the analytic continuation of the action along the deformation path of (real) \(x_{1}\). We find these are reconciled due to branch cuts and singularity crossings for the saddles. This Letter addresses this novel approach of treating branch cuts in the action, explicated by the exact Rosen-Morse barrier solution, but generalizable to generic potentials. The analysis is simplified using the propagator in Energy \(E\), conjugate to time \(T\). **The Rosen-Morse barrier:** The symmetric Rosen-Morse [7] or modified Poschl-Teller system [8], describing the evolution of a non-relativistic particle in the potential \[V(x)=V_{0}\operatorname{sech}^{2}(x)\,, \tag{5}\] is one of the few fully solvable non-linear quantum systems. For simplicity, we will restrict ourselves to the barrier problem with \(V_{0}\geq\hbar^{2}/(8m)\). The real-time energy propagator of a non-relativistic particle with mass \(m\) propagating form \(x_{0}\) to \(x_{1}\) at energy \(E\) is solved in closed form [9], \[K[x_{1},x_{0};E] =\int_{0}^{\infty}\int_{x(0)=x_{0}}^{x(1)=x_{1}}e^{i(S[x]+ET)/\hbar }\mathcal{D}x\,\mathrm{d}T \tag{6}\] \[=-\frac{im\Gamma(-ik_{E}-N)\Gamma(-ik_{E}+N+1)}{\hbar}\] \[\quad\times P_{N}^{ik_{E}}(\tanh x_{>})P_{N}^{ik_{E}}(-\tanh x_{< })\,, \tag{7}\] where the path integral ranges over the continuous paths interpolating between \(x(0)=x_{0}\) and \(x(1)=x_{1}\) parametrized by \(\lambda\in[0,1]\), and the action \[S[x]=\int_{0}^{1}\left[\frac{mx^{\prime}(\lambda)^{2}}{2T}-TV(x(\lambda)) \right]\mathrm{d}\lambda\,, \tag{8}\] with the propagation time \(T\), and the prime denoting derivatives with respect to the parameter \(\lambda\). The closed-form solution includes the associated Legendre function \(P_{\lambda}^{\mu}(x)\) with the parameter \(N=-\frac{1}{2}+\frac{i}{2\hbar}\sqrt{8mV_{0}-\hbar^{2}}\,,\) the minimum \(x_{<}\) and the maximum \(x_{>}\) of the boundary conditions \((x_{0},x_{1})\), and dimensionless momentum \(k_{E}=\sqrt{2mE}/\hbar\). The Legendre function with degree \(\mathrm{Re}[\lambda]=-1/2\) is known as the conical or Mehler function. The associated classical system is given by the Euler-Lagrange and Hamilton-Jacobi equations \[\frac{\delta S}{\delta x}=0\,,\quad E+\frac{\partial S}{\partial T}=0\,, \tag{9}\] which for the Rosen-Morse system assumes the form \[\frac{mx^{\prime\prime}}{T^{2}}=\frac{2V_{0}\tanh x}{\cosh^{2}x}\,,\quad E= \frac{mx^{\prime}(0)^{2}}{2T^{2}}+\frac{V_{0}}{\cosh^{2}x_{0}}\,. 
\tag{10}\] The Euler-Lagrange equation is solved by \[\sinh x_{C}(\lambda)=c\sqrt{\frac{E-V_{0}}{E}}\sinh\left[\sqrt{\frac{2E}{m}}(T\lambda-C)\right]\,, \tag{11}\] with the sign \(c=\pm 1\) and the shift parameter \(C\), with the associated classical action \[S_{C}=ET-\sqrt{2mV_{0}}\] \[\quad\times\tanh^{-1}\left[\sqrt{\frac{V_{0}}{E}}\tanh\left[\sqrt{\frac{2E}{m}}(T\lambda-C)\right]\right]\Bigg{|}_{0}^{1}. \tag{12}\] We solve the initial value problem, where the Hamilton-Jacobi equation yields the initial velocity \[v_{0}=\pm\sqrt{\frac{2}{m}\left[E-V_{0}\operatorname{sech}^{2}\!x(0)\right]}\,T\,, \tag{13}\] for complex \(T\) to find the solutions to the boundary value problem associated with the energy propagator. Candidate classical solutions lie on the curves for which the final position \(x(\lambda=1)\) is real (see the green curves in the left panel of fig. 1). When the energy is below the top of the barrier (\(E<V_{0}\)) and the initial and final positions lie in the same classically allowed region (either \(x_{0},x_{1}\leq-x_{c}\) or \(x_{c}\leq x_{0},x_{1}\), with the turning point \(x_{c}=\cosh^{-1}\sqrt{V_{0}/E}\)), there exist two real classical solutions to the boundary value problem. The red and blue points in fig. 1 represent the direct and bouncing solutions of the boundary value problem (see the left panel of fig. 2). When keeping the initial position fixed in the left region and letting the final position \(x_{1}\) approach the turning point \(x_{c}\), the two real classical paths coalesce. For larger final positions, no real classical solution exists, as the real paths have formed a conjugate pair of complex classical paths (see the brown and orange points in the left panel of fig. 1). According to Picard-Lefschetz theory, the complex classical path for which the exponent of the path integral \(i(S+ET)\) has a negative real part (the orange path) remains relevant after this transition. The other complex path (marked by the brown points) is irrelevant to the path integral. As we increase the final position further, the complex classical path moves further into the complex plane along the vertical green line, until it collides with a discontinuity in the final position in the complex \(T\)-plane. For larger \(x_{1}\), the solution to the boundary value problem ceases to exist. This dramatic transition occurs as the complex classical path intersects one of the singularities of the analytic continuation of the potential at \(i\pi(\frac{1}{2}+n)\) with \(n\in\mathbb{Z}\) (see the right panel of fig. 2). At this _singularity crossing_, the naive saddle point approximation of the path integral [6], \[K\sim\sum_{(x_{C},T_{C})}\sqrt{-\frac{\partial^{2}S_{C}/\partial x_{0}\partial x_{1}}{\partial^{2}S_{C}/\partial T^{2}}}e^{i(S_{C}+ET_{C})/\hbar}\,, \tag{14}\] ranging over the relevant classical paths, fails to approximate the tunneling behavior of the energy propagator. To continue the saddle point approximation through this singularity crossing, we interpret the discontinuity (see the left panel of fig. 1) as a branch cut and analytically continue the saddle point beyond the space of classical paths satisfying the boundary conditions. Figure 1: The final position of the classical paths starting at \(x_{0}=-5\) for an energy \(E=0.9\), mass \(m=1\), and a barrier strength \(V_{0}=1\) in the complex \(T\) plane. The green lines indicate times that result in a real final position. Figure 2: The classical paths of the energy propagator at \(E=0.9\) for a particle starting at \(x_{0}=-5\) with mass \(m=1\) in a barrier with strength \(V_{0}=1\). _Left:_ the real classical paths for the final positions \(x_{1}=-2,-1,-0.5,-x_{c}\). _Right:_ the complex classical paths for the final positions \(x_{1}=-0.3,-0.15,0,0.15,0.2\) and the analytically continued paths for \(x_{1}=0.21,0.3,1,2,3,4,5\). The orange and brown paths are conjugate pairs. The black dot represents the turning point, the background is the absolute value squared of the potential, and the green points are the singularities of the potential. For the Rosen-Morse system, this analytical continuation is most easily implemented by working with the hyperbolic sine of the final position \(\sinh x_{1}\) (see the right panel of fig. 1). The hyperbolic sine function unveils the analytically continued saddle point and highlights the occurrence of a complex caustic, where the horizontal and vertical green lines intersect away from the real line, corresponding to \(x_{1}\) moving into the classically allowed region on the right. As far as we are aware, this is the first example of a system with a complex caustic, where complex saddle points coalesce. After the complex caustic, the relevant saddle point moves to the left along the horizontal green line. Beyond the singularity crossing, the complex \(T\) only serves as a label of the analytically continued saddle point, as the corresponding initial value problem does not terminate at the boundary condition \(x(1)=x_{1}\). Instead, the solution to the equation of motion approaches \(x(1)=-x_{1}-i\pi\) (see the right panel of fig. 2), as \(\sinh(-x_{1}-i\pi)=\sinh(x_{1})\). Note that when approximating the energy propagator for the configuration \(x_{1}=-x_{0}\) for small \(x_{0}\), the solution to the initial value problem linearly interpolates between \(x_{0}\) and \(x_{0}-i\pi\). To describe the tunneling behavior, the classical action also needs to be analytically continued at the singularity crossing. In the classical action (12), the singularity crossing corresponds to a branch cut of the arctanh function, yielding a correction \(i\sqrt{2mV_{0}}\pi\) to the classical action of the solution of the initial value problem. We compare the exact energy propagator for a particle below the top of the barrier in fig. 3. In the classically allowed region, we observe an interference pattern that can be understood in terms of the direct and bouncing real classical paths. At the turning point, \(x_{1}=x_{c}\), the saddle point approximation diverges in a fold caustic. For larger \(x_{1}\) the real saddle point approximation vanishes. The complex saddles reasonably approximate the exact propagator until the singularity crossing, where the green line drops to zero. After the singularity crossing, the saddle point approximation with the analytically continued classical path diverges in a second complex caustic when \(x_{1}\) enters the classically allowed region, after which the approximation approaches the non-zero exact propagator (this value is proportional to the tunneling amplitude). From this analysis, it is clear that the singularity crossing and the corresponding analytic continuation are central to quantum tunneling in the real-time Feynman path integral. The same can be said for quantum reflections when the energy of the particle exceeds the potential strength, \(E>V_{0}\). Indeed, as we show in [10], the WKB tunneling rate is only recovered when including the correction \(i\sqrt{2mV_{0}}\pi\) in the saddle point approximation. 
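The shooting construction described above — fix \(x_{0}\), take the initial velocity from the Hamilton-Jacobi constraint (13), integrate the equation of motion (10) over \(\lambda\in[0,1]\) for complex \(T\), and look for the loci where \(\operatorname{Im}x(1)\) vanishes — can be sketched numerically as follows. The fixed-step RK4 integrator, the scan window in the complex \(T\) plane, and the resolution are our own illustrative choices, not the paper's settings.

```python
import numpy as np

m, V0, E, x0 = 1.0, 1.0, 0.9, -5.0

def force(x):
    # Right-hand side of Eq. (10) divided by m: (2 V0 / m) tanh(x) / cosh(x)^2, continued to complex x.
    return 2.0 * V0 * np.tanh(x) / np.cosh(x) ** 2 / m

def final_position(T, sign=+1, steps=600):
    """RK4 shooting of x'' = T^2 force(x) in lambda in [0,1], with x'(0) from Eq. (13)."""
    v = sign * np.sqrt(2.0 / m * (E - V0 / np.cosh(x0) ** 2 + 0j)) * T
    x, h = complex(x0), 1.0 / steps
    for _ in range(steps):
        k1x, k1v = v, T**2 * force(x)
        k2x, k2v = v + 0.5 * h * k1v, T**2 * force(x + 0.5 * h * k1x)
        k3x, k3v = v + 0.5 * h * k2v, T**2 * force(x + 0.5 * h * k2x)
        k4x, k4v = v + h * k3v, T**2 * force(x + h * k3x)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return x

# Scan a patch of the complex T plane and record Im x(1); its zero set traces the curves
# on which real boundary conditions x(1) = x1 can be met (the "green lines" of fig. 1).
re_T = np.linspace(1.0, 12.0, 40)
im_T = np.linspace(-3.0, 3.0, 28)
im_x1 = np.array([[final_position(tr + 1j * ti).imag for tr in re_T] for ti in im_T])
print(im_x1.shape)   # contour im_x1 == 0 to visualize the candidate classical solutions
```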
Note that the saddle point approximation and the exact result do not coincide as we are working at finite \(\hbar\) and with an energy close to the top of the barrier. The saddle point approximation converges to the exact propagator away from the caustics for lower energies and reduced Planck constants, however, these changes will also suppress the tunneling amplitude and energy propagator for large \(x_{1}\). **The Gaussian barrier:** The path integral of the Rosen-Morse potential exhibits a singularity crossing when a relevant complex classical path intersects a singularity of the analytically continued action. Similar behavior can be observed for rational potentials, such as the Lorentzian potential \(V(x)=V_{0}/(1+x^{2})\) with singularities at \(x=\pm i\). However, singularity crossings are not restricted to real-time path integrals of theories whose analytically continued potential has singularities at finite distances in the complex plane. To illustrate this, consider a particle interacting with a Gaussian barrier \(V(x)=V_{0}\exp(-x^{2})\) for \(V_{0}>0\), which has an essential singularity at complex infinity. Let's consider the real-time path integral \[G[x_{1},x_{0};T]=\int_{x(0)=0}^{x(1)=x_{1}}e^{iS[x]/\hbar}\mathcal{D}x\,, \tag{15}\] with the associated Euler-Lagrange equation \[\frac{\delta S}{\delta x}=0:\quad mx^{\prime\prime}=2V_{0}T^{2}xe^{-x^{2}}\,, \tag{16}\] with the boundary conditions \(x(0)=x_{0}\) and \(x(1)=x_{1}\) for fixed real propagation time \(T\). We look for solutions to the boundary value problem by solving the initial value problem in the space of complex initial velocities \(x^{\prime}(0)=v_{0}\) (see fig. 4). Figure 3: A comparison of the energy propagator \(E=0.9\) as a funciton of \(x_{1}\) and the saddle point approximations for the initial position \(x_{0}=-5\), for a particle of mass \(m=1\) in a barrier of strength \(V_{0}=1\) and the reduced Planck constant \(\hbar=0.5\). The energy propagator (blue), the real saddle point approximation (dotted black), the saddle point approximation with complex solutions to the boundary value problem (dashed green), and the saddle point approximation including the analytic continuations of the complex saddle points (red). When the initial and final positions reside on the same side of the barrier, there either exists a single or three real classical paths separated by a caustic. When there are three real classical paths, the particle can either travel directly between the boundary points, travel with a light bounce, or spend a significant amount of time near the top of the barrier before rolling back to the final position. Keeping the initial position fixed on the left of the barrier, and moving the final position to the top of the barrier, two of the three real classical paths coalesce in a caustic at the turning point. For larger \(x_{1}\), the real classical paths form complex conjugate pairs of which one remains relevant to the Feynman path integral. As we see in fig. 4 the complex path again hits discontinuities of the final position as a function of the complex initial velocity. As the complex path approaches the discontinuity (see fig. 4), the classical path travels to complex infinity and back in finite time (see the paths in fig. 5). The real-time Feynman path integral of the Gaussian barrier thus includes complex paths undergoing singularity crossings where the path runs off to complex infinity. 
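For the Gaussian barrier the same shooting construction applies, now at fixed real \(T\) and scanning the complex initial velocity \(v_{0}\) of Eq. (16), as in fig. 4; paths that escape towards complex infinity overflow and show up as non-finite entries. Again, the integrator, the scan window, and the resolution are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

m, V0, T, x0 = 1.0, 1.0, 10.0, -5.0

def force(x):
    # -V'(x)/m for V(x) = V0 exp(-x^2): equals 2 V0 x exp(-x^2) / m, continued to complex x.
    return 2.0 * V0 * x * np.exp(-x**2) / m

def x_final(v0, steps=600):
    """RK4 shooting of Eq. (16) in lambda in [0,1] with x(0) = x0 and x'(0) = v0 (complex)."""
    x, v, h = complex(x0), complex(v0), 1.0 / steps
    for _ in range(steps):
        k1x, k1v = v, T**2 * force(x)
        k2x, k2v = v + 0.5 * h * k1v, T**2 * force(x + 0.5 * h * k1x)
        k3x, k3v = v + 0.5 * h * k2v, T**2 * force(x + 0.5 * h * k2x)
        k4x, k4v = v + h * k3v, T**2 * force(x + h * k3x)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return x

# Scan the complex v0 plane; Im x(1) = 0 traces the candidate classical solutions of the
# boundary value problem (cf. fig. 4). Non-finite grid points flag paths that run off to
# complex infinity, near the discontinuities discussed in the text.
re_v = np.linspace(3.0, 15.0, 40)
im_v = np.linspace(-3.0, 3.0, 30)
grid = np.array([[x_final(a + 1j * b) for a in re_v] for b in im_v])
print(np.isfinite(grid).mean())
```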
**The discretized path integral:** Singularity crossings of complex classical paths are universal in the continuum path integral but absent in the simplest discretized version. Consider the time-discretized path integral \[\int e^{\frac{i\epsilon T}{\hbar}\sum_{j=0}^{n-1}\left[\frac{m}{2} \left(\frac{y_{j+1}-y_{j}}{\epsilon T}\right)^{2}-V(y_{j})\right]}\mathrm{d}y _{1}\mathrm{d}y_{2}\ldots\mathrm{d}y_{n-1}\,, \tag{17}\] with \(n\) equally spaced timesteps of size \(\epsilon=1/n\) and positions \(y_{j}=x(\epsilon j)\) for \(j=0,1,\ldots,n\), with \(y_{0}=x_{0}\) and \(y_{n}=x_{1}\). This discretization resembles a multi-plane lens system in wave optics [11]. The variation of the action yields the discrete equation of motion \[\frac{y_{j+1}-y_{j}}{\epsilon T}=\frac{y_{j}-y_{j-1}}{\epsilon T}-\frac{ \epsilon T}{m}V^{\prime}(y_{j})\,, \tag{18}\] for \(j=1,2,\ldots,n-1\). This set of equations has a unique recursive solution for \(y_{2},\ldots,y_{n}\) given the initial position \(y_{0}\) and the discrete initial velocity \(\bar{v}_{0}=(y_{1}-y_{0})/\epsilon\), resembling the initial value problem, \[y_{j}=x_{0}+j\epsilon\bar{v}_{0}-\frac{\epsilon^{2}T^{2}}{m}\sum_{k=1}^{j-1}( j-k)V^{\prime}(y_{k})\,, \tag{19}\] for \(j=2,\ldots,n\). When the potential is analytic and free of branch cuts, such as \(V(x)=\mathrm{sech}^{2}(x)\), so is the final position \(y_{n}\) as a function of the initial position and discretized initial velocity. Consequently, the solutions of the discretized Rosen-Morse system never undergo singularity crossings, making it qualitatively different from the continuum theory. While the discretized theory does not have singularity crossings, the final position \(y_{n}\) does develop a large set of singularities in the complex \(\bar{v}_{0}\) plane and a large set of additional complex solutions to the boundary value problem, rapidly increasing with the number of discretization steps. When tracking a singularity crossing event in the continuum theory, we find that the corresponding discretized path develops large jumps, invalidating the discretization condition. Further research is required to verify whether the absence of singularity crossings in the discretized theory is replaced by additional relevant complex discretized classical paths. Figure 4: Classical paths for the Gaussian barrier in the complex initial velocity plane for the initial position \(x_{0}=-5\), particle mass \(m=1\), the barrier strength \(V_{0}=1\), and the total propagation time \(T=10\). The green curves indicate initial velocities that lead to a real final position. Figure 5: Complex classical paths (orange and green are conjugate paths) for the Gaussian barrier in the complex initial velocity plane for the initial position \(x_{0}=-5\), particle mass \(m=1\), the barrier strength \(V_{0}=1\), the total propagation time \(T=10\), and the final positions \(x_{1}=-1.6\), \(-1\), \(-0.5\), \(-0.25\), \(-0.1\), \(-0.03\), \(-0.02\), \(-0.01\). **Implications:** The real-time Feynman path integral studied in terms of classical paths not only includes real and complex paths but also analytically continued ones. Beyond the singularity crossing, the complex classical path no longer solves the boundary value problem, yet it can dominate the path integral. We believe that previous attempts to express classically forbidden phenomena with complex paths have been frustrated by the lack of this observation. 
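A direct check that Eq. (19) above is simply the unrolled form of the step map (18) — both solved recursively from \(y_{0}\) and \(\bar{v}_{0}\) — is sketched below for the \(\operatorname{sech}^{2}\) potential; the lattice size and the trial complex velocity are arbitrary illustrative values.

```python
import numpy as np

m, T, x0, V0 = 1.0, 10.0, -5.0, 1.0
dV = lambda y: -2.0 * V0 * np.tanh(y) / np.cosh(y) ** 2   # V'(y) for V(y) = V0 sech^2(y)

def lattice_path(v0_bar, n=200):
    """Iterate the discrete equation of motion (18) given y0 = x0 and (y1 - y0)/eps = v0_bar."""
    eps = 1.0 / n
    y = np.empty(n + 1, dtype=complex)
    y[0], y[1] = x0, x0 + eps * v0_bar
    for j in range(1, n):
        y[j + 1] = 2 * y[j] - y[j - 1] - (eps * T) ** 2 / m * dV(y[j])
    return y

def unrolled(v0_bar, n=200):
    """Eq. (19): the same solution written as an explicit (still recursive) sum."""
    eps = 1.0 / n
    y = np.empty(n + 1, dtype=complex)
    y[0], y[1] = x0, x0 + eps * v0_bar
    for j in range(2, n + 1):
        y[j] = x0 + j * eps * v0_bar - (eps * T) ** 2 / m * sum((j - k) * dV(y[k]) for k in range(1, j))
    return y

v0_bar = 13.0 + 0.5j                                       # arbitrary complex trial velocity
print(np.max(np.abs(lattice_path(v0_bar) - unrolled(v0_bar))))   # agreement to machine precision
```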
When interpreting such an analytically continued path in terms of an initial value problem, the termination point is altered and additional terms need to be added to the classical action and functional determinant for the saddle point approximation to hold. This phenomenon has large implications in the study of instantons, particularly in quantum gravity where complex solutions of the Einstein field equations are commonly studied [12; 13; 14; 15; 16]. Since general relativity predicts curvature singularities, complex solutions will likely undergo singularity crossings affecting the path integral for gravity. Witten recently proposed a complex space-time selection criterion [17] based on the study of quantum fields on complex metrics [18; 19] which will need to be reconsidered in light of these singularity crossings; how does one embed a quantum field on an analytically continued spacetime, when the solution to the Einstein field equations satisfying the boundary conditions does not exist? ## Acknowledgements The work of JF is supported by the STFC Consolidated Grant 'Particle Physics at the Higgs Centre,' and, respectively, by a Higgs Fellowship and the Higgs Chair of Theoretical Physics at the University of Edinburgh. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.
2310.00140
GASS: Generalizing Audio Source Separation with Large-scale Data
Universal source separation targets at separating the audio sources of an arbitrary mix, removing the constraint to operate on a specific domain like speech or music. Yet, the potential of universal source separation is limited because most existing works focus on mixes with predominantly sound events, and small training datasets also limit its potential for supervised learning. Here, we study a single general audio source separation (GASS) model trained to separate speech, music, and sound events in a supervised fashion with a large-scale dataset. We assess GASS models on a diverse set of tasks. Our strong in-distribution results show the feasibility of GASS models, and the competitive out-of-distribution performance in sound event and speech separation shows its generalization abilities. Yet, it is challenging for GASS models to generalize for separating out-of-distribution cinematic and music content. We also fine-tune GASS models on each dataset and consistently outperform the ones without pre-training. All fine-tuned models (except the music separation one) obtain state-of-the-art results in their respective benchmarks.
Jordi Pons, Xiaoyu Liu, Santiago Pascual, Joan Serrà
2023-09-29T21:02:07Z
http://arxiv.org/abs/2310.00140v1
# GASS: Generalizing Audio Source Separation with Large-Scale Data ###### Abstract Universal source separation targets at separating the audio sources of an arbitrary mix, removing the constraint to operate on a specific domain like speech or music. Yet, the potential of universal source separation is limited because most existing works focus on mixes with predominantly sound events, and small training datasets also limit its potential for supervised learning. Here, we study a single general audio source separation (GASS) model trained to separate speech, music, and sound events in a supervised fashion with a large-scale dataset. We assess GASS models on a diverse set of tasks. Our strong in-distribution results show the feasibility of GASS models, and the competitive out-of-distribution performance in sound event and speech separation shows its generalization abilities. Yet, it is challenging for GASS models to generalize for separating out-of-distribution cinematic and music content. We also fine-tune GASS models on each dataset and consistently outperform the ones without pre-training. All fine-tuned models (except the music separation one) obtain state-of-the-art results in their respective benchmarks. Jordi Pons\({}^{*}\) Xiaoyu Liu* Santiago Pascual & Joan Serra+Dolby Laboratories General audio source separation, deep learning. Footnote *: Equal contribution ## 1 Introduction Audio source separation consists of isolating the sources present in an audio mix. Most previous works frame the problem as a source-specific task, as in speech source separation [1] (separating various speakers), or music source separation [2, 3] (separating vocals, bass, and drums). For such tasks, a source-specific model is trained on dedicated datasets tailored to the task at hand. In contrast to source-specific separation tasks, universal source separation was recently proposed [4, 5], which consists of building source-agnostic models that are not constrained to a specific domain (like music or speech), and targets at separating an unknown number of sources given an arbitrary mix. However, existing universal source separation works predominantly focus on separating mixes similar to field recordings (with mostly sound events like dog barking or alarms). Further, most supervised learning methods for this task rely on small training sets [4, 5, 6, 7, 8]. For instance, the commonly-used FUSS dataset contains only 23 hours of single-source recordings [5]. Considering the number of different sounds in the world, most audio sources might be under-represented in such small datasets. Hence, the potential of universal source separation is yet to be fully explored because (i) most previous works separate mixes with predominantly sound events instead of simultaneously separating a broader set of sources including speech, music, and sound events, and (ii) supervised universal source separation models have never been trained with large-scale data. Here, we explore training a unified model with large-scale data to address general audio source separation holistically1, with the goal of separating any source from a given mix, including speech, music, and/or sound events. First, we scale up our audio source separation dataset by collecting 15,499 hours of recordings including speech, music, and sound events. Note that our dataset contains 3 orders of magnitude more data than FUSS [5], the commonly-used dataset for supervised learning (Table 1). 
Next, to investigate the feasibility of general audio source separation1, we train 3 state-of-the-art models with our large and diverse dataset. We are also interested in the generalization capabilities of the trained models. Hence, in addition to evaluating the models on different partitions of the same dataset (in-distribution), we also evaluate them on 4 standard downstream test sets, each one representing a different use case with different data and mixing pipelines (out-of-distribution). While in some cases the out-of-distribution results are competitive, in some others the separation results are not as satisfactory. Finally, we show that out-of-distribution performance can be improved by fine-tuning the pre-trained general audio source separation models on each task. Footnote 1: We use the term “universal source separation” when separating mixes with predominantly sound events (to be consistent with previous works [4]) and use the term “general audio source separation” when separating mixes containing speech, music, and/or sound events (our proposal). To our best knowledge, we offer the first study on supervised general audio source separation at scale without prior knowledge about the sources. Previous works [9, 10] also consider speech and music in supervised universal source separation, but they assume the availability of a target source embedding to identify and separate the desired source from a mix. Unsupervised approaches can also leverage large scale (noisy) data, but they tend to under-form supervised methods [11, 12, 13]. Previous research also looked at the generalization capability of speech separation models [14, 15], but here we study the general audio source separation problem with a much more diverse set of out-of-distribution downstream tasks. Finally, our work is also conceptually similar to fine-tuning problem-agnostic self-supervised models [16], since we fine-tune source-agnostic audio separation models on source-specific tasks. ## 2 Methodology ### Creating a Large-scale Source Separation Dataset We collect recordings from public and licensed datasets to scale up general audio source separation with \(\approx\) 1.9 M recordings of speech, music, and sound events. We mix recordings \(r_{k}\) at various gains \(g_{k}\): \[m=\sum_{k=1}^{K}s_{k}=\sum_{k=1}^{K}g_{k}r_{k},\] where we normalize \(r_{k}\)'s amplitudes to 1 before mixing, and \(K\) is the number of resulting sources \(s_{k}\) present in the mix. Note that \(K\) is assumed to be unknown during training/inference and, following common practice [5], we set \(K\in\{1,2,3,4\}\). Also, defining what constitutes a source is a significant challenge. We find that the definition of "any recording with one source" might be impractical. For instance, considering separating two speakers talking in a cafeteria, it may be unnecessary to separate every individual sound in the background like the cutlery and the crowd noise. Similarly, in a mix with background music, it may not be desirable to separate out each instrument. In our view, incorporating low-volume, non-dominant background sounds as a single, combined source to be separated together could enhance the realism of the resulting mixes. Hence, to build our dataset, we rely on the following definition: "any recording with one source, except for low-volume background events that can contain one or more sources". We distinguish between foreground and background sources by simply applying higher gains to foreground sources. 
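A sketch of this mixing rule (peak-normalize each recording, draw a gain, and sum, as in the equation above) is given below. The mapping of the Beta(2,1) draw onto a dB range and the handling of recordings shorter than the mix are our own assumptions; the actual gain ranges per source type are the ones listed in Table 1 below.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gain_db(lo_db, hi_db):
    """Draw g_k in dB from a Beta(2,1) variate mapped onto [lo_db, hi_db] (assumed mapping)."""
    return lo_db + (hi_db - lo_db) * rng.beta(2.0, 1.0)

def make_mix(recordings, gain_ranges_db, length):
    """m = sum_k g_k r_k with each r_k amplitude-normalized, following the equation above."""
    sources, mix = [], np.zeros(length)
    for r, (lo, hi) in zip(recordings, gain_ranges_db):
        r = r[:length] if len(r) >= length else np.pad(r, (0, length - len(r)))
        r = r / (np.max(np.abs(r)) + 1e-9)              # normalize amplitude to 1 before mixing
        g = 10.0 ** (sample_gain_db(lo, hi) / 20.0)     # dB -> linear amplitude gain (assumed)
        s = g * r                                        # source s_k = g_k r_k
        sources.append(s)
        mix += s
    return mix, sources

# Toy example: one foreground "speech" source and one background "music" source (synthetic signals).
sr, length = 48000, 8 * 48000
speech = rng.standard_normal(length)                     # placeholder foreground recording
music = np.sin(2 * np.pi * 220 * np.arange(length) / sr) # placeholder background recording
mix, sources = make_mix([speech, music], [(-10, 0), (-20, -10)], length)
print(mix.shape, len(sources))
```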
Table 1 presents those gains \(g_{k}\), together with the number of collected recordings \(r_{k}\) and their source types: * **Speech foreground** is a multilingual collection of public and licensed clean speech recordings, each with 1 speaker. A large portion of the recordings we use are public: AVSpeech [17], VCTK [18], DAPS [19], and TIMIT [20]. * **Sound event foreground and background** are a combination of public and licensed datasets. The largest public dataset we use is (most of the content in) Freesound. Extensive listening finds that shorter Freesound recordings tend to be single-source, and longer ones tend to contain multiple sources. Hence, Freesound recordings shorter than 8 sec are used as foreground, and longer ones as background. We also use other background datasets, including: WHAM! [21] and DEMAND [22]. * **Music foreground and background** are a combination of public and licensed datasets. Public single-source datasets include: Slakh [23], ENST-drums [24], VocalSet [25], QMUL singing database [26], MUSIC [27], and EGFxSet [28]. Hence, foreground music mostly contains vocals, bass, drums, guitar, and keys, but also includes synthesizers, percussion, and classical instruments. Background music includes licensed music mixes. Note that our collection is significantly larger than FUSS [5], the most common benchmark for universal source separation. After collecting our data, we define a set of rules to create the artificial mixes. These rules can be summarized into the following 3 upstream tasks: * **Speech separation**. These mixes always contain at least 1 speech foreground source. Other sources are sampled from the following sets: speech foreground, sound events foreground/background, and music background to create mixes for speech denoising and speech source separation (from 1 to 4 speakers) use cases. * **Sound event separation**. Sources are sampled from sound events foreground/background and music background to create mixes similar to previous universal source separation works [4, 5]. * **Music separation**. Sources are sampled from music foreground and sound events background to create mixes for music denoising and music source separation (from 1 to 4 sources) use cases. Hence, to generate training data we randomly select: an upstream task (speech, sound event, or music separation with a probability of 0.25, 0.25, and 0.5, respectively), the number of sources \(K\) (uniformly from 1 to 4), the recordings \(r_{k}\) (which fragments and when they start in the mix), and the gains \(g_{k}\) (sampled from a Beta distribution \(Beta(2,1)\) within the ranges in Table 1). We then down-mix all the data to mono, zero-pad or truncate each sample to 8 sec, and resample them to 48 kHz. Note that our large-scale dataset covers various sampling rates and bandwidths, all resampled to 48 kHz, since we observe in preliminary experiments that models trained on this dataset perform competently at various (lower) sampling rates. ### Models and Upstream Training **TDANet-Wav** (10.8 M parameters). TDANet [1] is a state-of-the-art waveform-based speech source separation model based on an encoder-separator-decoder architecture. We adopt the official implementation2 and increase the encoder dimension to 1024 and proportionally double the dimension of the separator layers. Footnote 2: [https://github.com/JusperLee/TDANet](https://github.com/JusperLee/TDANet) **TDANet-STFT** (7.4 M parameters). 
We modify TDANet-Wav such that the encoder/decoder are replaced by STFT/STFT, and reuse the phase of the mixture for the iSTFT. The separator then outputs a mask over the STFT domain, not over a latent space as in TDANet-Wav. We use 32 and 8 ms frame length and stride, respectively. The bottleneck size is 384 and the separator layers follow the recommended ratio of feature maps with respect to the bottleneck size [1]. **BSRNN** (21.8 M parameters). Band-Split RNN is a powerful model for music source separation [3] and speech enhancement [29], also based on an encoder-separator-decoder architecture. Its encoder splits complex-valued STFT bins into bands and projects each band to a latent. We create 43 bands for our 48 kHz model, 2 more bands on top of the setup proposed for separating vocals from music at 44.1 kHz [3]. The separator consists of 12 interleaved band-level and sequence-level blocks with bidirectional LSTMs. The decoder undoes the band splitting and predicts complex-valued STFT masks. We adopt an available open-source implementation3. Footnote 3: [https://github.com/sunwon23/BSRNN](https://github.com/sunwon23/BSRNN) **IRM** (oracle). We compute the Ideal Ratio Mask (IRM) as an oracle upper bound using the magnitude STFT of the ground truth sources. **Upstream training.** All models are trained on the upstream large-scale dataset for 10 M steps using the Adam optimizer with a batch size of 10 and a cyclical learning rate between \(10^{-7}\) and \(10^{-4}\) spanning 400 k steps per cycle. All models predict 4 sources \(\hat{s}_{k}\) given a mix \(m\). When there are fewer targets during training (\(K{<}4\)), the extra targets are set to zeros. Permutation invariant training [30] (PIT) aligns the predictions with the targets, and we minimize the logarithmic-MSE loss with a threshold \(\tau\) set to \(-\)30 dB [5]: \[\mathcal{L}(s_{k},\hat{s}_{k})=\begin{cases}10\log_{10}\left(\|\hat{s}_{k}\|^{ 2}+\tau\|m\|^{2}\right)&\text{if }s_{k}=0,\\ 10\log_{10}\left(\|s_{k}-\hat{s}_{k}\|^{2}+\tau\|s_{k}\|^{2}\right)&\text{ otherwise.}\end{cases}\] ### Evaluation Framework **Upstream (in-distribution) evaluation**. For each upstream task (speech, sound event, and music separation), we set aside 3,000 mixes made of unseen recordings, which are sampled and mixed based on the same pipeline used for upstream training. **Downstream (out-of-distribution) evaluation**. We study the generalization capabilities of our models with out-of-distribution datasets. We consider the following 4 downstream tasks: \begin{table} \begin{tabular}{c c c c} \hline \hline Source type & \(g_{k}\) (dB) & Single-source & \# Recordings \\ \hline Speech foreground & \([-10,0\,]\) & ✓ & 759,397 \\ Sound event foreground & \([-10,0\,]\) & ✓ & 314,652 \\ Sound event background & \([-20,-10\,]\) & ✗ & 398,360 \\ Music foreground & \([-3,0\,]\) & ✓ & 75,639 \\ Music background & \([-20,-10\,]\) & ✗ & 379,565 \\ \hline All dataset & & & 15,499 hours \\ FUSS [5] & & & 23 hours \\ \hline \hline \end{tabular} \end{table} Table 1: Our large-scale general audio source separation dataset. * **FUSS** is a universal source separation dataset with 1 to 4 sources, with mixes at 16 kHz similar to field recordings [5] (mostly sound events). We select the standard reverberated FUSS version for our downstream evaluation. Since FUSS is a subset of FSD50K [31], we exclude FSD50K from our upstream dataset. * **Libri2Mix** is a common benchmark for speech source separation, with recordings at 16 kHz containing 2 clean speech sources [15]. 
All LibriSpeech [32] data is excluded from our upstream dataset. * **DnR** dataset targets at separating cinematic mixes at 44.1 kHz into speech, music, and sound effects [33]. Again, all involved datasets in DnR are excluded from our upstream dataset. Also note that DnR is a particular out-of-distribution case because it violates our source definition. We expect our models to separate each speaker, musical sources, and sound effect sources unless the music and sound effects are low-volume background events. However, DnR separates a mix into 3 combined stems: speech (with all speakers), music (with all musical sources), and sound effects (all together). * **MUSDB** is a music source separation dataset at 44.1 kHz with 4 sources: vocals, bass, drum, and 'other' [34]. Yet, note that our models are trained to separate more musical sources, including vocals, bass, drums, keys, guitar, synthesizers, and classical instruments. Further, the 'other' stem in MUSDB also violates our source definition, since such sources come grouped in one stem. We exclude both MUSDB and MedleyDB from our upstream data. Although DnR (all stems) and MUSDB ('other' stem) violate our source definition, we are still interested in those to study fine-tuning a pre-trained (upstream) general model on a separation task defined differently. We conduct 3 evaluations for each downstream task: * **No-tuning**. The pre-trained upstream models are assessed without any modification. This setup can also be seen as a zero-shot source separation case, where the models are pre-trained on a large dataset and then evaluated on new datasets without any adaptation. * **Fine-tuning**. The pre-trained upstream models are fine-tuned on the new downstream task alone with PIT. This setup studies the upstream model as a general model that can be pre-trained on a large dataset and then fine-tuned on a new use case. When there are fewer training targets (\(K{<}4\)), the extra targets are set to zeros. * **From-scratch**. The models are trained from-scratch on each downstream task. This setup studies the performance of the models when they are not pre-trained on a large dataset. Note, however, that the downstream datasets have different sampling rates. To unify our evaluation framework, we upsample the mixes and targets to 48 kHz. In that way, we can compute the loss against the upsampled targets when fine-tuning and training from-scratch. To compute metrics with the original ground truth, we downsample the predicted sources back to the original sampling rates. In preliminary experiments, we observe that models trained from-scratch and evaluated in this way yield similar results as those obtained by models trained on the original datasets without resampling. **Evaluation metrics**. We use the standard metrics for each task: * **SI-SDR** (dB) in DnR. We use scale-invariant signal-to-distortion ratio [35] (SI-SDR) to measure the quality of the separations. * **SI-SDRs** (dB) in FUSS and upstream. For mixes with one source, we compute SI-SDRs \(=\) SI-SDR\((s_{k},\hat{s}_{k})=\) SI-SDR\((m,\hat{s}_{k})\)[5], since with one-source mixes the goal is to bypass the mix. The's' sub-index stands for single-source. * **SI-SDRi** (dB) in FUSS, Libri2Mix, and upstream. For mixes with 2 to 4 sources, we report SI-SDRi \(=\) SI-SDR\((s_{k},\hat{s}_{k})-\) SI-SDR\((s_{k},m)\)[5, 8]. The 'i' sub-index stands for improvement. To account for inactive sources, estimate-target pairs that have silent target sources are discarded. 
* **US, ES, OS** (%) in FUSS and upstream. Note that our models implicitly count the number of sources to separate. To evaluate source counting, we compute the proportion of the samples for which the number of nonzero predictions is fewer than (under-separation, US), equal to (equal-separation, ES), or more than (over-separation, OS) the number of nonzero targets [5]. A prediction is considered nonzero if its average energy is above \(-20\) dB relative to the softest nonzero target source [5].
* **SDR** (dB) in MUSDB. Defined in [36], it is the per-source median across songs of the median SDR over all 1-second chunks in each song.

## 3 Results

Separations produced by our models are available on our website4. Footnote 4: [http://www.jordipons.me/apps/GASS](http://www.jordipons.me/apps/GASS)

### Upstream (In-distribution) Evaluation

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Task & Model & SI-SDRs/SI-SDRi \(\uparrow\) & US \(\downarrow\) & ES \(\uparrow\) & OS \(\downarrow\) \\ \hline \multirow{4}{*}{Speech} & TDANet-Wav & 53.5/**14.3** & **6.9** & **87.8** & 5.3 \\ & TDANet-STFT & 80.6/13.8 & 14.1 & 83.3 & **2.6** \\ & BSRNN & 44.3/12.8 & 13.8 & 80.1 & 6.1 \\ & IRM & 85.7/19.3 & 0 & 100 & 0 \\ \hline \multirow{4}{*}{Sound events} & TDANet-Wav & 49.1/20.1 & 14.2 & 79.6 & 6.2 \\ & TDANet-STFT & 71.8/**22.1** & 17.8 & 78.0 & **4.2** \\ & BSRNN & 49.9/20.3 & **12.6** & **81.6** & 5.8 \\ & IRM & 78.0/28.3 & 0 & 100 & 0 \\ \hline \multirow{4}{*}{Music} & TDANet-Wav & 52.6/14.6 & 5.9 & 90.9 & 3.2 \\ & TDANet-STFT & 80.8/14.6 & 9.1 & 89.1 & **1.8** \\ & BSRNN & 46.2/**18.2** & **3.9** & **93.2** & 2.9 \\ & IRM & 88.8/17.8 & 0 & 100 & 0 \\ \hline \hline \end{tabular} \end{table} Table 2: Upstream (in-distribution) results for speech, sound event, and music separation. SI-SDRs/SI-SDRi (dB). US/ES/OS: source count rates (%).

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Evaluation & Model & SI-SDRs/SI-SDRi \(\uparrow\) & US \(\downarrow\) & ES \(\uparrow\) & OS \(\downarrow\) \\ \hline \multirow{3}{*}{No-tuning} & TDANet-Wav & 32.7/15.1 & 39.3 & 54.7 & **6.0** \\ & TDANet-STFT & 30.0/**16.4** & 38.6 & 55.0 & 6.4 \\ & BSRNN & 30.5/16.0 & **36.6** & **57.0** & 6.4 \\ \hline \multirow{3}{*}{Fine-tuning} & TDANet-Wav & 33.2/17.7 & **11.8** & 77.5 & 10.7 \\ & TDANet-STFT & 34.0/18.1 & 16.5 & 73.1 & 10.4 \\ & BSRNN & 33.7/**18.6** & 14.0 & **78.5** & **7.5** \\ \hline \multirow{3}{*}{From-scratch} & TDANet-Wav & 33.0/13.7 & 22.2 & 65.2 & 12.5 \\ & TDANet-STFT & 33.1/14.4 & 20.6 & 67.7 & **11.7** \\ & BSRNN & 32.4/**14.4** & **13.7** & **70.6** & 15.7 \\ \hline SOTA & Postolache et al. [8] & 35.3/13.8 & 23.6 & 63.9 & 12.5 \\ Oracle & IRM & 39.9/25.3 & 0 & 100 & 0 \\ \hline \hline \end{tabular} \end{table} Table 3: Downstream (out-of-distribution) results on FUSS. SI-SDRs/SI-SDRi (dB). US/ES/OS: source count rates (%).

Table 2 lists the results for the 3 in-distribution tasks, showing that it is possible, with a single model, to perform general audio source separation (including speech, sound events, and music) without prior knowledge about the source types and the number of sources (up to 4). Comparing with the IRM, we see that the models are competitive. Interestingly, each model stands out at a different task: TDANet-Wav for speech separation, TDANet-STFT for sound event separation, and BSRNN for music separation. BSRNN outperforms IRM for music separation, showing the advantage of operating on the complex STFT for this task.
Also, the relatively high equal-separation rates (ES) show that the models are often able to count/separate the sources correctly. Among the miscounting cases, models tend to under-separate (US). Finally, the high SI-SDRs values show that the models are able to bypass single-source inputs.
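For readers who want to relate the reported numbers to their own estimates, the SI-SDR-based metrics used throughout Tables 2 and 3 are simple to compute. The following is a minimal sketch (not the authors' evaluation code) of SI-SDR, the improvement SI-SDRi, and the single-source variant SI-SDRs; the function names and the small `eps` guard are illustrative choices, and signals are assumed to be equal-length NumPy arrays.

```python
import numpy as np

def si_sdr(target, estimate, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio (dB)."""
    # Project the estimate onto the target to find the optimally scaled target.
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    projection = alpha * target
    noise = estimate - projection
    return 10.0 * np.log10((np.sum(projection**2) + eps) / (np.sum(noise**2) + eps))

def si_sdr_improvement(target, estimate, mix):
    """SI-SDRi: gain of the estimate over simply outputting the mix."""
    return si_sdr(target, estimate) - si_sdr(target, mix)

def si_sdr_single_source(mix, estimate):
    """SI-SDRs: for one-source mixes the goal is to bypass the mix."""
    return si_sdr(mix, estimate)
```

Per the protocol described above, estimate-target pairs with silent target sources would be discarded before averaging SI-SDRi.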
2309.06452
Solar Atmospheric Heating Due to Small-scale Events in an Emerging Flux Region
We investigate the thermal, kinematic and magnetic structure of small-scale heating events in an emerging flux region (EFR). We use high-resolution multi-line observations (including Ca II 8542~\AA, Ca II K, and Fe I 6301~\AA line pair) of an EFR located close to the disk center from the CRISP and CHROMIS instruments at the Swedish 1-m Solar Telescope. We perform non-LTE inversions of multiple spectral lines to infer the temperature, velocity, and magnetic field structure of the heating events. Additionally, we use the data-driven Coronal Global Evolutionary Model to simulate the evolution of the 3D magnetic field configuration above the events and understand their dynamics. Furthermore, we analyze the differential emission measure to gain insights into the heating of the coronal plasma in the EFR. Our analysis reveals the presence of numerous small-scale heating events in the EFR, primarily located at polarity inversion lines of bipolar structures. These events not only heat the lower atmosphere but also significantly heat the corona. The data-driven simulations, along with the observed enhancement of currents and Poynting flux, suggest that magnetic reconnection in the lower atmosphere is likely responsible for the observed heating at these sites.
Rahul Yadav, Maria D. Kazachenko, Andrey N. Afanasyev, Jaime de la Cruz Rodríguez, Jorrit Leenaarts
2023-09-12T04:53:01Z
http://arxiv.org/abs/2309.06452v1
# Solar Atmospheric Heating Due to Small-scale Events in an Emerging Flux Region

###### Abstract We investigate the thermal, kinematic and magnetic structure of small-scale heating events in an emerging flux region (EFR). We use high-resolution multi-line observations (including Ca II 8542 A, Ca II K, and Fe I 6301 A line pair) of an EFR located close to the disk center from the CRISP and CHROMIS instruments at the Swedish 1-m Solar Telescope. We perform non-LTE inversions of multiple spectral lines to infer the temperature, velocity, and magnetic field structure of the heating events. Additionally, we use the data-driven Coronal Global Evolutionary Model to simulate the evolution of the 3D magnetic field configuration above the events and understand their dynamics. Furthermore, we analyze the differential emission measure to gain insights into the heating of the coronal plasma in the EFR. Our analysis reveals the presence of numerous small-scale heating events in the EFR, primarily located at polarity inversion lines of bipolar structures. These events not only heat the lower atmosphere but also significantly heat the corona. The data-driven simulations, along with the observed enhancement of currents and Poynting flux, suggest that magnetic reconnection in the lower atmosphere is likely responsible for the observed heating at these sites.

Sun: magnetic fields - Sun: chromosphere

Rahul Yadav, Maria D. Kazachenko, Andrey N. Afanasyev, Jaime de la Cruz Rodríguez, Jorrit Leenaarts

## 1 Introduction

Emerging flux regions (EFRs), which are commonly found on the solar surface, are formed when flux tubes rise from the convection zone to the solar surface due to magnetic buoyancy or Parker instability (Parker, 1955). A wide range of solar activities occurs when the emerging field lines rise and pass through different layers of the solar atmosphere (Chou, 1993; Cheung and Isobe, 2014). Therefore, EFRs play an important role in understanding the interplay between different layers of the solar atmosphere. The magnetic field can emerge anywhere on the solar surface at various spatial scales (Parnell et al., 2009; Otsuji et al., 2011). Typically, during the emergence of field lines, two main patches with opposite polarities move apart from each other, and multiple small-scale magnetic bipolar regions appear between them in the photosphere. Moreover, the magnetic field associated with these bipolar regions rises and interacts with pre-existing magnetic field in the chromosphere or corona, leading to magnetic reconnection. This reconnection process gives rise to various heating events such as Ellerman bombs, UV bursts, transient brightenings, chromospheric jets, or flares. Such a scenario in EFRs has been seen in various observations (Shimizu et al., 2002; Peter et al., 2014; Vissers et al., 2015; Chitta et al., 2017; Toriumi et al., 2017; Guglielmino et al., 2018; Leenaarts et al., 2018; Tiwari et al., 2019; Yadav et al., 2019; Ortiz et al., 2020; Moore et al., 2022; Tiwari et al., 2022; Rouppe van der Voort et al., 2023) and numerical simulations (Cheung et al., 2008; Danilovic, 2017; Hansteen et al., 2019). To understand small-scale heating events, several mechanisms have been proposed (e.g., MHD wave and magnetic heating, see Narain and Ulmschneider, 1990; Priest et al., 2018 and references therein).
However, the process by which energy is transported from the photosphere to the higher layers during the emergence of small-scale bipolar regions in EFRs remains unclear (Withbroe and Noyes, 1977; Narain and Ulmschneider, 1990). Recently, Priest et al. 2018 proposed a theoretical model for chromospheric and coronal heating by considering a bipolar converging region. They demonstrated that two opposite-polarity regions having equal magnetic flux, situated below horizontal magnetic field, will undergo magnetic reconnection driven by flux cancellation if their separation is smaller than the flux interaction distance (Longcope, 1998). The energy released during the magnetic reconnection can then heat the chromosphere and corona located above. The magnetic field plays a crucial role in explaining the observed heating events in different layers. Recent observations have also shown that magnetic flux cancellation in the photosphere, both in quiet-Sun regions and EFRs, is associated with intense brightening observed in the chromosphere and corona (Gosic et al., 2018; Leenaarts et al., 2018; Tiwari et al., 2019; Diaz Baso et al., 2021; Muglach, 2021; Panesar et al., 2021; Kaithakkal et al., 2023). Although extensive studies have been conducted on the bipolar regions within EFRs in the photosphere, utilizing data from various ground-and-space based telescopes, simultaneous investigations involving photospheric and chromospheric vector magnetograms are still rare due to limited chromospheric observations. Recently, utilizing multi-line spectropolarimetric observations of an EFR, Leenaarts et al. 2018, demonstrated a correlation between the Ca ii K intensity and the horizontal field strength in the chromosphere. Furthermore, for the same EFR, Diaz Baso et al. 2021 found that the radiative losses in the chromosphere can reach up to 160 kW m\({}^{-2}\) at the reconnection site. In this study, we present an analysis of small-scale bipolar regions within EFR to understand their thermal, kinematic, and magnetic structure. We also investigate their impact on the chromospheric and coronal heating. To achieve this, we employ a combination of multi-line spectropolarimetric observations in the photosphere and chromosphere, co-aligned coronal images, and data-driven simulations to elucidate the field topology above the bipolar regions. Non-local thermodynamic equilibrium inversions of multiple spectral lines, including the Fe i 6301 A line pair, Ca ii 8542 A, and Ca ii K, are utilized to derive the stratification of physical parameters such as temperature, line-of-sight velocity, magnetic field, and micro-turbulent velocity. These derived parameters are then employed to investigate the signatures of magnetic reconnection at different heights. Additionally, we estimate the physical quantities described in the heating model proposed by Priest et al. 2018. Section 2 describe our observations. Sections 3 and 4 present our analysis and results. The obtained results are discussed in Section 5, and summarized in Section 6. ## 2 Observations ### Target and Data Reduction We use observations of the active region (AR) NOAA 12593, located close to disk center (\(\mu\)=1.0), recorded between 09:31 and 09:57 UT on September 19, 2016, with the CRisp Imaging SpectroPolarimeter (CRISP; Scharmer et al., 2008) and the CHROMospheric Imaging Spectrometer (CHROMIS; Scharmer, 2017) instruments at the Swedish Solar Telescope (SST; Scharmer et al., 2003). 
The CRISP simultaneously recorded full spectropolarimetric data in the Ca ii 8542 A and Fe i 6301 A line pair. The Ca ii 8542 A line scans consisted of 21 wavelength positions spanning a range of 1.7 A around line center, with steps of 0.765 A in the inner wings and two wavelength positions at \(\pm\)1.7 A relative to line center. The Fe i 6301 A spectral line was scanned with nine equidistant wavelength positions spanning a range of 0.19 A around line center, whereas the Fe i 6302 A was scanned with seven equidistant wavelength positions spanning a range of 0.28 A around line center. The CRISP data were obtained with a cadence of 36.6 s. The CHROMIS recorded Ca ii K intensity profiles at 39 wavelength positions spanning a range of 1.33 A around line center, with 37 evenly spaced steps of 0.038A in the inner wings and two wavelength positions at \(\pm\)1.33 A relative to the line center. In addition to this, one point in the continuum at 4000 A was also observed with the CHROMIS instrument. The CRISP data were reduced using the CRISPRED (de la Cruz Rodriguez et al., 2015) post-processing pipeline, which includes image reconstruction through multi-object multi-frame blind deconvolution (MOMFBD; van Noort et al., 2005) and removal of small-scale seeing-induced deformations. The CHROMIS data were reduced using the CHROMISRED pipeline (Lofdahl et al., 2021). The CRISP data were aligned with the CHROMIS data and resampled to the CHROMIS pixel scale of 0.0375\({}^{\prime\prime}\). As the CRISP data were obtained with a lower cadence, we interpolated the CRISP data to the CHROMIS cadence using nearest-neighbor interpolation. For all data, the intensity calibration was performed with the quiet Sun data located close to the disk center after taking into account the limb darkening, whereas the absolute wavelengths were calibrated with the atlas profiles given by Neckel Labs (1984). This AR is also analyzed by Leenaarts et al. 2018 and Diaz Baso et al. 2021. During our analysis, we also utilized ultraviolet (UV) and extreme ultraviolet (EUV) images observed by the Atmosphere Imaging Assembly (AIA; Lemen et al. 2012), as well as full-disk continuum images and vector magnetograms from the Helioseismic and Magnetic Imager (HMI; Scherrer et al., 2012) aboard the Solar Dynamic Observatory (SDO; Pesnell et al. (2012)). The AIA takes full-disk images in seven EUV bands with a cadence of 12 sec and in two UV bands at 1600 A and 1700 A with a cadence of 24 sec. The spatial scale of AIA images is 0.6''per pixel. All AIA and HMI images were corrected using the standard solar software (SSW) routines (e.g., aia_prep.pro and hmi_prep.pro). Finally, all AIA, HMI, CRISP, and CHROMIS data were co-aligned through image cross-correlation. ### Overview of Observations Figure 1 shows an overview of the AR 12593 that started emerging on September 18, 2016 around 04:00 UT. In order to study the history of the AR, we calculated the magnetic flux evolution in the FOV using the magnetic parameters obtained from the Space-weather HMI Active Region Patch (SHARP, Bobra et al., 2014). The calculated magnetic flux for positive and negative polarities, using the B\({}_{z}\) component of the magnetic field strength, are depicted in Fig. 1. During the flux emergence period, the flux of either polarity has increased up to \(\sim\)10\({}^{21}\) Mx. Furthermore, the peak flux emergence rate for the AR is 2.2\(\times\)10\({}^{17}\) Mx s\({}^{-1}\). As shown in the Fig. 
1 the SST observations were performed as the emergence of AR 12593 was ending. The SST recorded FOV closer to the negative polarity of the EFR including regions having bipolar structures as shown in Fig. 2, that we refer to as mixed polarity regions. During our observations, brightening events were noticed in different SDO/AIA filtergrams mainly close to these mixed polarity regions highlighted by blue and red contours. The intense brightening in Ca ii 8542, Ca ii K and AIA images also indicates that the region located above the mixed polarity region is heated significantly. ## 3 Methods and Data Analysis ### Inversion of the Spectropolarimetric Data The physical parameters such as magnetic field vector and line-of-sight (LOS) velocity were inferred by inverting the photospheric spectral line Fe i 6301 A using a Milne-Eddington SPIN code (Yadav et al., 2017). Then we resolved the 180\({}^{\circ}\) azimuthal ambiguity using the automated ambiguity resolution code (Leka et al., 2014), which is based on the minimum energy method (Metcalf, 1994). Furthermore, we employed a spatially regularized Weak Field Approximation (WFA; Morosin et al., 2020) method to infer the LOS magnetic field from the Ca ii 8542 observations. The linear polarization signal was not sufficient to infer the magnetic field vector in the chromosphere using the Ca ii 8542 line. To estimate the stratification of the physical parameters such as temperature, magnetic field, LOS velocity, and microturbulent velocity, we inverted the Stokes profiles of Fe i 6301 line pair, Ca ii 8542 and Ca ii K line simultaneously using the multi-line inversion STiC code de la Cruz Rodriguez (2019). We inverted all four Stokes parameters in the Fe i 6173 A and Ca ii 8542 A lines, but only Stokes \(I\) in the Ca ii K line. The STiC inversion code is built around a modified version of the RH code (Uitenbroek, 2001) in order to derive the atomic populations by assuming statistical equilibrium and a plane-parallel geometry. The equation of state is borrowed from the Spectroscopy Made Easy (SME) computer code described in Piskunov and Valenti (2017). The radiative transport equation is solved using cubic Bezier solvers (de la Cruz Rodriguez and Piskunov, 2013). During inversion, we considered the Ca ii 8542 A line in non-LTE conditions, under the assumption of complete frequency redistribution, while the Ca ii K line was synthesized in non-LTE conditions with partial redistribution effects of scattered photons (Leenaarts et al., 2012). Figure 1: Overview of emerging flux region (EFR). _Left and middle panels_: HMI continuum image (left) and LOS magnetogram (middle) saturated at \(\pm\)1000 G, where black and white colors indicate negative and positive polarities, respectively. The orange box outlines the FOV of SST. _Right panel:_ The temporal evolution of positive and negative magnetic fluxes in the EFR are indicated by solid and dashed black line, respectively. The vertical gray line marks the time of the SST observation. ### Differential Emission Measure We perform the Differential Emission Measure (DEM) analysis of AIA/SDO data to investigate the temperature distribution of the plasma. The DEM analysis involves solving an inverse problem: inferring the temperature distribution of the plasma from the observed intensities. The measured intensity (\(I_{\lambda}\)) for a given AIA channel can be expressed as \[I_{\lambda}=\int K_{\lambda}(T)DEM(T)dT, \tag{1}\] where \(K_{\lambda}(T)\) refers to the response function of the corresponding AIA channel. 
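Once the temperature axis is discretized, Eq. (1) becomes a small linear system: stacking the channels gives a vector of intensities equal to a kernel matrix times the binned DEM. The sketch below illustrates this discretized forward model and a naive ridge-regularized solve; it is for intuition only. The response matrix here is a random placeholder (real values must come from the AIA temperature response functions), the bin grid and units are assumptions, and the actual analysis uses the dedicated regularized inversion code described next.

```python
import numpy as np

# Hypothetical discretization of Eq. (1): I_ch = sum_j K_ch(T_j) * DEM(T_j) * dT_j
logT = np.arange(5.7, 7.6, 0.1)      # temperature bins, log10 T [K]
T = 10.0**logT
dT = np.gradient(T)                   # approximate bin widths [K]
n_ch, n_T = 6, logT.size

# Placeholder response matrix; in practice this comes from the AIA
# temperature response functions of the 94/131/171/193/211/335 channels.
rng = np.random.default_rng(0)
K = rng.random((n_ch, n_T)) * 1e-27

def forward(dem):
    """Predicted channel intensities for a given binned DEM(T)."""
    return K @ (dem * dT)

def solve_dem(intensities, lam=1e-3):
    """Naive ridge-regularized least squares; a stand-in for the
    regularized DEM inversion actually used in the paper."""
    A = K * dT                        # fold the bin widths into the kernel
    lhs = A.T @ A + lam * np.eye(n_T)
    return np.linalg.solve(lhs, A.T @ intensities)
```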
We utilized the regularized inversion code developed by Hannah & Kontar (2012) to derive the DEM maps from the aligned AIA channels. We employed six EUV channels (94, 131, 171, 193, 211, and 335 A) of the AIA instrument aboard the SDO. These specific wavelength channels are sensitive to emissions originating from different ionization states of various elements, providing a wide temperature coverage.

### Data-driven Simulation of the AR 12593

To understand the evolution of the 3D magnetic field configuration above the AR, we performed a data-driven simulation using the Coronal Global Evolutionary Model (CGEM; Hoeksema et al., 2020). The CGEM uses a time sequence of electric field maps, derived from photospheric vector magnetograms and Dopplergrams, to derive a time-dependent, magnetofrictional nonpotential model for the magnetic field.

Figure 2: Overview of SST and SDO observations taken on 2016-09-19. _Panel a_: continuum intensity at Fe i 6302 Å super-imposed on the HMI continuum intensity; _Panel b_: Line-of-sight magnetic field inferred from the Milne-Eddington inversion of the Fe i 6301 line pair superimposed over HMI magnetogram; _Panels c and d_: chromospheric intensity maps, observed with the CRISP and CHROMIS instruments at the SST; _Panels e–i_: AIA images observed in different channels. The blue and red contours indicate negative and positive polarity at the level of 800 G in the photosphere. Solar north is up.

The photospheric magnetic fields and Doppler velocity are obtained from the HMI/SDO, whereas the electric field patches are computed using the "PDFI" inversion method (Kazachenko et al., 2014; Fisher et al., 2020) in the photosphere. For the vector magnetogram we utilized the JSOC SHARP number 6764 of AR 12593. We perform simulations of the EFR from September 19, 2016 (06:00 UT) to September 20, 2016 (11:00 UT), covering the full time domain of SST observations. The simulations are performed in a 3D domain of 672\(\times\)372\(\times\)336 grid points with a grid spacing of 0\({}^{\prime\prime}\).5, which is similar to the spatial resolution of SDO/HMI. We set the simulation output time step to 24 seconds, which is less than the cadence of SST/CRISP spectropolarimetric observations. The spatial and temporal cadence of the simulation is sufficient to investigate the magnetic field topology of the bipolar regions, which generally have sizes of more than an arcsec in our observations.

Figure 3: Time and slit plots of magnetic field and Ca ii K intensity. _Top panel:_ The photospheric (left) and chromospheric (middle) \(B_{\rm LOS}\), and Ca ii K wavelength summed intensity (right). _Three bottom panels_: Temporal evolution of photospheric and chromospheric \(B_{\rm LOS}\), and Ca ii K wavelength summed intensity across four slits highlighted in the top panels. In each panel the black contours refer to the enhanced intensity in the Ca ii K line.

## 4 Results

### Mixed Polarity Region in the Photosphere and the Chromosphere

From the high-resolution observations of the EFR, several small-scale mixed-polarity regions can be clearly identified in Figure 3, both in the photosphere and the chromosphere. Strong patches of the line-of-sight magnetic field (\(B_{\rm LOS}\)) located in the photosphere can also be noticed in the chromosphere, but with a reduced strength of the magnetic field. Additionally, \(B_{\rm LOS}\) in the chromosphere covers a larger area compared to the photosphere, where the patches are located in small and compact regions.
Near the mixed-polarity region, we also observed a significant intensity enhancement in the Ca ii lines, as shown in Figure 3. Additionally, as demonstrated in Fig. 2, we observe intensity enhancement in all channels of AIA. Such enhancement suggests that a significant amount of energy is released due to magnetic flux cancellation leading to magnetic reconnection in the lower atmosphere, which can heat plasma at different heights. In Figure 3, \(B_{\rm LOS}\) in the photosphere and the chromosphere are derived using the ME inversion and the WFA, respectively. The time-distance diagram taken across the selected slits, passing through the mixed polarity regions, showed that in certain instances (L2 and L3), the location of increased intensity (in the Ca ii K line) was situated over the polarity inversion line (PIL) of the mixed polarity regions. At this location, \(B_{\rm LOS}\) decreased in both the photosphere and the chromosphere. However, in other cases (L1 and L4), intensity enhancement was not observed directly above the PIL, but was instead observed slightly away from it. We note that the slit width is of pixel-size and may not capture the brightening location at all positions. The Stokes Q and U signals in the Ca ii 8542 A are weak throughout the FOV, except at the locations of pores. To estimate a proxy for linear polarization in the chromosphere, we generated the total linear polarization (TLP) maps, using a methodology similar to that employed by Leenaarts et al. (2018). The TLP maps are computed as \(\sum_{i=0}^{n}\sqrt{Q_{i}^{2}+U_{i}^{2}}\), where the summation of Stokes Q and U profiles is performed over all wavelength positions. Such TLP maps provide qualitative information about the horizontal component of the magnetic field: a stronger TLP means a stronger horizontal magnetic field. Figure 4 demonstrates the temporal evolution of a mixed polarity region covering the L2 slit highlighted in Figure 3.

Figure 4: Temporal evolution of a bipolar region across the L2 slit shown in Fig. 3. The top two panels show the B\({}_{LOS}\) in the photosphere and the chromosphere. Middle panel shows the wavelength summed intensity of Ca ii K line. The bottom two panels show the horizontal magnetic field (\(B_{h}\)) in the photosphere and total linear polarization (TLP) in the chromosphere. The green and cyan contours indicate the locations of strong (700 G) \(B_{h}\) (Fe i 6173 Å) and Ca ii K intensity, respectively.

The maps illustrate that the intensity enhancement in the Ca ii K line lies on the PIL. Although the brightness varies with time, it remains near or on the PIL throughout the time domain of the SST. Additionally, magnetic flux cancellation is observed in both the photospheric and chromospheric LOS magnetic fields. In the chromosphere, high TLP is observed at the location of Ca ii K brightening, which is also reported by Leenaarts et al. (2018). For the second bipolar region (covering L1 slit), we find significant horizontal magnetic field (\(B_{h}\)) near the PIL in the photosphere (see Fig. 5). This observation indicates that the magnetic field lines tend to become more horizontal in the vicinity of the PIL. The brightening in Ca ii K intensity reveals a loop-like structure connecting regions of opposite polarity. Additionally, the temporal evolution analysis of this region shows a reduction in \(B_{h}\) within the photosphere. However, there is no corresponding change in the TLP of the chromosphere.
Similar to Figure 4, strong patches of TLP in the chromosphere noticed at the Ca ii K brightening sites. This observation suggests that small-scale loops located at the PIL may be disappearing due to magnetic flux cancellation, resulting in the submergence of reconnected smaller loops. ### Differential Emission Measure Analysis Over Bipolar regions For the DEM analysis, we considered a temperature range spanning from log\({}_{10}\) T = 5.7 to 7.6 [K]. Figure 6 illustrates the integrated DEM in various temperature bins, revealing a noticeable increase in temperature (above 1 MK) near a bipolar region. The electron density, \(n_{e}\), is calculated as \(\sqrt{EM/l}\), where the \(EM\) is the integrated DEM over a temperature range of log\({}_{10}\) T = 6.1 to 6.5, and \(l\) is the LOS length scale of the emission. We have adopted the value of \(l\) as 1 Mm, which is also assumed by Park 2020 for a converging bipolar region. This choice leads to an electron density of \(\sim\) 0.5\(\times\)10\({}^{9}\) cm\({}^{-3}\), which is typically observed in active regions (Aschwanden & Acton, 2001). Then we estimated the thermal energy flux from this region as \(E_{th}=3n_{e}k_{b}Tl^{3}\), that turn out to be 4.5\(\times\)10\({}^{23}\) ergs s\({}^{-1}\) (4.5\(\times\)10\({}^{7}\) ergs cm\({}^{-2}\) s\({}^{-1}\)) with T=2.2 MK, which is sufficient to power the local corona (Withbroe & Noyes, 1977). The temporal evolution of DEM and Ca ii K intensity across a black vertical slit indicated in Figure 6 is illustrated in Figure 7. The figure shows co-temporal and co-spatial enhancements in both the DEM and Ca ii K Figure 5: Same as Figure 4, but for a region across the L1 slit shown in Fig. 3. intensity. Brightening in Ca ii K persists throughout the SST time domain, but the DEM does not show significant emission after \(\sim 12\) minutes. Furthermore, the figure demonstrates that heating persists for a longer duration in the chromosphere compared to the corona along the selected slit. This observation suggests that the energy released during flux cancellation results in temperature enhancement both in the chromosphere and in the corona. ### Stratification of Physical Parameters using non-LTE Inversion. As discussed in the section, 3, we inferred the physical parameters, such as temperature, magnetic field, LOS velocity, and microturbulent velocity, by inverting multiple lines (Fe i 6301 A line pair, Ca ii K, and Ca ii 8542A) simultaneously using the STiC code. We obtained the stratification of the parameters as a function of the logarithm of the optical depth scale at 500 nm (log\(\tau_{500}\)). Figure 6: DEM integrated over five different temperature bins ranging from log\({}_{10}\) 5.7 to 7.57 [K]. First four panels showing the continuum intensity, \(B_{\rm LOS}\) in the photosphere, Ca ii 8542 and Ca ii K intensity maps. The blue and black contours indicate negative and positive polarities at the 800 G level in the photosphere. Figure 7: Temporal evolution of DEM (a) and Ca ii K summed intensity (b) across a black vertical line shown in Figure 6. We inverted all pixels along the selected slits passing through bipolar regions, the location of these slits can be seen in Figure 3. The inversion of pixels on the slits were performed using three different cycles (see Table 1). 
We note that the uncertainties in the parameters increase in higher layers ( log\(\tau_{500}<-4.5\)) as the observed chromospheric lines (Ca ii K, and Ca ii 8542A) are not sensitive to those regions (Leenaarts et al., 2018; Yadav et al., 2021). As an example the observed and best fit of the selected pixels marked in Figure 8 are demonstrated in the appendix (Figure 12). The stratification and temporal evolution of physical parameters, such as temperature, LOS velocity, LOS magnetic field, and microturbulent velocity, obtained along the slits at selected optical depths are shown in Figure 8 (for L2-L4 slits see Figures 13-15 in the appendix). Notably, there is a remarkable similarity between the LOS magnetic field obtained from the non-LTE inversion and the WFA both in the photosphere and the chromosphere. In all slits, the LOS magnetic field decreases as a function of optical depth, which \begin{table} \begin{tabular}{l c c c} \hline Parameters & Cycle 1 & Cycle 2 & Cycle 3 \\ \hline T & 7 & 9 & 10 \\ V\({}_{\rm LOS}\) & 2 & 4 & 6 \\ V\({}_{\rm turb}\) & 1 & 3 & 4 \\ B\({}_{\parallel}\) & 1 & 2 & 3 \\ B\({}_{\perp}\) & 1 & 2 & 3 \\ \(\phi\) & 1 & 1 & 1 \\ \hline \end{tabular} \end{table} Table 1: Number of nodes used for the temperature, LOS velocity (V\({}_{\rm LOS}\)), turbulent velocity (V\({}_{\rm turb}\)), LOS magnetic field (B\({}_{\parallel}\)), horizontal magnetic field (B\({}_{\perp}\)), and azimuth (\(\phi\)) during each cycle of the inversion. Figure 8: Temporal evolution (since 2016-09-16T09:31:29 UT) and stratification of physical parameters obtained from non-LTE inversion. For L1 slit, highlighted in Figure 3, the stratification of temperature, LOS velocity, microturbulent velocity and LOS magnetic field is demonstrated at selected optical depths. The observed and best-fitted Stokes profiles at the location indicated by colored cross symbols are shown in the appendix (Figures 12). is expected as gas pressure decreases several orders of magnitude from the photosphere to the chromosphere, whereas magnetic pressure decays more slowly (Wiegelmann et al., 2014). Therefore, the magnetic field tends to become more room-filling and weaker in the chromosphere. Additionally, the stratification in temperature exhibits an enhancement in the deeper layers (noticed \(\log\)\(\tau_{500}<-2\)). At the \(\log\)\(\tau_{500}=-3\), the temperature reaches up to 8 kK. These temperature enhancements are located close to the mixed polarity regions. Moreover, at the heated locations strong gradient in the LOS velocity is clearly visible. Moreover, the microturbulent velocity, ranges from 0 to 10 km s\({}^{-1}\), normally shows enhancement at the location of strong velocity gradients and heated layers. The simultaneous presence of temperature enhancement and strong upflows and downflows suggests that magnetic energy is being released in the lower atmosphere due to magnetic reconnection, such scenario is also noticed in simulation at the site of reconnection (Hansteen et al., 2019; Tiwari et al., 2022). ### Magnetic Field Topology Above Bipolar Regions To investigate the topology of the magnetic field above the bipolar regions, we reconstructed the 3D magnetic field lines as described in Section 3.3. While the SST observed a negative polarity patch of the EFR, we considered both polarity patches from HMI magnetograms to ensure the simulation satisfied the flux-balance condition. 
We analyzed the magnetic field topology over a specific region (including L1 and L2 slits) that exhibited strong intensity in the chromospheric lines. This region is clearly seen in both SST/CRISP and SDO/HMI magnetograms. We note that the SST observations have better spatial resolution compared to HMI, allowing us to clearly identify small-scale bipolar regions, which are not clearly visible in the HMI magnetograms. As an example the obtained field configuration in the EFR is demonstrated in Figure 9. For visualization purpose, different field lines are highlighted by different colors. It shows that longer loops connect strong polarity locations as indicated by the orange lines in the left panel. It also shows that loops with shorter heights (\(<3\) Mm) are located above a mixed polarity regions that consists of two bipolar regions. These opposite polarity patches are also associated with the serpentine field lines, normally observed in an EFR (Pariat et al., 2004; Tian et al., 2018; Yadav et al., 2019). The simulation illustrates that the magnetic field configuration are complex, where field lines originating from one bipolar region connect to the nearby regions. We also observed a strong presence of currents (\(J=\nabla\times B\)), located (\(<3\) Mm) near the PIL of the bipolar region (see Figure 9). These locations of intense current are typically considered as possible locations for magnetic field reconfiguration or magnetic reconnection. The region displaying strong currents also exhibits significant intensity enhancement in the chromospheric spectral lines and AIA images (see Figure 2). Furthermore, the presence of strong upflows and downflows, as inferred from the non-LTE inversion, suggests that magnetic reconnection likely occurred at this location. Additionally, the presence of dips in certain field lines (cyan colored lines) indicates that they have formed or rearranged after a magnetic reconnection event. This con Figure 9: The magnetic field topology over the EFR. _Left panel_: The simulated field lines connecting two main opposite polarities. Small-scale loops connecting bipolar regions are highlighted by a yellow box. _Right panel:_ Side view of magnetic field topology over the bipolar regions highlighted by yellow box in the right panel. For better visualization different field lines are displayed in distinct colors. The background image refer to the vertical component of magnetic field (\(B_{z}\)). The appearance of maximum current density (\(J\)) is also highlighted near the PIL. figuration bears similarity to the heating model based on reconnection proposed by Priest et al. (2018), which is discussed in the following section 4.5. ### Heating Model of a Converging Bipolar Region Recently, a series of papers proposed a theoretical model to explain chromospheric and coronal heating using magnetic reconnection driven by flux cancellation (Priest et al., 2018; Syntelis et al., 2019; Syntelis and Priest, 2020). They demonstrated that if two opposite-polarity regions, separated by a distance \(d\) and having magnetic flux \(\pm F\), situated below horizontal magnetic field \(B_{0}\), will undergo reconnection if \(d\) is smaller than the flux interaction distance, \(d_{0}\)(Longcope, 1998). The flux interaction distance can be expressed as follows: \[d_{0}=\sqrt{\frac{F}{\pi B_{0}}}. \tag{2}\] They also derived the location of a semicircular separator in the upper atmosphere, where the magnetic field vanishes. 
This location is given by the following expression: \[Z_{s}=\sqrt{d^{2/3}d_{0}^{4/3}-d^{2}}. \tag{3}\] Furthermore, in the case of magnetic reconnection, the total rate of magnetic energy released as heat is given by, \[\frac{dW}{dt}=0.4\ S_{i}=0.8\frac{2\pi}{3}\frac{\nu_{0}B_{0}^{2}}{\mu}d_{0}^{2 }\frac{M_{A0}}{\alpha}\frac{[1-(d/d_{0})^{4/3}]}{(d/d_{0})^{2/3}}, \tag{4}\] where \(S_{i}\) is the Poynting flux, \(\nu_{0}\) is converging speed of flux at the photosphere, \(M_{A0}\) and \(\alpha\) are Alfven Mach number and constant. The derivation of above equations are given in Priest et al. 2018. We calculated the above equation for a bipolar region by adopting \(\alpha=0.1\) and \(M_{A0}=0.1\)(Priest, 2014; Priest et al., 2018). These values are also adopted by Park 2020 to investigate a small-scale magnetic flux cancellation event in a quiet-Sun region. The physical quantities such as \(F\), \(d\), \(v_{0}\), and \(B_{0}\) can be derived from observations. As an example, for a bipolar patch shown in Figure 5, we estimate \(F\) as half of the total unsigned magnetic flux of the patch in the photosphere. We note that the theoretical model considers equal negative and positive flux, in our case the flux is different in opposite polarity patches. To determine \(d\), we use the magnetic flux-weighted centroid position of the opposite polarity (similar approach adopted by Yadav and Kazachenko, 2023). Furthermore, the converging speed is estimated from the temporal evolution of \(d\). To determine \(B_{0}\), we utilize the 3D magnetic field obtained from CGEM modeling. We take the average of the horizontal magnetic field, above the selected FOV, within the height range of 10 to 15 Mm, which yield a value of 50 G. Subsequently, with these obtained values we can estimate Eqs. 2-4. The temporal evolution of the parameters obtained for the selected patch (see Figure 5) is shown in 10. The figure demonstrates that the magnetic flux decreases (\(\sim 10^{16}\) Mx s\({}^{-1}\)) as a function of time due to flux cancellation. Consequently, the separation between opposite-polarity patches also decreases with a converging speed of 0.4 Km s\({}^{-1}\). During the SST observations, \(d_{0}\) for this particular region stays around 4.5 Mm. We also note that \(d\) value is always less than the interaction distance. This implies that this region can have magnetic reconnection in the atmosphere. The estimated height of magnetic reconnection separator, \(Z_{s}\), is between 2 to 3 Mm, which is also in agreement with the location of strong vertical current obtained from the CGEM simulation. The magnetic energy released as heat during reconnection, \(dW/dt\), varies from 8 to 11 \(\times\) 10\({}^{26}\) ergs s\({}^{-1}\) or \(\sim\)3\(\times\) 10\({}^{9}\) ergs cm\({}^{-2}\) s\({}^{-1}\) within the selected FOV. This value is two orders of magnitude larger than the thermal Figure 10: Temporal evolution of physical quantities for a bipolar region shown in Fig 5. _Left panel:_ The unsigned magnetic flux in the photosphere and chromosphere is represented by a solid blue and orange line, respectively. _Middle panel:_ The temporal evolution of interaction distance (\(d_{0}\)), separation between two opposite polarity (d), and the height of magnetic reconnection separator (\(Z_{s}\)). _Right panel:_ The total rate of magnetic field energy releases as heat, \(dW/dt\). energy estimated from the DEM approach, which only considers coronal losses. 
Typically, the chromospheric energy losses tend to exceed coronal losses significantly (Withbroe & Noyes, 1977). ### The Photospheric Poynting flux To estimate the upward transport of magnetic energy, we estimated the vertical Poynting flux as (Welsch, 2015), \[S_{z}=[v_{z}B_{h}^{2}-(\mathbf{v_{h}}\cdot\mathbf{B_{h}})B_{z}]4\pi, \tag{5}\] where \(v_{z}\) and \(v_{h}\) are the vertical and horizontal component of the velocity. \(B_{h}\) and \(B_{z}\) are the horizontal and vertical component of magnetic field. We obtain the vertical velocity, \(v_{z}\), and the magnetic field vector, (\(B_{x}\), \(B_{y}\), \(B_{z}\)), by inverting the Fe I 6301 A line pair, obtained from the SST/CRISP, using ME approach in the photosphere (see Sect. 3). The horizontal components of velocity in the photosphere, \(v_{x}\) and \(v_{y}\), are determined using the FLCT method (Fisher & Welsch, 2008) with \(Bz\) component of magnetic field obtained from the SST/CRISP. Figure 11 shows the estimated \(S_{z}\) over the observed FOV. The black box shown in the figure indicates the location where intense brightening is observed in the chromospheric lines and coronal images. The temporal evolution of \(S_{z}\) in this region demonstrates that the positive value varies around \(\sim\)\(5\times 10^{8}\) ergs cm\({}^{2}\) s\({}^{-1}\), which is sufficient to heat the local chromosphere and corona (Withbroe & Noyes, 1977). We also note that the estimated energy release from the theoretically derived equation (Eq. 4) is \(\sim 3\times 10^{9}\) ergs cm\({}^{2}\) s\({}^{-1}\), whereas the Poynting flux (Eq. 5) is around \(\sim 5\times 10^{8}\) ergs cm\({}^{2}\) s\({}^{-1}\) (see Fig 11), which is lower by a factor of six. Moreover, the fluctuations in the temporal behavior of \(S_{z}\) demonstrate oscillatory patterns that could potentially be attributed to the five-minute oscillations occurring in the photosphere (Ulrich, 1970). ## 5 Discussion In our study, we investigated the thermal, kinematic, and magnetic structures of small-scale bipolar regions present in the vicinity of an EFR, and their impact on chromospheric and coronal heating. To achieve this, we utilized multi-line spectropolarimetric observations of an EFR located at the disk center. The observations were performed simultaneously in the Ca ii 8542 A, Ca ii K, and Fe i 6301 A lines using the CRISP and CHROMIS instruments at the SST. By combining these high-resolution, multi-line observations, we infer the stratification of physical parameters such as temperature, magnetic field, LOS velocity, and micro-turbulent velocity across selected regions using the STiC, a non-LTE multi-line inversion code. Additionally, we used co-aligned AIA images to understand the thermal distribution in the corona above bipolar events using DEM approach. Furthermore, we performed a data-driven magneto-frictional simulation to understand the magnetic field topology of these events. We also investigated how the total rate of magnetic energy released via magnetic reconnection resulting from flux cancellation contributes to chromospheric and coronal heating using cancellation nanoflare model. Our observations demonstrated that the temporal evolution of a converging bipolar event, leading to flux cancellation at the rate of \(\sim 10^{16}\) Mx s\({}^{-1}\), not only resulted in significant brightening in the chromospheric lines (such as Ca ii 8542 A and Ca ii K), but also in the transition region and coronal images observed by AIA. 
Magnetic reconnection, driven by the flux cancellation, produced detectable signatures in the chromospheric spectral lines, which exhibited complex asymmetric shapes attributed to intense heating and velocity gradients. These shapes can be attributed to the occurrence of magnetic reconnections within the lower solar atmosphere. The non-LTE inverted model atmosphere provided clear evidence of heating and strong upflows/downflows at various layers above bipolar regions.

Figure 11: The vertical Poynting flux (\(S_{z}\)) in the photosphere (left panel) and the evolution of average \(S_{z}\) (right panel) in a black box shown in the left panel.

For the selected pixels passing through bipolar regions, the stratification and temporal evolution of temperature show that the temperature in the lower chromosphere (e.g., \(\log\)\(\tau_{500}\)\(\sim\)\(-\)2; in case of slit L2 shown in Figure 8) rose up to \(\sim\)8 kK. Such temperature enhancements are also observed in flares (Kuridze et al., 2017; Yadav et al., 2021). The location of heating does not always lie on the polarity inversion line (PIL) but is situated close to it. This could be attributed to the 3D nature of reconnection or the presence of serpentine structures in the magnetic field lines within the EFR. Furthermore, at these locations the LOS velocity exhibits both upflows (blueshift) and downflows (redshift). Typically, the upflows (\(\sim\)10 km s\({}^{-1}\)) are observed in the higher layers whereas the downflows (\(\sim\)10 km s\({}^{-1}\)) are seen in the deeper layers. Such flows can be clearly identified in the spectral profiles of selected pixels demonstrated in Figure 12. In some cases, the upflows/downflows can reach up to \(\sim\)20 km s\({}^{-1}\) (see profiles shown in Figures 12-15). The presence of increased temperature and strong bidirectional flows provides evidence of a reconnection event in the lower solar atmosphere. We compared our observations with a cancellation nanoflare model given by Priest et al. 2018. According to this model, two converging bipolar regions on the photosphere will reconnect if their separation is below the interaction distance (\(d_{0}\)), and thus can heat the upper atmosphere. We observationally derived \(d_{0}\), the magnetic reconnection separator (\(Z_{s}\)), and the rate of magnetic field energy released as heat (\(dW/dt\)). In our selected bipolar region, we find that their separation is always less than \(d_{0}\), and the field lines associated with them are likely to reconnect somewhere in the chromosphere. The estimated \(Z_{s}\) value, which indicates the location of magnetic reconnection, is around 2.5-3 Mm. This height range is also in agreement with the location of total vertical currents derived from the data-driven simulation. The estimated released energy \(dW/dt\) in the selected region is approximately \(\sim\)3\(\times\) 10\({}^{9}\) ergs cm\({}^{-2}\) s\({}^{-1}\). This value is roughly six times more than the energy flow estimated using the Poynting flux (see Eq. 5). This difference could be due to the simplistic assumption of a bipolar structure consisting of two opposite polarity sources with equal flux in the theoretical derivation. However, in reality, the photospheric structure is much more complex. Our observations demonstrate the presence of two bipolar regions with different flux in each polarity. Unlike the theoretical model proposed by Priest et al. 2018, the field lines obtained from data-driven simulations reveal connections to nearby opposite field locations.
The theoretical heating model based on reconnection may need to incorporate multiple bipolar structures to accurately simulate real observations and explain the various aspects of heating caused by reconnection in the lower solar atmosphere. ## 6 Conclusion Our study provides valuable insights into the thermal, kinematic, and magnetic structures of small-scale bipolar regions in a flux emerging region and their influence on chromospheric and coronal heating. Utilizing multi-line spectropolarimetric observations, non-LTE inversions, co-aligned AIA images, and data-driven magneto-frictional simulations, we found that converging bipolar events, resulting in flux cancellation and magnetic reconnection, led to significant chromospheric brightening and complex spectral signatures. This resulted in significant heating and velocity gradients in the lower solar atmosphere. The observed temperature enhancement, upflows, and downflows retrieved from non-LTE inversion also suggest that the magnetic reconnection is occuring in the lower solar atmosphere, specially around the temperature minimum and above it. The released magnetic energy from flux cancellation was shown to sufficiently heat the local chromosphere and corona. We note that the spectropolarimetric signals in the Ca ii 8542 A line were inadequate for deriving the horizontal magnetic field in the chromosphere. To fully understand the magnetic field changes associated with magnetic reconnection and heating mechanisms, it is critical to obtain sufficient linear polarization signals in both the photosphere and chromosphere. State-of-the-art instruments and new generation solar telescopes like the Daniel K. Inouye Solar Telescope (DKIST; Tritschler et al., 2015) and the European Solar Telescope (EST; Matthews et al., 2016) will be instrumental in achieving this goal. We would like to thank the anonymous referee for the comments and suggestions. The Swedish 1-m Solar Telescope is operated on the island of La Palma by the Institute for Solar Physics of Stockholm University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. The Institute for Solar Physics is supported by a grant for research infrastructures of national importance from the Swedish Research Council (registration number 2021-00169). We acknowledge support from NASA LWS NNH17ZDA001N, NASA LWS 80NSSC19K0070, NASA ECIP 80NSSC19K0910, NASA HSR NNH21ZDA001 and NSF CAREER award SPVKK1RC2MZ3 (R.Y. and M.D.K.). Resources supporting this work were provided by the NASA HighEnd Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. This work utilized the Alpine high performance computing resource at the University of Colorado Boulder. Alpine is jointly funded by the University of Colorado Boulder, the University of Colorado Anschutz, Colorado State University, and the National Science Foundation (award 2201538). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (SUNMAG, grant agreement 759548). We acknowledge the use of the visualization software VAPOR (Li et al., 2019) for generating relevant graphics. Data and images are courtesy of NASA/SDO and the HMI and AIA science teams. This research has made use of NASA's Astrophysics Data System. 
We acknowledge the community effort devoted to the development of the following open-source packages that were used in this work: NumPy (numpy.org), matplotlib (matplotlib.org) and SunPy (sunpy.org).
2305.20026
Regulated Pure Pursuit for Robot Path Tracking
The accelerated deployment of service robots have spawned a number of algorithm variations to better handle real-world conditions. Many local trajectory planning techniques have been deployed on practical robot systems successfully. While most formulations of Dynamic Window Approach and Model Predictive Control can progress along paths and optimize for additional criteria, the use of pure path tracking algorithms is still commonplace. Decades later, Pure Pursuit and its variants continues to be one of the most commonly utilized classes of local trajectory planners. However, few Pure Pursuit variants have been proposed with schema for variable linear velocities - they either assume a constant velocity or fails to address the point at all. This paper presents a variant of Pure Pursuit designed with additional heuristics to regulate linear velocities, built atop the existing Adaptive variant. The Regulated Pure Pursuit algorithm makes incremental improvements on state of the art by adjusting linear velocities with particular focus on safety in constrained and partially observable spaces commonly negotiated by deployed robots. We present experiments with the Regulated Pure Pursuit algorithm on industrial-grade service robots. We also provide a high-quality reference implementation that is freely included ROS 2 Nav2 framework at https://github.com/ros-planning/navigation2 for fast evaluation.
Steve Macenski, Shrijit Singh, Francisco Martin, Jonatan Gines
2023-05-31T16:55:08Z
http://arxiv.org/abs/2305.20026v1
# Regulated Pure Pursuit for Robot Path Tracking ###### Abstract The accelerated deployment of service robots have spawned a number of algorithm variations to better handle real-world conditions. Many local trajectory planning techniques have been deployed on practical robot systems successfully. While most formulations of Dynamic Window Approach and Model Predictive Control can progress along paths and optimize for additional criteria, the use of pure path tracking algorithms is still commonplace. Decades later, Pure Pursuit and its variants continues to be one of the most commonly utilized classes of local trajectory planners. However, few Pure Pursuit variants have been proposed with schema for variable linear velocities - they either assume a constant velocity or fails to address the point at all. This paper presents a variant of Pure Pursuit designed with additional heuristics to regulate linear velocities, built atop the existing Adaptive variant. The _Regulated Pure Pursuit algorithm_ makes incremental improvements on state of the art by adjusting linear velocities with particular focus on safety in constrained and partially observable spaces commonly negotiated by deployed robots. We present experiments with the Regulated Pure Pursuit algorithm on industrial-grade service robots. We also provide a high-quality reference implementation that is freely included ROS 2 Nav2 framework at [https://github.com/ros-planning/navigation2](https://github.com/ros-planning/navigation2) for fast evaluation. **Keywords:** Service Robots, Mobile Robots, Motion Planning, Path Planning ## 1 Introduction Dynamic Window Approach (DWA) [1], Pure Pursuit (PP) [6], and Model Predictive Control (MPC) [2] are by far the most commonly deployed path trackers. They all have a strong heritage for reliability in a wide range of environmental conditions. DWA and MPC are often, but not always, formulated as multi-objective trajectory generation problems to maximize criteria such as avoiding dynamic obstacle collisions on top of path tracking. This has made them particularly well suited for many robotics applications where dynamic robot behaviors are rewarded. A great deal of work has been conducted on these allowing them to be found on many commercially available robots today. However, there still exist many applications of deployed robot systems where this can be considered a detractor. Surveyed among high-end research and service robot navigation systems, pure path tracking continues to be a common theme in many robots [3, 4]. Among single-objective path trackers, a simple and reliable method continues to be exploited decades after its development: Pure Pursuit. Pure Pursuit uses simple geometry to find the curvature of a path required to drive a robot towards a given point on the path. The algorithm itself does not place any stability restrictions on the translational velocities during operation, however it also lacks any schema for selecting them. Near-universally, implementations use a fixed speed. Many variations of Pure Pursuit exist, however most address the more obvious area of the selection of lookahead points, which largely aids in stabilizing convergence behaviors towards the path at a wider range of velocities. The Pure Pursuit algorithm was not developed with service and industrial robots in mind, which have additional safety requirements making it further unrealistic to move at a fixed velocity. 
This work proposes an incremental improvement on the Pure Pursuit path tracking algorithm by describing a reference method of adjusting translational and rotational velocities to improve safety and operability in a broad range of common deployed robot applications. We improve the Adaptive Pure Pursuit (APP) algorithm [7] by regulating velocities via penalizing sharp changes in path curvature and proximity to obstacles - two of the most common events requiring conscientious navigation behaviors. The _Regulated Pure Pursuit_ (RPP) algorithm slows sharp turns into partially observable dynamic environments (aisles, hallways, intersections) to reduce the likelihood and impact of potential collisions. It also reduces linear velocity in close proximity to obstacles such as people and fixed infrastructure to reduce likelihood of collision in constrained indoor environments. Finally, it additionally includes preemptive collision detection missing from other variations. Another main contribution of this work is in describing and providing free and high quality implementations of PP, APP, and RPP for evaluation. It is integrated into the Nav2 mobile robot navigation system commonly used by researchers and adds a new capability to the framework [5]. This work is well documented, tested, and is in use on robots deployed today. This paper is organized as follows. First, in section 2, we will describe the conventional Pure Pursuit Algorithm, providing a mathematical formulation that will allow us to describe related works that provide variations to the original algorithm. In section 3, we describe the Regulated Pure Pursuit Algorithm, our main contribution to this paper. Section 4 will describe its implementation in a real robotic system, and in section 5, we will experimentally validate the improvement introduced with our contribution. Finally, we will provide the conclusions in section 7. ## 2 Related Work Pure Pursuit [7] is a widely used algorithm for path tracking. It is simple but effective. Considering a path \(\mathcal{P}\) as an ordered list of points \(\mathcal{P}=\{p_{0},p_{1},...,p_{n}\}\) where \(p_{i}=(x_{i},y_{i})\in\mathcal{P}\). A local trajectory planner is a function \(f\) that determines the linear and angular velocity to track a reference path \(\mathcal{P}_{t}\) at time \(t\) (Eq. 1). \[(v_{t},\omega_{t})=f(\mathcal{P}_{t}) \tag{1}\] Figure 1 shows visual geometric representations of Pure Pursuit. First, it determines the closest point \(p_{r}\) on \(\mathcal{P}_{t}\) to the robot position. Using a given lookahead distance, \(L\), the lookahead point \(p_{l}\) is determined as the first \(p_{i}\) at least \(L\) distance away from \(p_{r}\) in Eq. 2. \[dist(p_{i})=\sqrt{(x_{r}-x_{i})^{2}+(y_{r}-y_{i})^{2}}\] \[p_{l}=p_{i}\in\mathcal{P}_{t},\left\{\begin{aligned} & dist(p_{i-1})<L\\ & dist(p_{i})\geq L\end{aligned}\right. \tag{2}\] With a known \(p_{l}\), we can determine curvature of the circle (recall, \(R=1/\kappa\)) using simple geometry. If \(\mathcal{P}_{t}\) is represented in vehicle base coordinates, \(\mathcal{P^{\prime}}_{t}\), where the robot position is the origin, then the curvature can be represented as \[\kappa=\frac{2\,y_{l}^{\prime}}{L^{2}} \tag{3}\] where \(\kappa\) is the path curvature required to drive the robot from its starting position to the lookahead carrot, \(y^{\prime}_{l}\) is the lateral coordinate of the lookahead point \(p^{\prime}_{l}\), and \(L\) is the desired distance between \(p_{r}\) and \(p_{l}\). 
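As a concrete illustration of Eqs. (2) and (3), a single Pure Pursuit iteration can be written in a few lines: find the closest path point, pick the first point at least \(L\) away from it, express that point in the robot frame, and turn the resulting curvature into an angular rate for a chosen linear speed. The sketch below is for intuition only and is not the reference implementation; the function name, the fixed linear speed, and the fallback to the final waypoint are illustrative choices.

```python
import math

def pure_pursuit_step(path_xy, robot_xy, robot_yaw, lookahead, v):
    """One Pure Pursuit iteration: returns (v, w) commands.
    path_xy is a list of (x, y) waypoints in the global frame."""
    # Closest path point p_r to the robot.
    i_r = min(range(len(path_xy)),
              key=lambda i: math.dist(path_xy[i], robot_xy))
    # First point at least `lookahead` away from p_r (Eq. 2);
    # fall back to the last waypoint near the end of the path.
    p_l = path_xy[-1]
    for p in path_xy[i_r:]:
        if math.dist(p, path_xy[i_r]) >= lookahead:
            p_l = p
            break
    # Express the lookahead point in the robot (base) frame.
    dx, dy = p_l[0] - robot_xy[0], p_l[1] - robot_xy[1]
    y_local = -math.sin(robot_yaw) * dx + math.cos(robot_yaw) * dy
    # Curvature of the arc through the lookahead point (Eq. 3), then w = v * kappa.
    kappa = 2.0 * y_local / (lookahead ** 2)
    return v, v * kappa
```

The Adaptive variant discussed next simply recomputes `lookahead` every cycle from the current translational speed.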
Figure 1 shows this visually, where \(L\) is represented geometrically as the circle's chord. With curvature to drive towards the lookahead point, the commands can be send to a robot controller. This process is then updated at the desired rate. The primary parameters of the Pure Pursuit path tracker are simply the velocity of travel \(v_{t}\) and the distance \(L\) along the path used to select the lookahead point \(p_{l}\). In the standard formulation, this lookahead point \(p_{l}\) is a distance \(L\) from the robot tuned to achieve an acceptable trade-off between oscillations centered around the path (shorter distances) and slower convergence (longer distances). There exists a broad range of admissible lookahead distances [6]. PP and its variants are applicable to Ackermann and differential-drive robots due to PP's formulation supporting dynamics with longitudinal motion and a turning rate in body-fixed frame. For Ackermann robots to use PP, the path must be feasible given the kinematic-constraints of the platform - while differential-drive robots may follow any holonomic path using PP. While omni-directional robots may utilize PP variants, they will be restricted from performing lateral movements. There are several known downsides of this approach. In high curvature situations, Pure Pursuit is known to have overshoot or undershoot behaviors resulting in path deviations, even in a well tuned system [6]. This is typically not a major concern for autonomous driving applications which naturally has a minimum turning radius limit, but a more substantive issue for smaller-scale applications like industrial and consumer robots. It also does not specify any translational velocity specification criteria during execution. While this can be beneficial as it allows for a great deal of flexibility with different linear velocity profiles, in practice, without described methods, nearly all known variants use a constant translational velocity profile. This is an unsafe defacto-standard. Variants of this algorithm have been proposed to increase path tracking stability by varying computations of the lookahead point. MIT's entry into the DARPA Urban Challenge implemented the _Adaptive_ Pure Pursuit (APP) algorithm for lane following while varying the lookahead distances proportionally to the translational velocity [7]. For velocities in the operating range, a mapping of lookahead distances to velocities is required such that they have an acceptable trade-off between oscillation and slower convergence to the path. A common formulation for this is in Eq. 4, where \(L_{t}\) is the lookahead distance, \(v_{t}\) is the translational velocity, and \(l_{t}\) is a lookahead gain representing the amount of time to project \(v_{t}\) forward [8]. \[L_{t}=v_{t}l_{t} \tag{4}\] A recent variant has been created to significantly improve path tracking accuracy of Pure Pursuit [7]. This variant also modifies the process of determining the lookahead point, but instead adjusts this point off the path to address the long-standing edge case of short-cutting during path tracking in substantial turns. While it also contains a heuristic change to the translational velocity during that edge-case, its policy is derived from on-road properties and ackermann kinematics that does not generalize to other applications outside of on-road driving of Ackermann vehicles. It does not solve the problems facing deployment of PP techniques to general mobile robotics applications for which this work focuses (e.g. 
practicable translational velocity control); though its improvements in path tracking accuracies are notable. Figure 1: Geometry of finding the path curvature. Pure Pursuit nor its variations account for dynamic effects of the vehicle. As PP is geometrically derived, vehicle dynamics are not modeled in this style of path tracking algorithm. There are other types of path trackers that do account for vehicle dynamics [9]. This paper uses PP and its variants for comparison to offer a candid contrast with the same or directly analog parameters to clearly indicate the differences in performance due to the contribution 1. Footnote 1: Not leaving the opportunity for misleading results by comparing unrelated families of methods with unknown degrees of tuning ## 3 Regulated Pure Pursuit The Regulated Pure Pursuit algorithm is designed for service and industrial mobile robots in real-world constrained and partially observable environments. It provides methods for adapting the robot's translational velocity to current conditions; robot systems cannot simply barrel through aisles and facilities at full speed without regard. These methods are linear regulation cost functions that provide high-quality behavior across a variety of practical mobile robot environments derived from commonplace requirements on reducing robot velocities in the presence of sharp turns or when operating in confined regions. The first phase of the Regulated Pure Pursuit algorithm is to transform the input path \(\mathcal{P}_{t}\)\(\{p_{0},...,p_{j},p_{r},...,p_{n}\}\) into the robot's base coordinate frame \(\mathcal{P^{\prime}}_{t}\) and prune it. In doing so, determining the curvature of the path is reduced to the simple algebraic expression shown in Eq. 3. Before transformation, \(p_{r}\) is determined and all prior points \(\{p_{0},...,p_{j}\}\) are permanently pruned from the stored path to prevent unnecessary future transformation of obsolete data. The transformed path (\(\{p^{\prime}_{r},...,p^{\prime}_{n}\}\)) is also pruned for all \(p_{i}\) points where \(dist(p^{\prime}_{r},p^{\prime}_{i})>>L_{t}\) as they are sufficiently far away that they will never need to be considered at \(t\). These far path points continue to be held in the stored path for future iterations, until a new path is received, as the robot progresses along the path. Next, RPP will utilize the same lookahead selection mechanics as Adaptive Pure Pursuit described in Eq. 4. The lookahead distance \(L_{t}\) is thusly proportional to the speed \(v_{t}\) and a lookahead gain \(l_{t}\) such that longer distances are used while moving faster. This stabilizes the path tracking behavior over larger ranges of translational velocities [10]. This distance is used to select the lookahead point \(p_{l}\). While interpolating between path points was found to demonstrably improve smoothness on sparse paths at autonomous vehicle speeds, empirically this did not contribute much benefit at the service and industrial robot speeds using typical grid map planning resolutions (0.025 m - 0.1 m) [8]. However, interpolation is beneficial for use in sparser path resolutions (0.1 - 1.0m). The desired linear velocity, \(v_{t}\), is next further processed by the curvature and proximity heuristics. Both heuristics are applied to linear velocity and we take the maximum of the two. 
The purpose of the **curvature heuristic** is to slow the robot to \(v^{\prime}_{t}\) during sharp turns into partially observable environments, such as when entering or exiting hallways and aisles commonly found in retail, warehouses, factories, schools, and shopping malls. This allows for significantly safer traversal when making blind turns. This heuristic is applied to the linear velocity \(v_{t}\) when the change in curvature \(\theta\) is above a minimum threshold \(T_{\theta}\). This minimum radius restricts velocity scaling in minor turns or path variations that do not require slowing. The curvature velocity \(v^{\prime}_{t}\) returned by this heuristic is selected as: \[v^{\prime}_{t}=\begin{cases}v_{t}&\kappa>T_{\kappa},\\ \frac{v_{t}}{r_{min}\,\kappa}&\kappa\leq T_{\kappa}\end{cases} \tag{5}\] Where \(r_{min}\) is the minimum radius to apply the heuristic. This formulation is a mathematical reduction of a simple error calculation between the minimum radius and the radius of a circle represented by \(\kappa\) and can be trivially derived. The **proximity heuristic** is applied to the linear velocity \(v_{t}\) when the robot becomes in close proximity to dynamic obstacles or fixed infrastructure. The purpose of this is to slow the robot when in constrained environments where the potential for collision is particularly high. Reducing the speed near fixed infrastructure lowers the likelihood of collision by lessening the impact of small path variations in tight spaces. Lowering the speed of industrial and service robots in close proximity to dynamic agents, such as humans, is a common safety requirement- allowing a robot to reactively stop faster to prevent potential injury. The linear formulation of this heuristic reduces the speed by the ratio of \(d_{O}/d_{prox}\) with a gain, \(\alpha\), to adjust the response for individual systems. The other formulations tested, such as exponential and quadratic, far too significantly penalized proximity to objects and contained only a narrow band of gains which would result in an acceptable trade-off between proximity to obstacles and velocities to accomplish the robotic task. This linear heuristic in Eq. 6 has a broad range of \(\alpha\) which may be finely tuned by a system designer and is derivative of the well-used Adaptive Pure Pursuit's formulation [10]. \[v_{t}^{\prime}=\begin{cases}v_{t}\ \frac{\alpha}{d_{O}}&d_{O}\leq d_{prox}\\ v_{t}&d_{O}>d_{prox}\end{cases} \tag{6}\] Where \(d_{prox}\) is the proximity distance to obstacles to apply the heuristic, \(d_{O}\) is the current distance to an obstacle, and \(\alpha\) is a gain to scale the heuristic function for aggressive behavior, with the requirement that \(\alpha\leq 1.0\). A higher \(\alpha\) lowers the velocity of the robot in proximity to obstacles more expeditiously. The value of \(d_{prox}\) should be established based on the system requirements of a robot's application for how close an obstacle can be before the robot begins slowing its maximum velocity. After velocity regulation, the algorithm then determines the path curvature using Eq. 1. The angular velocity is computed using the regulated velocity, not desired linear velocity, which prevents consequential undershoot behavior relative to target curvatures [6]. Finally, the angular velocity, \(\omega_{t}\), is then trivially found as Eq. 7. \[\omega_{t}=v_{t}^{\prime}\ \kappa \tag{7}\] The final step of the algorithm is to check our path tracking command for current or imminent collisions, new to RPP. 
A given angular velocity \(\omega_{t}\) and regulated linear velocity \(v_{t}^{\prime}\) can be projected forward in time, resulting in a circular arc. Points on the arc are sampled at the grid map cell resolution forward for a set duration. Collision checking is done based on a duration to collision rather to the lookahead point such that the robot is always, at minimum, a set duration from collision. At slow speeds, it may not be sensible to collision check to a lookahead point tens of meters, or hundreds of seconds, away. Rather, a temporal schema allows for fine maneuvers in confined spaces where the current velocity commands may not be admissible a short distance - but long time - away. This controller is known to converge from the stability analysis completed in [11]. Ollera's work on the general Pure Pursuit algorithm applies to Regulated Pure Pursuit as well, showing that straight paths and constant curvature paths are stable. Pure Pursuit itself is independent of velocity in its formulation. It is only used in the final step to convert the path curvature into a rotational velocity for a base to track. The regulation heuristics proposed in this variant do not change the basis of stability of the root algorithm. ## 4 Implementation Another contribution of this work is a high-quality reference implementation of Regulated Pure Pursuit. This reference implementation is available as one of the default path tracking algorithm plugins in the new and improved ROS 2 Navigation System, known as Nav2. Nav2 is a scalable navigation framework with multiple algorithm implementations, documentation, and support for building modern and reliable research and commercial navigation systems [5]. This optimized C++ implementation has 92% unit test coverage and is used in the experiments in Section 5. It also contained a few additional features that are highlighted in this section. Rather than creating distinct implementations of Pure Pursuit, Adaptive Pure Pursuit, and Regulated Pure Pursuit, all three have been built into a single reference implementation parameterized to enable each specific behavior. This allows us to test these algorithms easily and analyze their run-time performance more closely knowing that they share the vast majority of their computations. This also allows researchers and developers to quickly evaluate and tune these features by simply changing a handful of parameters from the available binaries. Additionally, there is a setting to allow the robot to slow its speed when approaching the target goal pose. This feature allows the robot to come to a more gradual stop. When the robot is within close proximity to the goal, the translational velocity is lowered proportional to the remaining distance, up to a minimum viable velocity to make progress. This work is specifically targeted at industrial and service robots, many of which are differential drive. While this is of no technical concern to Pure Pursuit, many differential drive robots will utilize holonomic search-based path planners (such as Dijkstra's or A*) rather than kinematically feasible planners frequently used by Ackermann vehicles [13]. This is of concern to the Regulated Pure Pursuit implementation as the relative heading of the path may not align with the robot's starting heading. In this case, it is beneficial for the robot to be able to rotate to a rough starting heading before beginning to track the holonomic path. 
Further, rotating to the final heading when the robot is within the translational goal tolerances is useful when being deployed in orientation-sensitive applications. These are non-issues when using feasible planning algorithms that, no matter the drivetrain. However, it is useful to consider this situation since holonomic grid search algorithms are popular. Further, it is a common approach for local trajectory algorithms to leverage a rolling environmental model with the robot at its center. The predictive collision detection algorithm will check for collisions some \(N\) seconds in the future given the robot's current commanded speed. If a near-future collision is detected, the robot is stopped. When a robot is traveling at high speeds, this may actually lapse outside of the bounds of a rolling environmental model if not carefully tuned. The implementation offers protections from such situations and issues stern recommendations to prevent the serious safety violation. Users are able to tune all of the parameters related to PP, APP, and RPP. In particular, the heuristics have parameters \(r_{min}\), \(\alpha\), and \(d_{prox}\). It is possible for these to be poorly tuned such that a robot would drop to an impractically low speed when in close proximity to an obstacle or in a turn. To prevent the robot from traveling _too_ slow, a minimum speed threshold is exposed such that the robot's speed will never be set below a parameterized value. Note: the parameters and speeds in the experiment were set up such that this did not impact our experimental results in Section 5. An open-source community contributed feature made available is support for reversing - not only driving forward. When a path contains a cusp or discontinuity in direction, the controllers will reverse from forward to back and vis-a-via. When paired with a planner that can generate such paths, such as the Hybrid-A* planner in Nav2, the robot can track more complex paths. ## 5 Experiments and Analysis This section describes the experiments to demonstrate the benefits of our contribution compared to other existing Pure Pursuit variants. One experiment was conducted in simulation and three others were conducted with a physical Tiago robot (Figure 2). Tiago is used in industrial and service applications and the test environment is shown in Figure 7, corresponding to a university campus building. In all three hardware experiments below, the same navigation configurations are utilized. The maximum linear speed \(v_{max}\) is set to \(0.8m/s\), the maximum acceleration \(a_{max}\) is \(0.2m/s^{2}\), and the maximum angular speed \(\omega_{max}\) is \(3.2rad/s^{2}\). The lookahead distance \(L_{t}\) is set between \(0.25m\) and \(1.2m\) and lookahead time is \(1.0s\) for Adaptive and Regulated Pure Pursuit; while Pure Pursuit uses the maximum of \(1.2m\). In all four experiments, no replanning was utilized such that the distance to path data is regular and can be meaningfully analyzed. SLAM Toolbox was used to generate the 2D map and AMCL was used to localize within it [5, 14]. ### Path Tracking Experiment A simulation experiment was conducted to analyze the improvements Regulated Pure Pursuit can offer during sharp turns in ideal conditions. This simulation was conducted using the Turtlebot 3 robot, the Gazebo simulator, and an empty Figure 2: The Pal Robotics Tiago used in the experiments [12]. environment with ground truth information made available to remove the contribution of odometric and localization error. 
The robot followed the reference step function path indicated by the red piece-wise line in Figures 3 and 4 using the three variants of Pure Pursuit. Figure 3 displays the comparison with the different methods. The same parameters were utilized in this experiment as the hardware experiments, with the exception of the desired linear velocity, set to \(1.0m/s\) and the curvature heuristic minimum radius, set to \(1.5m\). The overshoot observed during sharp turns was notably lower in Regulated Pure Pursuit due to the controlled slow down during sharp changes in curvature allowing the robot to track the path more finely. The slow down also triggers the use of a closer adaptive lookahead point further increasing local stability. The mean tracking error from the reference path of Regulated Pure Pursuit was merely \(0.03m\). The others experienced over an order of magnitude difference at \(0.10m\) and \(0.19m\) for APP and PP, respectively. While the main contribution of this work is _not_ an improvement on path tracking, it is a convenient emergent property to further ensure safety. This experiment showcases that the algorithm can follow a computed collision-free path in confined spaces more closely and thusly more safely. Using the ideal environment, the effects of varying the curvature heuristic's minimum radius were also studied. From Figure 4, it can be observed that larger values of \(r_{min}\) allow a robot to follow the reference path more closely. This is due to the curvature triggering reactions quickly on approach to the discontinuity. When \(r_{min}\) was increased from \(1.0m\) to \(1.5m\), the mean tracking error decreases from \(0.8m\) to \(0.3m\). This reduction in tracking error does come at a cost: a reduction in the speed resulting in longer navigation times. For this experiment, when \(r_{min}>1.5m\), the improvement on tracking error became insignificant, but navigation times begin to spike as the speed during turns trends towards the minimum allowed speed. ### Blind Turning Experiment This experiment evaluates the three algorithms' performance when the robot encounters an unexpected obstacle that blocks its path around a blind corner. The experiment consists of the robot making a sharp 90-degree turn (Figure 5) behind which there is an obstacle. The obstacle is such that the robot cannot observe it before beginning Figure 4: Effects of varying \(r_{min}\) in Regulated Pure Pursuit. Figure 5: A blind corner experiment with an obstacle just out of view. When the robot cross the green line, the obstacle is set. Figure 3: Comparison of variants of Pure Pursuit. to execute the curve and has minimal time to react, placed after the robot crosses the green line. For each algorithm, the experiment is repeated ten times. The purpose of this experiment is to test the reaction of each algorithm in a common but particularly dangerous situation. While PP and APP do not themselves contain collision detection capabilities, for a fair comparison, each are evaluated using the collision checking method described in Section 3. Table 1 shows the results of this experiment; in all cases, none of the methods resulted in a collision. This shows that the collision detection features of RPP (applied to all methods) are sufficient to prevent blind collisions at service robot speeds. A similar experiment without RPP's collision detection yielded collisions in nearly all instances. The RPP algorithm displayed a 33% increase in average stopped distance relative to APP. 
APP and PP, both which do not change their speeds in turns, had expectedly similar behavior. This is an improvement in reaction time for safe navigation in environments with many dynamic agents potentially cornering at the same time. While the \(8cm\) increase does not seem like much, with many robots navigating independently for their own tasks, this additional precaution is the difference between robots scraping by and colliding with permanent damage. This highlights that RPP's slowing of the robot in blind turns helps to reduce risk in partially observable settings. ### Confined Corridor Experiment The confined corridor experiment was designed to evaluate how well the proposed approach improves performance in narrow spaces at high speeds when making continuous changes in direction. Figure 6 shows the test environment map, creating a "salam" pattern, starting from the left. The total width of the corridor is 1.5 meters and the obstacles each have dimensions of roughly 0.7 m. Each algorithm was tested five times and the data from this experiment is shown in Table 2. Figure 6 displays the robot position data overlaying the trajectory each algorithm was asked to follow. The most telling region of this experiment to analyze is the final and sharpest turn. RPP's proximity and curvature heuristics contributed in this location and resulted in a reduced speed and 14% less path tracking error than APP. The other algorithms had path undershoot due to the sharp turn and came into much closer proximity to the obstacle and wouldn't be considered functionally safe. This undershoot was corrected in RPP by reacting to the increased curvature partially due to the deviation from the path, compounded by its approaching proximity to the obstacle, slowing the robot down. This allowed RPP to better recover from a minor deviation off the path and navigated safer distance away from the obstacle. The impact of the regulation heuristics can be seen in blue as the robot weaves around the final obstacle after quickly changing direction. The distance traveled by PP is lower than both variants due to the short cutting it displayed \begin{table} \begin{tabular}{|c|c|c|c|} \hline & **PP** & **APP** & **RPP** \\ \hline \multicolumn{1}{|c|}{Avg Stopped Distance (\(m\))} & 0.15 & 0.16 & 0.24 \\ \hline \end{tabular} \end{table} Table 1: Result of the Blind Turning Experiment. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & **PP** & **APP** & **RPP** \\ \hline Time (\(s\)) & 13.0 & 12.7 & 13.3 \\ \hline Distance (\(m\)) & 9.04 & 9.54 & 9.63 \\ \hline Collisions & 0 & 0 & 0 \\ \hline Average Speed (\(m/s\)) & 0.679 & 0.736 & 0.682 \\ \hline Average Distance Obstacle (\(m\)) & 0.662 & 0.681 & 0.683 \\ \hline Average Distance to Path (\(m\)) & 0.100 & 0.059 & 0.052 \\ \hline \end{tabular} \end{table} Table 2: Result of the confined corridor experiment. Figure 6: Calculated path in gray and the robot’s traveled path, colored by speed, during the corridor experiment. throughout the trajectory in Figure 6. This habitual undershoot also led to an decreased average distance to an obstacle - passing much closer to potential collisions. ### Full-system experiment In the final experiment, the robot will follow a route through the three control points shown in Figure 7 in a campus building. The route contains collision-free open space, confined halls, and blind sharp turn of approximately 70 m. The robot repeats the path three times with each of the algorithms, the data is provided in Table 3. 
The purpose of this experiment is to compare the RPP algorithm's general behaviors at a system-level to the existing variants. The data indicates little difference in overall system-level behavior - the average time to navigate was between 85-95s. Consistent with the previous experiment, PP does display slightly shorter distance traveled and an increased tracking error - corresponding to path short-cutting. The most compelling attribute of this experiment is the lack of particularly unique outcomes between the algorithms. Although RPP slows the robot in narrow corridors and while completing sharp turns, surprisingly, these maneuvers did not meaningfully impact the high-level navigation metrics. Time to task completion and average speeds were consistent across all three algorithms when considering the standard deviations between the trials. The system-level performance of RPP is so similar to that of APP, we conclude that the additional benefits of RPP come at little disadvantage to a system designer. In fact, it is possible for a solution architect to increase the maximum speed of the robot modestly when using RPP due to its slowing in turns and in proximity to obstacles (the common limiting factors of speed in robotics applications). This meaningfully improves the overall efficiency of a robot system while securing higher-quality safety features. 2. Footnote 2: A video with the experiments with the real robot can be found at [https://youtu.be/LQAzzJ8GmS0](https://youtu.be/LQAzzJ8GmS0) ## 6 Limitations Regulated Pure Pursuit, and other Pure Pursuit variants, suffer from a lack of modeling of the vehicle. As this class of technique is purely geometric based on the path, it will not consider dynamic constraints while changing velocities to track the path. Further, the global path must be feasibly drivable for a given robot platform since these methods compute velocities in the absence of kinematic limitations. For differential-drive robots, this may be any path. However for Ackermann steering robots, this path must already be drivable considering the kinematic limitations on the minimum possible turning radius. Regulated Pure Pursuit also continues to short-cut sections of paths with high curvature turns, though to a much reduced degree than PP or APP. The work introduced in [10] provides an alternative method for selecting the lookahead point with the goal of more accurately tracking the reference path. While this work also introduces a velocity heuristic that is not suitable for our class of applications, a variation on the method for selecting lookahead points has merit to be applied in mobile robotics. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & **PP** & **APP** & **RPP** \\ \hline Time (\(s\)) & 85.6 & 94.3 & 88.6 \\ \hline Distance (\(m\)) & 58.24 & 59.26 & 58.64 \\ \hline Collisions & 0 & 0 & 0 \\ \hline Average Speed (\(m/s\)) & 0.661 & 0.675 & 0.646 \\ \hline Min Distance to Obstacle (\(m\)) & 1.139 & 1.133 & 1.135 \\ \hline Average Distance to Path (\(m\)) & 0.062 & 0.049 & 0.043 \\ \hline \end{tabular} \end{table} Table 3: Result of the full-system experiment. Figure 7: Itinerary for full-system experiment. The total length is 70 meters. ## 7 Conclusion Regulated Pure Pursuit builds incrementally on Adaptive Pure Pursuit with a focus on service robots. This method contains a schema for selecting velocities that improves functional safety for real-world deployed robot applications. 
Our demonstrations on an industrial robot exhibited improved safety around blind turns and in confined settings without significant system-level changes. A high-quality implementation of Regulated Pure Pursuit is freely available at [https://github.com/ros-planning/navigation2](https://github.com/ros-planning/navigation2) and is in use on robots today. ## Conflict of Interest and Acknowledge The authors have no conflicts of interest to declare that are relevant to the content of this article. This work has been partially funded by Ministerio de Economia and Competitividad of the Kingdom of Spain under project PID2021-126592OB-C22 and by the European Commission under grant CoreSense (N1.101070254).
2309.11890
Towards In-Cabin Monitoring: A Preliminary Study on Sensors Data Collection and Analysis
The last decade's market has been characterized by wearable devices, mainly smartwatches, edge, and cloud computing. A possible application of these technologies is to improve the safety of dangerous activities, especially driving motor vehicles. Common enabling technologies, such as system-on-chip, ultra-low-power computational platforms, and wide-band wireless connectivity, push all these trends. On the one hand, wearable devices, thanks to the continuous contact with the user's body, can measure physiological parameters. On the other hand, edge computing and machine learning techniques, alongside cameras, allow the implementation of contactless computer vision systems capable of providing information about the user's current behavior. Another trend is the usage of RADARs in automotive applications, both for collision avoidance and monitoring driver behavior. These technologies can be combined to develop systems designed to aid the driver. For the sake of this paper, we are focusing on warning drivers, allowing them to know whenever they are drowsy and hence risking a sleep onset or are not paying attention to the road. Developing such systems poses many challenges, such as automatic classification of physiological signal patterns, facial expression recognition, head movements and eye gaze detection. These challenges have been individually addressed in the literature. Anyway, we noticed a need for more description on implementing data fusion. Two main reasons for adopting the fusion approach are to improve the quality of the overall representation (increasing accuracy and specificity against drowsy) and make a more reliable system due to redundancy.
Jacopo Sini, Luigi Pugliese, Sara Groppo, Michele Guagnano, Massimo Violante
2023-09-21T08:49:59Z
http://arxiv.org/abs/2309.11890v1
# Towards In-Cabin Monitoring: A Preliminary Study on Sensors Data Collection and Analysis ###### Abstract The last decade's market has been characterized by wearable devices, mainly smartwatches, edge, and cloud computing. A possible application of these technologies is to improve the safety of dangerous activities, especially driving motor vehicles. Common enabling technologies, such as system-on-chip, ultra-low-power computational platforms, and wide-band wireless connectivity, push all these trends. On the one hand, wearable devices, thanks to the continuous contact with the user's body, can measure physiological parameters. On the other hand, edge computing and machine learning techniques, alongside cameras, allow the implementation of contactless computer vision systems capable of providing information about the user's current behavior. Another trend is the usage of RADARs in automotive applications, both for collision avoidance and monitoring driver behavior. These technologies can be combined to develop systems designed to aid the driver. For the sake of this paper, we are focusing on warning drivers, allowing them to know whenever they are drowsy and hence risking a sleep onset or are not paying attention to the road. Developing such systems poses many challenges, such as automatic classification of physiological signal patterns, facial expression recognition, head movements and eye gaze detection. These challenges have been individually addressed in the literature. Anyway, we noticed a need for more description on implementing data fusion. Two main reasons for adopting the fusion approach are to improve the quality of the overall representation (increasing accuracy and specificity against drowsy) and make a more reliable system due to redundancy. Physiological data, Sleep, Safety, Vehicle Driving, Camera-based systems, Image processing, Neural Networks ## I Introduction As reported by American National Highway Traffic Safety Administration (NHTSA), driver loss of attention is a worldwide problem that leads to a high number of deaths every year. The leading cause of the decrease in attention while driving is drowsiness. Particularly, sleep at the wheel contributes to motor vehicle accidents for 20% of all police-reported crashes [1]. Several methods for monitoring the driver's state have been developed, mainly focusing on the way to drive [2], the camera-based measurements [3], and the physiological parameters of the driver [4]. Detecting methods based on the way to drive have low reliability because obtained results can change a lot due to unpredictable factors such as road geometry and traffic conditions. Camera-based measurements are theoretically very effective in drowsiness detection but, in real driving conditions, they are subjected to light and skin color variations [5]. Physiological parameters that can be used for drowsiness study are electrocardiography (ECG), electromyography (EMG), electrooculography (EOG), electroencephalography (EEG), and respiration rate (RR) [6]. They can provide very accurate results but some of them, such as EEG, require very invasive sensors to be acquired, and this makes it difficult to use them for driver drowsiness detection. The most used physiological parameters are heart rate (HR), heart rate variability (HRV), and respiration rate (RR). The best approach for driver monitoring is combining data from multiple sensors, obtaining data that are more consistent and reliable [7]. 
This process is known as sensor-data fusion and its great reliability is nowadays seen as a way to increase road safety [8]. Our research group is involved in studying sleep macro and micro patterns, sleep disorders, and daytime drowsiness related to sleep onset while driving. In this study, data from different sensors have been collected and paired as preliminary work for a fusion algorithm able to give a comprehensive analysis of camera-based and physiological-based parameters of the driver to evaluate drowsiness and attention levels. A low attention level is of course dangerous by itself and could be compromised by drowsiness and/or attention to the road, and these two factors are crucial for motor vehicle accidents. A vital sign detection RADAR was used to measure HR and RR, a smartwatch was used to detect HR, RR, and HRV, and two cameras were used to find eye blinking (EB), eye gaze angles (EGA), percentage of eyel closure over the pupil over time (PERCLOS) and head movements (HM). ## II State of the Art Regarding physiological parameters, the most accountable for sleep onset prediction are: * Heart Rate (HR), which describes the contractions of the heart per minute. * Heart Rate Variability (HRV), which represents the change in time intervals between adjacent heartbeats. * Respiration Rate (RR), which is the number of breaths a person takes per minute. These physiological parameters are the most reliable and accurate in drowsiness detection as they are concerned with what is happening with the driver physically; in fact, through their monitoring, it is possible to advise the driver to stop the vehicle before the physical symptoms of drowsiness effectively appear [9]. Moreover, they can be extracted both through contact or contactless technologies [10]. Concerning contact-based solutions, heart signals such as electrocardiogram (ECG) and photoplethysmography (PPG) are considered accurate measures of fatigue. However, their use is limited because of the intrusive nature of the sensors. Nevertheless, novel sensors can be embedded in the steering wheel or the seat belt [11]. As proposed by Jung et al., electrodes were embedded in the steering wheel; in this way, HR and HRV were extracted from ECG and drowsiness was successfully detected. However, very highly accurate sensors were needed and the position of the hands of the driver on the steering wheel was crucial for the proper acquisition of the data [12]. In another study, Li and Chuang developed a PPG sensor placed on the steering wheel of the vehicle. In this case, HRV was extracted from the raw PPG signal. Then, a Support Vector Machine (SVM) was trained to classify the state of the driver as fatigued or alert, thus obtaining a 95% fatigue detection accuracy. Even considering the non-intrusiveness of this method, it is susceptible to human error and natural movements [13]. Recently, wrist-worn wearable devices have been successfully employed as contact-based solutions for monitoring physiological parameters [14]. As reported by Kundinger et al., by the use of various smartwatch devices an accuracy of about 92% was reached in detecting drowsiness, compared to medical-grade device [15]. However, the main limitation of these devices can be the accuracy of sensing technology; in addition, it is required that the subject wears the device, relying on the diligence of the driver itself. Considering the challenges of contact-based methods, various contactless solutions have been recently explored for monitoring physiological parameters. 
Among them, techniques based on radar have shown promising results. Liu et al. developed a novel radar-based technology for drowsiness detection, where the accuracy of heart rate was 96.4% and the accuracy of drowsiness detection was 82.9%, compared to other state-of-the-art approaches [16]. Moreover, camera-based techniques offer the possibility to estimate drowsiness through eyes and head tracking. Eye gaze direction is widely used to detect drowsiness. In-Ho Choi and Yong-Guk Kim [17] track the driver's gaze direction by tracing the pupil's center point. Eye blinking becomes slower and more frequent while getting drowsier [18]. Shekari Soleimanloo S et al. [19] used Optalert to detect eye blinking, an infrared oculography system. Wang X and Xu C used instead the smarteye eye tracker [20]. PERCLOS express for how much time there is at least 70\(\%\) (or 80\(\%\)) of eyelid closed during a unit of time of 1 minute (or 30 seconds). In [21], B. Bakker et al. used a 3-camera smart eye pro system, with infrared lighting. G. Du et al. used instead an RGB camera for PERCLOS detection [22]. Head movements are another visual drowsiness sign as, while falling asleep, someone could start nodding. They can be found by recording with a camera [23], or by using a magnetic tracker, as the Ascension flock of birds [24]. ## III Proposed methodology The system has been designed to obtain a time series of physiological parameters from the previously described sensors. Each sensor communicates with the experimenter workstation with different protocols and physical interfaces. For this reason, we decided to design an application developed for the Microsoft Windows environment to save a CSV file containing the logs from the current experimental campaign, aligned from the timing point of view, and to send, as a JSON record via MQTT, each acquisition to a centralized NoSQL database. Message Queuing Telemetry Transport (MQTT) allows the creation of a client-server system. In this protocol, the server (in charge of relaying the record to the clients) is called _broker_. A client can have two roles: if it receives the relayed data, it is called _subscriber_ while, if it instead sends records, it is a _publisher_. For this reason, MQTT is defined as a publish-subscribe protocol. JavaScript Object Notation (JSON) is a data-exchange format. It allows to create a hierarchical data structure, and it is widely used in web development due to its convenience: thanks to ready-to-use serializer and deserializer functions, it is possible respectively to encapsulate data contained in a class into a JSON string or vice-versa to extract the JSON string content creating novel instances of the contained classes. The designed system is capable of retrieving data from the following devices: * Vital Sign Detection Radar (Chuhang Technology Radar), featuring HR, RR, and distance measurement. The latter value is needed to assess the quality of the provided data. If the distance changes it means that there is a movement of the person with respect to the Radar and hence that the measure is unreliable. The Radar is directly connected via USB (emulated serial port, with data formatted in binary form) with the experimenter workstation. * Wearable Device (GARMIN Enduro, Enduro 2, VenuSq, Venu 2) featuring off-the-shelf HR, RR, HRV, and a drowsiness onset real-time prediction algorithm developed by our research group. 
Bluetooth wireless protocol connects the Device with an Android smartphone, which relays the data to a remote MQTT broker that transmits a copy to its subscribers and stores the received data into a persistence layer implemented resorting to MongoDB. * Camera (Varroc and one developed by our research team) featuring EB, EGA, PERCLOS, and HM. As it is possible to observe, some data of these acquisitions are redundant (HR and RR), but it is important to keep both measurements due to the different measurement technologies. In particular, the RR is more reliable when measured by the radar, while the HR is more reliable when measured by the wearable device. In Fig.1 it is shown the architecture of the system. The central node is the workstation running the Data fusion collector. It receives data from the dashcam (via a Wi-Fi or Ethernet connection), resorting to the MQTT protocol, and from the vital sign detection radar, via a virtual serial port over USB. The data from the wearable device are received indirectly: the smartwatch sends the data to an app running on a smartphone via a virtual serial port over Bluetooth. The smartphone sends the data via Internet (mobile communication standards or Wi-Fi), resorting to the MQTT protocol to a remote server, which runs a Broker that receives the data and send them back to the Data Fusion Collector (in this case, acting as a Subscriber). Finally, the Data fusion collector saves the received data into a CSV file in the workstation's local file system and sends it to the remote server via MQTT (in this case, acting as a Publisher). The remote server stores all the received data in mongoDB. The data transmitted over MQTT are encapsulated, resorting to the JSON format, allowing to store into the mongoDB directly the received records. The variety of devices involved justifies the choice of a NoSQL database: it is possible to use different dashcams, wearable devices, and vital sign detection radar, each with a different set of recorded measurements. Thanks to the NoSQL database (with does not feature fixed-structure tables) and JSON format, it is possible to change the record structure seamlessly. Currently, the system is a work in progress, so we expect to add more sensors to this set in the following months, mainly to measure environmental conditions like air relative humidity and temperature, ambient light sensors, and air quality parameters, in particular, carbon dioxide quantity. Another future expansion of this system will be the possibility of generating synchronized time series in offline mode: the devices are equipped with proprietary tools to connect them to the workstation, which can create log files: for using these for data fusion, it is needed to align all the data from the timing point of view: the application will read the logs and generate a newer one with the fusion. Moreover, it can also store in the NoSQL database the data fusion to make data available for remote access and long-term reliable storage. The purpose of the system data-fusion system is twofold: 1. in this phase, to aid the experiments described in Section IV, needed for the development and testing of the data fusion algorithm. 2. since the data fusion collector has the availability of all the data from the sensors acts as a proof-of-concept of the final system running the real-time warning system. 
The data collected in phase 1 are currently used to develop the real-time algorithm iteratively: the algorithm runs offline, and the results are compared with respect to the sleep onset detected from the various polysomnography read by sleep expert medical doctors. Considering a possible future use in real vehicles (phase 2), the architectural choices may appear clearer: the dashboard/infotainment computer system can run the data fusion algorithm and warn the driver. It simplifies the management of the camera and radar: they can be part of the car itself, directly connected to the car power supply, and wired to the onboard systems. This direct connection allows for fulfilling the mandatory need for driver drowsiness and attention warning (DDAW) (Regulation (EU) 2019/2144). In any case, when the driver wears a smartwatch, connecting their smartphone to the system is more convenient by considering the typical commercial use: the driver connects their phone via Bluetooth to the car's infotainment system. Exploiting this link to seamlessly exchange data about drowsiness to the data fusion algorithm through the infotainment system is possible. Looking forward, proposing a communication standard to allow different vendors to implement this functionality in their products appears to be a good idea. ## IV Experimental results In this first stage, we conducted some experiments (Mantainance Weakfullness Tests) involving 16 volunteers, 5 females, 11 males, age range 25-30 to verify the sensitivity and accuracy of the data fusion algorithm. Each test lasted about 1.5 h, considering the instrumentation with the polysomnograph and its pads and wirings and the set-up of the data fusion system. After the instrumentation phase, the volunteers are asked to take two test sessions of 20 minutes of Mantainance Weakfullness Test (MWT). Between the two tests, the volunteers are required to answer two questionnaires. The first questionnaire, compiled before the tests, aimed to understand the participants' clinical status and his/her capability to stay awake (ESS). The second questionnaire used the Karolinska Sleepiness Scale to evaluate their drowsiness level during the test. The fusion approach has undergone thorough evaluation to validate its effectiveness. This evaluation encompasses two key aspects: firstly, enhancing the overall representation's quality, including the information about distractions, and secondly, bolstering the system's reliability through redundancy. In Fig.2, it can be observed that a complete understanding of the passenger's status requires additional information about distraction, which is not directly evident from the physiological data. In Fig.3, it is evident that the physiological-based method provides clearer and smoother information regarding the drowsiness level. Notably, the blue line indicates potential alarms for the passenger minutes earlier than the red line. To further enhance the system's performance, implementing system redundancy to establish a clearer initial driver condition would be beneficial. ## V Conclusions In this study, a significant array of sensors was utilized, each capable of operating independently and improving performance by leveraging information from other devices. The results section examined a specific real-life scenario involving a combination of a camera and a wearable device. This integrated system provided a comprehensive understanding of the driver's status in terms of both drowsiness and attention levels. 
Currently, the investigation is focused on merging information from the various sensors utilized. The future direction of this research involves the development of a real-time sensor fusion algorithm to effectively synthesize data from all sensors simultaneously. This advancement aims to further streamline and enhance the overall system's performance. The algorithm development is ongoing as the radar results have not been presented yet due to lower data accuracy, specifically concerning the wearable data.
2309.17239
EGVD: Event-Guided Video Deraining
With the rapid development of deep learning, video deraining has experienced significant progress. However, existing video deraining pipelines cannot achieve satisfying performance for scenes with rain layers of complex spatio-temporal distribution. In this paper, we approach video deraining by employing an event camera. As a neuromorphic sensor, the event camera suits scenes of non-uniform motion and dynamic light conditions. We propose an end-to-end learning-based network to unlock the potential of the event camera for video deraining. First, we devise an event-aware motion detection module to adaptively aggregate multi-frame motion contexts using event-aware masks. Second, we design a pyramidal adaptive selection module for reliably separating the background and rain layers by incorporating multi-modal contextualized priors. In addition, we build a real-world dataset consisting of rainy videos and temporally synchronized event streams. We compare our method with extensive state-of-the-art methods on synthetic and self-collected real-world datasets, demonstrating the clear superiority of our method. The code and dataset are available at \url{https://github.com/booker-max/EGVD}.
Yueyi Zhang, Jin Wang, Wenming Weng, Xiaoyan Sun, Zhiwei Xiong
2023-09-29T13:47:53Z
http://arxiv.org/abs/2309.17239v1
# EGVD: Event-Guided Video Deraining ###### Abstract With the rapid development of deep learning, video deraining has experienced significant progress. However, existing video deraining pipelines cannot achieve satisfying performance for scenes with rain layers of complex spatio-temporal distribution. In this paper, we approach video deraining by employing an event camera. As a neuromorphic sensor, the event camera suits scenes of non-uniform motion and dynamic light conditions. We propose an end-to-end learning-based network to unlock the potential of the event camera for video deraining. First, we devise an event-aware motion detection module to adaptively aggregate multi-frame motion contexts using event-aware masks. Second, we design a pyramidal adaptive selection module for reliably separating the background and rain layers by incorporating multi-modal contextualized priors. In addition, we build a real-world dataset consisting of rainy videos and temporally synchronized event streams. We compare our method with extensive state-of-the-art methods on synthetic and self-collected real-world datasets, demonstrating the clear superiority of our method. The code and dataset are available at [https://github.com/booker-max/EGVD](https://github.com/booker-max/EGVD). Video deraining, event camera, hybrid imaging, multimodal. ## I Introduction Outdoor cameras encounter adverse weather conditions, such as rain. Rain not only degrades the visual quality of captured images and videos but also hampers the performance of downstream multimedia tasks that rely on clean video frames, such as object tracking [1, 2], person re-identification (Re-ID) [3, 4, 5] and SLAM [6, 7, 8]. Under the considerable demands of rain-free videos, it is imperative to explore the algorithm of video deraining. Recently, many methods are proposed to handle the video deraining task and obtain substantial performance on some public benchmark datasets, e.g., NTURain [9], RainSynLight25 and RainSynComplex25 [10]. However, there still exist some drawbacks. On one hand, most of the methods are insufficient to model the spatio-temporal distribution of rain layers that exhibit strong spatial variations e.g., scale, direction, and density) and temporal dynamics (e.g., velocity and acceleration). Many deraining methods fail in accurately modeling these randomly scattered rain streaks, resulting in unsatisfactory rain streak removal and detail loss in non-rain regions. On the other hand, for exploiting multi-frame correlation, existing video deraining methods [11, 12, 13, 14] follow the flow-based pipeline. They utilize either optical flow or deformable convolution [15] to temporally align neighboring frames for rain removal. However, the presence of rain streaks breaks the existing flow constraints (the brightness constancy constraint) and prohibits estimating an accurate motion field for alignment, especially under torrential rainfall. Hence, these methods cannot make full use of information from neighboring frames, calling for more effective solutions. To sum up, how to precisely model the spatio-temporal distribution of rain layers blended in the rainy video and how to learn the favorable clues from the adjacent frames are worth more attention. Instead of struggling to design complex computational architectures, we resort to using an event camera [16], an emerging bio-inspired sensor, to solve the aforementioned limitations of existing video deraining methods. 
Event cameras are novel vision sensors whose working mechanism is drastically different from conventional frame sensors. Instead of capturing images at regular intervals, they report pixel-wise changes in brightness as a stream of asynchronous events. With unique advantages such as high temporal resolution (up to 1 MHz), high dynamic range (up to 140 dB), and very low power consumption, event cameras have already been applied in a variety of video tasks. Actually, the moving rain streaks usually produce obvious intensity changes, which naturally suits the dynamic perception of event cameras. We explore the application of event cameras in the context of video deraining from two perspectives. First, we use an event camera as a complementary sensor, which provides additional motion-aware information that is not explicitly provided by conventional frame cameras. With this hybrid imaging system, not only can we acquire absolute pixel intensity measurements reflecting rain and background layers, but also the motions of rain and moving background objects are prominently detected by the event camera. Second, it is of great significance to separate rain and background information that is typically merged in the image and feature domain. Nevertheless, it is hard to be achieved by conventional frame cameras especially when rain layers are complex because the motion prior out of exposure time is inaccessible. In comparison, event cameras, which output data at microseconds, can accurately perceive the motion variation of background layers. Therefore, the visibility of rain and moving objects is enhanced by event cameras and served as strong guidance for subsequent rain removal. In this paper, we propose an end-to-end learning neural network, called **E**vent-**G**uided **V**ideo **D**eraining **N**etwork (**EGVD**), for video deraining with an event camera. To be concrete, we first devise an event-aware motion detection module to selectively detect and aggregate motion information of neighboring frames using event-aware masks. Thus, we acquire fused frame features containing rain-background motions enhanced by event streams. Second, in contrast to prior works that only exploit one modality (i.e., frame), we design a pyramidal adaptive selection module to reliably separate rain and background layers in the feature domain via incorporating multi-modal contextualized clues. In such a way, we reconstruct the final rain layer, which is then added to the rain-degraded input to produce a final rain-free video. For training our network, we generate large-scale synthetic datasets, including various rains from light drizzling to heavy falls. For real-world evaluation, we use a Color-DAVIS346 camera [17] to build a real-world dataset for event-based video deraining, which contains rain-degraded videos and temporally synchronized events. The main contributions are summarized as follows: * We approach video deraining with an event camera by exploiting its motion-aware imaging and high temporal resolution property. * We design two novel components, i.e., an event-aware motion detection module and a pyramidal adaptive selection module, for effectively enhancing motion-aware regions and separating rain-background layers to produce rain-free videos. * We build a real-world dataset for event-based video deraining, which includes rainy videos and temporally synchronized event streams. Moreover, large-scale synthetic datasets including various rains are also built. 
* We achieve superior performance over existing state-of-the-art methods on both synthetic and self-collected real-world datasets. ## II Related Work ### _Single Image Deraining_ The single image deraining methods can be roughly divided into model-based methods and deep learning-based methods. Most of the model-based methods are proposed to utilize the intrinsic properties of the rain signal and the background texture details for separating the rain and background layers, e.g., discriminative sparse coding [18], dictionary learning [19], non-local mean filter [20], Gaussian mixture model [21]. Compared with the model-based methods, deep learning-based methods achieve better performance. Fu _et al._[22, 23] proposed a guide filter to remove rain streaks from the high-frequency parts of the rainy image and directly predict the residual rain layer. Later works [24, 25] focused on designing more effective and advanced network architectures to obtain better performance. ### _Video Deraining_ Video deraining is a long-standing ill-posed and challenging problem. In contrast to single-image deraining [24, 26, 27, 28], temporal correlation and motion contexts can be additionally incorporated for video deraining. In [29], Garg and Nayer first introduced the video deraining problem and detailed the properties of rain, such as the physical properties, spatial distribution, and appearance model. Based on the unique properties of rain, model-based methods have been proposed to approach the video deraining task by utilizing more intrinsic priors to identify and remove rain in video. For example, chromatic properties [30, 31], shape characteristics [32, 33], high frequency structure [34] are comprehensively explored for removing rains. However, for heavy rain and other complex outdoor scenes, the above prior knowledge is not enough to support these model-based methods to identify the rain and background. After the advent of deep learning, video deraining performance has been significantly improved. In [9], Chen _et al._ employed super-pixel segmentation to decompose the scene and then aligned the scene content at the super-pixel level. Then a CNN was utilized to compensate for the misalignment and missing details. Yang _et al._[11] built a two-stage recurrent network that employs dual-level flow to regularize the learning process and predicted the rain-related variables in the video. In [35], a new rain model considering rain accumulation, rain streaks, and rain occlusion was proposed. Besides, a convolutional LSTM network was designed to make full use of the spatio-temporal redundancy. More recently, Yue _et al._[36] proposed a semi-supervised video deraining method. They employed a dynamic rain generator to fit rain layers and took the real rainy videos into consideration for better performance in the real cases, substantially promoting deraining performance. In [37], Xue _et al._ designed a multi-stream coarse temporal aggregation module and a single-stream fine temporal aggregation module, which replaces the time-consuming alignment module to utilize the abundant temporal information. Although these methods achieve considerable performance, they always pay great attention to complex architectures. In contrast, we resort to using an event camera which can provide helpful information for video deraining. 
### _Multi-Sensor Deraining_ Instead of using only one imaging sensor, some works [38, 39] attempted to approach the deraining problem via building a stereo system, based on the observation that the effects of identical rain streaks across stereo images are different. In [39], Kim _et al._ warped the spatially adjacent right-view frame and subtracted the warped frame from the original frame. Then a median filter was applied to the residual image for detecting rain streaks. Zhang _et al._[38] proposed the first semantic-aware stereo deraining network, which leverages semantic information and visual deviation between two views to detect and remove rain. Although stereo deraining methods promote deraining performance by taking advantage of spatially related information from stereo images, two common limitations remain. First, in scenes with large and dense rain streaks, stereo images suffer from severe rain pollution and fail to provide adequate cues to each other. Second, it is hard to model the temporal dynamics of rain for stereo sensors, which is more important for understanding the generation process and intrinsic property of rain layers. ### _Event-Based Vision Techniques_ Event cameras have found extensive applications in various domains. Our work is closely related to prior research in event-based video deblurring [40, 41], video super-resolution [42, 43], and video interpolation [44, 45]. Notably, event cameras have recently been employed in video deraining [46], where a pioneering approach was introduced, leveraging multi-patch progressive learning for event-aware video deraining. In contrast, our method capitalizes on event data to explicitly detect rain and separate the rain layer, achieving better performance. ## III Preliminaries ### _Rain in Frame Cameras_ Rain has many unique properties, such as geometric properties [47], chromatic properties [31], and spatial and temporal properties [31]. In our work, we focus on the photometry of rain [29]. In [29], Garg and Nayar pointed out that a raindrop acts as a spherical lens that refracts and reflects light, producing a positive intensity change at a pixel. The imaging process is formulated as: \[I_{r}(\vec{x})=\int_{0}^{\tau}E_{r}dt+\int_{\tau}^{T}E_{b}dt, \tag{1}\] where \(\tau\) is the time during which a drop projects onto a pixel and \(T\) is the exposure time of the camera. We can see that the intensity at pixel \(\vec{x}\) is a linear combination of raindrop irradiance \(E_{r}\) and background irradiance \(E_{b}\), resulting in a fused measurement that is hard to separate. Because a frame camera captures at a fixed interval, high-speed moving raindrops produce severely motion-blurred rain streaks. Moreover, the motion priors outside the exposure time are inaccessible, leading to inevitable performance degeneration. ### _Rain in Event Cameras_ The event camera outputs a sparse data stream \(\mathcal{E}=\{e_{k}\}_{k=1}^{N_{e}}\), where \(N_{e}\) is the number of events, reporting the intensity changes in the scene. Each triggered event can be represented as a quaternion \((x_{k},y_{k},t_{k},p_{k})\), describing the spatial coordinates, timestamp, and polarity, respectively.
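As a concrete illustration of this data format (a sketch with hypothetical variable names, not code from the paper; the DAVIS346-like 260x346 resolution and the random events are only examples), such a quaternion stream can be stored as a structured array and accumulated into a per-pixel polarity image, which already gives a rough map of moving regions:

```python
import numpy as np

# Each event is a quaternion (x, y, t, p) with polarity p in {-1, +1}.
event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.float64), ("p", np.int8)])

def accumulate_polarity(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """Sum event polarities per pixel to obtain a coarse motion map.

    Pixels crossed by fast-moving rain streaks or moving objects receive many
    events and thus large absolute values; static background stays near zero.
    """
    img = np.zeros((height, width), dtype=np.float32)
    np.add.at(img, (events["y"], events["x"]), events["p"].astype(np.float32))
    return img

# Example with random synthetic events (illustration only).
rng = np.random.default_rng(0)
n, h, w = 10_000, 260, 346                       # DAVIS346-like resolution
events = np.zeros(n, dtype=event_dtype)
events["x"] = rng.integers(0, w, n)
events["y"] = rng.integers(0, h, n)
events["t"] = np.sort(rng.uniform(0.0, 0.03, n))  # 30 ms of events
events["p"] = rng.choice([-1, 1], n)
motion_map = accumulate_polarity(events, h, w)
```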
Each pixel of an event camera independently and asynchronously produces an event when the intensity change reaches a threshold: \[\log I(\textbf{x},t)-\log I(\textbf{x},t-\Delta t)=\pm C, \tag{2}\] where \(I(\textbf{x},t)\) is the intensity of pixel **x** at time \(t\), \(C\) is the contrast threshold that can be obtained from the camera configuration, and \(\Delta t\) is the time elapsed since the last event was triggered at the same position. The survey paper [16] provides more details on event cameras. In a rain scene, the output of an event camera can be given as: \[\mathcal{E}=\mathcal{E}_{r}\cup\mathcal{E}_{b}, \tag{3}\] where \(\mathcal{E}_{r}\) and \(\mathcal{E}_{b}\) are event streams triggered by motions of rain and background respectively. In particular, there are two differences between a frame camera and an event camera for rain imaging. **(i)** The measurement of an event camera (Equation (3)) only focuses on the motion regions, while a frame camera (Equation (1)) records motion and static regions simultaneously; **(ii)** An event camera outputs data at microsecond resolution, approximating temporally continuous recording, while a frame camera only produces data within the exposure time. The above observations motivate us to approach video deraining with an event camera, by exploiting its perception of motion variation and its microsecond temporal resolution. ## IV Proposed Method We propose a learning-based network, named **EGVD**, for video deraining with an event camera. As shown in Fig. 1, our EGVD comprises four components, i.e., a feature extractor, an event-aware motion detection (EAMD) module, a pyramidal adaptive selection (PAS) module, and a reconstruction module, which tightly collaborate for video deraining. ### _Overview_ We consider three consecutive frames and in-between events as the input. Specifically, given a target frame \(I_{t}\) and its
After feature extraction, we feed the frame features \(F_{[t-1:t+1]}^{f}\) and the event features \(F_{-}^{e}\), \(F_{+}^{e}\) to EAMD, which is responsible for detecting and enhancing the motion-aware information of frame features with the guidance of events. Then, the enhanced frame feature \(F_{EAMD}^{f}\) and event feature \(F_{EAMD}^{e}\) from EAMD are fed into PAS for motion separation and multi-modal fusion. Finally, the output features of PAS pass through a reconstruction module for predicting the residual layer, to which the target degraded frame is added for obtaining the derained frame. ### _Event Representation_ Compared with conventional frame data, event data is essentially a kind of sparse spatio-temporal stream. We first convert it into a fixed-size representation. Specifically, we opt to encode event data that is triggered in the time interval between two adjacent frames in a spatio-temporal voxel grid, sharing a similar idea with [48]. Given an event stream \(\mathcal{E}=\left\{e_{k}=\left(x_{k},y_{k},t_{k},p_{k}\right)\right\}_{k=0}^{ N-1}\) with a duration \(\Delta T=t_{N_{e}-1}-t_{0}\), we uniformly divide the duration \(\Delta T\) into \(B\) time bins. In this way, every event distributes its polarity to two temporally closet voxels. Mathematically, the event voxel grid is formulated as: \[E\left(x_{m},y_{n},t_{l}\right)=\sum_{\begin{subarray}{c}\left(x_{k},y_{k} \right)=\left(x_{m},y_{n}\right)\\ k\in\left\{0,\cdots,N_{e}-1\right\}\end{subarray}}p_{k}\max\left(0,1-\left| t_{l}-t_{k}^{*}\right|\right), \tag{4}\] where \(t_{k}^{*}\triangleq\frac{B-1}{\Delta T}\left(t_{k}-t_{0}\right)\) is the normalized event timestamp, and \(t_{l}\in\left\{0,\cdots,B-1\right\}\) denotes the index of time bin. ### _Event-Aware Motion Detection_ Unlike single-image deraining, additional temporal information that exists across adjacent frames can be exploited for video deraining. However, directly packaging the multi-frame features into a network is not necessarily effective, but burdens the rain removal task due to the introduction of excessive redundant information. In contrast, we selectively extract the motion-aware information of neighboring frames. The motion information typically acts as favorable clues for modeling the dynamic generation process of rain layers, which is able to identify the rain region. However, it is difficult to obtain motion-aware information for conventional frame cameras, which are limited by their low temporal resolution and low dynamic range properties of imaging. Fortunately, event cameras are capable of accurately detecting motion variations even for large motion and low light scenes thanks to their unique properties. Therefore, we design an event-aware motion detection module to detect the motion features from the neighboring frames and then employ a 3D convolution block to fuse them with central target frame features. More specifically, as shown in Fig. 2, we first generate a motion-attention mask \(M\) using event features \(F_{-}^{e}\), \(F_{+}^{e}\). 
We formulate it as: \[\begin{split} M=&\sigma\left(F_{m}^{e}\right),\\ F_{m}^{e}=&\psi_{3\times 3}\left(\left[\left(\psi_{1 \times 1}\left(\psi_{7\times 7}\left(F^{e}\right)\right)\right),\left(\psi_{1 \times 1}\left(\psi_{3\times 3}\left(F^{e}\right)\right)\right)\right]\right),\\ F^{e}=&[F_{-}^{e},F_{+}^{e}],\end{split} \tag{5}\] where \(M\) is the predicted motion-attention map, \(\sigma\) is the sigmoid activation function which restricts the outputs in (0,1), and \([\cdot]\) indicates channel-wise concatenation. We adopt seven convolution layers with different kernel-sizes, i.e., \(1\times 1\), \(3\times 3\), \(5\times 5\), \(7\times 7\), denoted as \(\psi_{1\times 1},\psi_{3\times 3},\psi_{5\times 5},\psi_{7\times 7}\), to extract the information of different receptive fields from event features, Fig. 2: Detailed architecture of event-aware motion detection (\(EAMD\)) module and pyramidal adaptive selection (\(PAS\)) module. \(EAMD\) first selectively extracts the motion-aware information from the neighboring frame features with the guidance of the motion-attention map which is learned from the event features, then employs a 3D convolution block to fuse the detected motion features from the neighboring frames with the central target frame features. \(PAS\) adopts the pyramidal architecture to embed the multi-scale features in the frame domain and event domain respectively. At each scale, the rain-edge attention (\(REA\)) block is designed to enhance the motion of rain layers in the event domain and the multi-modal fusion (\(MMF\)) block is employed for effectively fusing multi-modal features from frames and events. Additionally, a ConvLSTM layer is employed at the last scale encoder to model the long-term correlation of frames. which enables the effective detection of motion-aware regions. We visualize the motion-aware mask \(M\) in Fig. 3 (b). As can be seen, the motion regions of rain and moving objects can be clearly detected. Afterwards, we rectify the frame features \(F_{t-1}^{f}\), \(F_{t+1}^{f}\) using the predicted motion-aware mask \(M\): \[F_{m}=\psi_{1\times 1}\left(M\otimes\left[F_{t-1}^{f},F_{t+1}^{f}\right] \right), \tag{6}\] where \(\otimes\) stands for the element-wise multiplication. Then a 3D convolution layer is adopted to aggregate the multi-frame contexts and further enhance the motion-aware information in the frame domain, generating the enhanced frame features \(F_{EAMD}^{f}\). Similarly, in the event domain, we also aggregate the event features using a 3D convolution layer, yielding the enhanced event features \(F_{EAMD}^{e}\). These two 3D convolution layers do not share parameters. We formulate them as: \[F_{EAMD}^{f},F_{EAMD}^{e}=\psi_{3D}\left(\left[F_{t}^{f},F_{m}\right]\right), \psi_{3D}\left(F^{e}\right). \tag{7}\] ### _Pyramidal Adaptive Selection_ One key challenge of video deraining is how to accurately separate the rain layer and background layer, which are typically blended in the image domain and feature domain. It is difficult to achieve with frame cameras, especially for heavy rain scenes. Thanks to the properties of high temporal resolution and high dynamic range, event cameras are able to enhance the visibility of rain and moving objects of background in the event domain, hence providing strong guidance for rain-background separation. To this end, we build a pyramidal adaptive selection module as illustrated in Fig. 2. 
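Before turning to the specifics of PAS, the EAMD computation above (Eqs. (5)-(7)) can be summarized with the following sketch. This is an illustrative PyTorch re-implementation, not the authors' released code: the channel width, the reduced set of kernel sizes, and the single-channel motion-attention mask are simplifying assumptions.

```python
import torch
import torch.nn as nn

class EventAwareMotionDetection(nn.Module):
    """Sketch of EAMD (Eqs. (5)-(7)); channel width c is illustrative."""

    def __init__(self, c: int = 32):
        super().__init__()
        # Two multi-receptive-field branches over concatenated event features (Eq. (5)).
        self.branch_a = nn.Sequential(nn.Conv2d(2 * c, c, 7, padding=3), nn.Conv2d(c, c, 1))
        self.branch_b = nn.Sequential(nn.Conv2d(2 * c, c, 3, padding=1), nn.Conv2d(c, c, 1))
        self.mask_head = nn.Conv2d(2 * c, 1, 3, padding=1)   # single-channel mask (simplification)
        self.rectify = nn.Conv2d(2 * c, c, 1)                # psi_1x1 in Eq. (6)
        self.fuse_frame = nn.Conv3d(c, c, (2, 3, 3), padding=(0, 1, 1))  # Eq. (7), frame branch
        self.fuse_event = nn.Conv3d(c, c, (2, 3, 3), padding=(0, 1, 1))  # Eq. (7), event branch

    def forward(self, f_prev, f_t, f_next, e_neg, e_pos):
        # f_* are frame features and e_* are event features, all of shape (B, c, H, W).
        e = torch.cat([e_neg, e_pos], dim=1)                                             # F^e
        m = torch.sigmoid(self.mask_head(torch.cat([self.branch_a(e), self.branch_b(e)], dim=1)))
        f_m = self.rectify(m * torch.cat([f_prev, f_next], dim=1))                       # Eq. (6)
        # Stack along a temporal axis and aggregate with 3D convolutions (Eq. (7)).
        f_eamd = self.fuse_frame(torch.stack([f_t, f_m], dim=2)).squeeze(2)
        e_eamd = self.fuse_event(torch.stack([e_neg, e_pos], dim=2)).squeeze(2)
        return f_eamd, e_eamd, m

# Shape check with dummy tensors.
eamd = EventAwareMotionDetection(c=32)
f_out, e_out, mask = eamd(*[torch.randn(1, 32, 64, 64) for _ in range(5)])
```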
To be specific, we first adopt the encoder architecture of standard UNet [49] to embed deep multi-scale features in the frame domain and event domain respectively. We denote the extracted features as \(F_{i}^{f}\) and \(F_{i}^{e}\), where \(i\in\{1,2,3\}\) denotes the scale index. At each scale, we design a rain-edge attention (REA) block to enhance the motion of rain layers in the event domain and employ a multi-modal fusion (MMF) block to adaptively fuse the information of both frame and event modalities. We implement our REA block by adopting four symmetric channel-spatial-spatial-channel attentions and concatenate the multi-modal features before feeding to a residual convolution to form an MMF block. Moreover, in order to model the long-term correlation of frames, we also employ a ConvLSTM layer at the last scale encoder. With the design of repeated symmetric channel-spatial-spatial-channel attentions, the REA block is able to cyclically learn the spatial/channel importance in the event domain. In this way, the rain and background motions are able to be separated effectively, which can be clearly observed in Fig. 3 (c), (d). The fused multi-modal features of the MMF block take the complementary merits of multi-modal features from frames and events, favorably promoting the video deraining performance. The computing process of the PAS module can be formulated as: \[F_{PAS}^{i}=\begin{cases}MMF\left(REA\left(F_{i}^{e}\right),F_{i}^{f}\right),i=1,2\\ MMF\left(REA\left(F_{i}^{e}\right),LSTM\left(F_{i}^{f},S_{t-1}\right)\right),i= 3\end{cases} \tag{8}\] where \(i\in\{1,2,3\}\) denotes the scale index, and \(S_{t-1}\) is the previous state of the ConvLSTM layer. ### _Rain Layer Reconstruction_ Instead of predicting a clean background layer, our network predicts a negative rain layer, which can be attributed to two reasons. On one hand, the rain layer is sparser than the background layer, making it easier for the network to converge, which has been proved in [23]. On the other hand, in our event-guided video deraining setting, we separate the rain layer and background layer from the perspective of motions. The moving edges of rain help us model the dynamics and spatial distribution of rain streaks, which makes it easier to directly predict the rain layer. Typically, most deraining methods choose to predict the rain layer for the first reason, however, due to the specificity of our setting, the second reason is more important. We also validate the effectiveness of the way to directly predict the rain layer in Section V-C. As shown in Fig. 1, given the features \(\{F_{PAS}^{i}\mid i\in\{1,2,3\}\}\) generated by PAS, we use three convolutional blocks to progressively reconstruct the negative rain layer. Inspired by [26], we also build a multi-scale supervised attention module (MSAM) to enhance the feature learning with the supervision of a ground-truth clean frame. After obtaining the negative rain layer, we add it with the input target rainy frame to obtain the clean background. The rain layers at three scales will be supervised during the training phase, and we select the output of the last scale as the final reconstructed rain layer when inference. ### _Loss Function_ We apply the negative SSIM loss for computing the distance between the intermediate prediction at each scale and the ground-truth frame. The overall loss is the sum of losses at different scales, which is formulated as: \[\mathcal{L}=-\sum_{i=1}^{3}SSIM\left(I_{d}^{i},I_{gt}\right), \tag{9}\] Fig. 
3: Visualizations of (a) the last channel map of the event voxel grid (blue: positive event; red: negative event), (b) motion mask \(M\) in \(EAMD\) module, (c, d) background and rain features outputted by \(REA\) of \(PAS\) module (two from all channel maps), (e) rainy frame, (f) rain features outputted by the last decoder block (\(DB_{1}\) in Fig. 1), (g) estimated rain layer, and (h) derained frame. Zoom-in for better visualization. where \(I_{d}^{i}\), \(I_{gt}\), \(i\) indicates the derained frame, corresponding ground-truth frame and scale index, respectively. ## V Experiments ### _Experimental Settings_ #### V-A1 Synthetic Datasets To the best of our knowledge, there are no benchmark datasets that provide rainy videos with temporally synchronized event streams and corresponding ground-truth clean videos. Hence, we generate large-scale synthetic datasets for event-based video deraining. Specifically, we first use a video editing software to synthesize rain streaks. We randomly set the parameters, e.g., scale, density, wind direction, camera shutter speed, scene depth and capacity. Afterwards, we choose some video clips as ground-truth, which are overlaid by the generated rain layers to produce rainy videos. We finally choose the open event simulator [50] to generate event streams from rainy videos. In such a way, we generate four synthetic datasets in total. We name them following the pattern "N-D", where "N" indicates Neuromorphic and "D" will be replaced with the name of the original dataset from which the clean videos come. The details of four synthetic datasets are provided below. _N-NTURain_: It is generated from 16 rain-free sequences in the NTURain [9] dataset, which is widely used as a benchmark for video deraining methods, but with re-synthesized rain streaks. Before synthesizing rain streaks, we use the interpolation method proposed in [51] to increase the frame rate for better event simulation. We adopt the default dataset splitting as in [9]. For training, we synthesize 3 to 4 different rain appearances over each clean video, resulting in 25 training videos. For testing, we produce 8 videos with varying rain parameters. _N-GoproRain_: To take the more complex motion information into consideration and further validate the ability of motion separation of \(REA\), we choose GoPro [52] that is built by a high-speed camera for dynamic scene deblurring as the clean video source. GoPro is a more challenging dataset that contains a variety of object motions and camera motions. We adopt the default dataset splitting as in [52]. For training, we synthesize 3 different rain appearances over each clean video, resulting in 66 training videos. For testing, we produce 11 videos with varying rain parameters. _N-AdobeRainH, N-AdobeRainL:_ Similar to RainSynLight25/RainSynComplex25 [10] and Rain100L/H [53], we synthesize two datasets containing only heavy and light rain layers based on Adobe240fps [54], which are captured outdoors at 240fps. Each of them contains 109 training video clips and 19 testing video clips. #### V-A2 Real-World Dataset For real-world evaluation, we construct a real-world dataset using a Color-DAVIS346 camera [17]. This camera has a high-speed event sensor and a low frame-rate active pixel sensor (APS) with a resolution of \(260\times 346\), which produces event streams and low frame-rate frames. We capture rainy videos in real rainy days. 
By doing so, we are able to model the real spatio-temporal distribution of rain streaks along with the light conditions of real scenes. Moreover, we collect the data at different times and carefully control exposure time to obtain the rainy videos with different types of rain streaks under different lighting conditions. Our real-world dataset consists of two groups of data sets, the videos in the first group are captured by a still camera, while those in the second group are captured by a panning camera. Each of them varies from light drizzling to heavy falls. In total there are 10 video clips in our real-world dataset. We name our real-world dataset Rain-DAVIS. #### V-A3 Implementation Details We train our network on random-cropped 128x128 patches with a batch size of 2 for 500 epochs. We use Adam optimizer [61] with the initial learning rate of \(1\times 10^{-4}\), which is steadily decreased to \(1\times 10^{-5}\) using the cosine annealing strategy [62]. We implement our deep model using PyTorch 1.1 [63] and conduct all experiments on an NVIDIA GTX1080Ti GPU. ### _Comparisons with State-of-The-Art Methods_ #### V-B1 Baselines We make comparisons with the state-of-the-art video deraining methods: SAVDTDC [15], S2VD [36], RMFD [35], GTA-Net [37] and single image deraining methods: MPRNet [26], DCSFN [57], PReNet [56], DualResNet [55]. The event-based video deraining method EAVD [46] is also evaluated for comparison. Because our method is trained in a supervised manner, so we follow the instructions in [36] for training a supervised S2VD [36] using ground-truth data for a fair comparison. For training other baselines, we carefully follow the training strategy provided by the authors. The average PSNR and SSIM are used as evaluation metrics. #### V-B2 Results on Synthetic Datasets Table I lists the average PSNR and SSIM results on four synthetic datasets, including N-NTURain, N-GoproRain, N-AdobeRainL, and N-AdobeRainH. Evidently, EGVD attains the best performance. Especially on N-NTURain, N-AdobeRainL, and N-AdobeRainH, EGVD achieves at least 2 dB PSNR gains, which demonstrates that our method obtains better performance in removing rain streaks and preserving clean texture details. This could be attributed to its powerful capability in effectively utilizing the motion features from the neighboring frames and accurately modeling the spatio-temporal distribution of rain layers with the guidance of event data. The probabilistic video deraining method S2VD [36] is less effective in these datasets because it adopts a dynamical rain generator consisting of a transition model and an emission model to represent the dynamics of rains and mimic the generation process of rain layers based on the statistics of rain streaks. It fails to handle complex and heavy rain in these synthetic datasets. With the additional event information, EAVD performs well in N-NTURain and N-AdobeRainL, but relatively worse in the other two datasets. The possible reason is that the scenes in N-GoproRain and N-AdobeRainH usually have heavy rain, which might be hard for EAVD to tackle with. Visual comparison results on the N-NTURain dataset are depicted in Fig. 4. In the first example, state-of-the-art video deraining techniques such as S2VD [36] and SAVDTDC [15] outperform single-image deraining methods like MPRNet [26], Restormer [60], and PReNet [56] due to their incorporation of additional temporal consistency and correlation. However, they exhibit shortcomings in detecting and removing certain small rain streaks. 
EAVD [46] also demonstrates commendable deraining results, albeit with residual rain traces. In contrast, EGVD effectively harnesses contextual information from adjacent frames, accurately identifying rain streaks from a motion perspective. Consequently, EGVD excels in rain streak removal and approaches ground truth quality. In the second example, the frame-based deraining methods struggle to distinguish rain streaks from the background layer, leading to either missed detections or excessive smoothing of non-rain regions resembling rain streaks. Conversely, the event camera, capturing motion information, guides our method in correctly identifying rain streaks and non-rain regions. Consequently, our approach efficiently eliminates rain streaks while preserving clean texture details. To further bolster our claims, we present additional visual comparisons on the N-GoproRain, N-AdobeRainH, and N-AdobeRainL datasets, displayed in Fig. 5, 6, respectively. Clearly, our EGVD method outperforms other deraining techniques across various rain intensities, from light drizzles to heavy downpours. It excels at effectively eliminating rain streaks and accurately restoring intricate texture details, whereas the compared deraining methods struggle to completely eradicate the rain streaks and often incorrectly handle background layer details.

Fig. 4: Visual comparisons on N-NTURain. We visualize the last channel map of the event voxel grid (blue: positive event; red: negative event) in (b). Zoom-in for better visualization. Fig. 5: Visual comparisons on N-GoproRain. We visualize the last channel map of the event voxel grid (blue: positive event; red: negative event) in (b). Zoom-in for better visualization. Fig. 6: Visual comparisons on N-AdobeRainL and N-AdobeRainH. We visualize the last channel map of the event voxel grid (blue: positive event; red: negative event) in (b). Zoom-in for better visualization.

#### V-B3 Generalization on Real-World Dataset We evaluate the generalization capability of representative methods using our real-world dataset, Rain-DAVIS. To ensure a fair comparison, we employ pre-trained models on the N-NTURain dataset to eliminate real rain streaks in Rain-DAVIS. Visual results are presented in Fig. 7. Frame-based methods struggle to detect and remove rain streaks due to differences in rain patterns between synthetic and real rainy videos. Consequently, frame-based rain removal methods yield deraining results with incomplete rain streak removal and some detail loss. In contrast, our method, guided by events, excels in rain streak detection and removal, surpassing the performance of EAVD. As evident in the two examples, the compared methods produce deraining results with incomplete rain streak removal, while our approach successfully restores clearer rain-free frames and preserves fine details.

Fig. 7: Visual comparisons on our real-world dataset Rain-DAVIS. We visualize the last channel map of the event voxel grid (blue: positive event; red: negative event) in (b). Our method is able to remove the rain streaks and restore clearer texture information, while other methods cannot remove some rain streaks and lose details of non-rain regions. Zoom-in for better visualization.

### _Ablation Study_ We examine the efficacy of key components within our approach by a comprehensive series of ablation experiments. Specifically, we scrutinize the impact of various factors, including input data, network modules, mapping ways, and loss functions. All of these ablation experiments are conducted using the N-NTURain dataset as our testbed. #### V-C1 Influence of Input Data We argue that the utilization of event data enhances the video deraining process. To substantiate the efficacy of our approach, we conduct three experiments: **1)** Only frame as input. We remove the event branch (containing operations related to event data) in Fig. 1, thus only frames are taken as input. **2)** Frame + Frame. We keep the event branch but replace its input with the frames. It means the frame/event branch processes the same frames. **3)** Frame + Event. We take frames and events as input, which is the main method in this paper. We present the numerical results in Table II and the visual results in Fig. 8. It can be clearly observed that the setting of frame + event achieves the best result, providing visually pleasing derained images. In Fig. 8 (d), (e), we observe that the white pillars are misjudged and removed as rain streaks, which causes the loss of texture details. Meanwhile, when we keep the event branch but replace its input with frames, the additional rainy frames fail to give positive cues for the removal of rain streaks but instead burden the deraining task. In contrast, with the guidance of event data, our proposed method correctly identifies the rain region, which avoids losing the details of the background, especially for the region which is similar to the rain streaks.

Fig. 8: Visual results of ablation on input data. We visualize the last channel map of the event voxel grid (blue: positive event; red: negative event) in (b). Zoom-in for better visualization.

#### V-C5 Influence of the Number of Time Bins We examine how the number of bins in the event voxel grid affects our investigation. We evenly distribute events into varying bin counts for our experiments. In Table VI, we find that our method is not significantly affected by the number of time bins. We identify a 10-bin voxel grid as the optimal choice and apply it in all previous experiments. ## VI Conclusion In this paper, we present a learning-based framework for addressing the video deraining task using an event camera. Our approach comprises two key components: an event-aware motion detection module and a pyramidal adaptive selection module. These modules are designed to effectively enhance motion-aware regions and extract rain layers. Furthermore, we have curated a real-world dataset specifically for event-based video deraining. We provide quantitative and qualitative evidence showcasing the superiority of our method compared to state-of-the-art techniques, across both synthetic and real-world datasets. We anticipate that the concepts we propose can find application in restoring clear videos in adverse weather conditions like snow, hail, and sandstorms, and we plan to explore these possibilities in our future work.
2309.10556
Forgedit: Text Guided Image Editing via Learning and Forgetting
Text-guided image editing on real or synthetic images, given only the original image itself and the target text prompt as inputs, is a very general and challenging task. It requires an editing model to estimate by itself which part of the image should be edited, and then perform either rigid or non-rigid editing while preserving the characteristics of the original image. In this paper, we design a novel text-guided image editing method, named Forgedit. First, we propose a vision-language joint optimization framework capable of reconstructing the original image in 30 seconds, much faster than the previous SOTA and much less prone to overfitting. Then we propose a novel vector projection mechanism in the text embedding space of Diffusion Models, which can control identity similarity and editing strength separately. Finally, we discovered a general property of the UNet in Diffusion Models, i.e., the UNet encoder learns space and structure, while the UNet decoder learns appearance and identity. With such a property, we design forgetting mechanisms to successfully tackle the fatal and inevitable overfitting issues when fine-tuning Diffusion Models on one image, thus significantly boosting the editing capability of Diffusion Models. Our method, Forgedit, built on Stable Diffusion, achieves new state-of-the-art results on the challenging text-guided image editing benchmark TEdBench, surpassing previous SOTA methods such as Imagic with Imagen, in terms of both CLIP score and LPIPS score. Codes are available at https://github.com/witcherofresearch/Forgedit
Shiwen Zhang, Shuai Xiao, Weilin Huang
2023-09-19T12:05:26Z
http://arxiv.org/abs/2309.10556v2
# Forgedit: Text Guided Image Editing via Learning and Forgetting ###### Abstract Text guided image editing on real images given only the image and the target text prompt as inputs, is a very general and challenging problem, which requires the editing model to reason by itself which part of the image should be edited, to preserve the characteristics of original image, and also to perform complicated non-rigid editing. Previous fine-tuning based solutions are time-consuming and vulnerable to overfitting, limiting their editing capabilities. To tackle these issues, we design a novel text guided image editing method, Forgedit. First, we propose a novel fine-tuning framework which learns to reconstruct the given image in less than one minute by vision language joint learning. Then we introduce vector subtraction and vector projection to explore the proper text embedding for editing. We also find a general property of UNet structures in Diffusion Models and inspired by such a finding, we design forgetting strategies to diminish the fatal overfitting issues and significantly boost the editing abilities of Diffusion Models. Our method, Forgedit, implemented with Stable Diffusion, achieves new state-of-the-art results on the challenging text guided image editing benchmark TEdBench, surpassing the previous SOTA method Imagic with Imagen, in terms of both CLIP score and LPIPS score. Codes are available at [https://github.com/witcherofresearch/Forgedit](https://github.com/witcherofresearch/Forgedit). ## 1 Introduction Image Editing (Oh et al., 2001) is a fundamental problem in computer vision. In order to edit the image, there should be a guidance condition to inform the model what is the editing target. Language is the most direct and general form of such editing guidance, in which case the editing task is called text guided image editing. Such a text describing the content of the desired edited image is usually called target prompt. In this paper, we are trying to tackle text guided image editing in the toughest setting with only original image and target prompt provided, which are the minimum requirements of input for text guided image editing. Text guided image editing is a very general and universal editing task, which includes both rigid and non-rigid editing, for example, editing the appearance, identity and style, replacing or adding or removing certain parts of the image, editing the pose, action and angles of the objects, editing multiple objects of complex relationships, controlling the numbers and positions of the objects, etc. According to whether fine-tuning process is involved, the solutions to text guided image editing are divided into non-optimization methods and optimization involved methods. There are various works for non-optimization editing, for example, ControlNets(Zhang & Agrawala, 2023), Diffusion based Inpainting Models (Rombach et al.), SDEdit (Meng et al., 2021), PnP Diffusion (Tumanyan et al., 2023), instruct pix2pix (Brooks et al., 2023), DiffEdit (Couairon et al., 2023) etc. However, we found that none of them, are strong enough to preserve the characteristics and perform sophisticated non-rigid edits at the same time. Thus, it is essential to fine-tune the Diffusion Models with the original image in order to preserve the identity of the objects. Imagic (Kawar et al., 2023) is a three-stage text guided image editing method, which regards the target prompt as a pseudo source prompt to describe the original image. 
In the first stage, Imagic fine-tunes the source prompt text embedding, freezing everything else. In the second stage, Imagic fine-tunes the UNet, freezing other parameters. In the third stage, Imagic interpolates fine-tuned source prompt embedding and target prompt embedding and completes the editing by utilizing the interpolated text embedding to guide text-to-image generation. Imagic equipped by Imagen (Saharia et al., 2022) is the current state-of-the-art text guided image editing algorithm. However, such multi-stage fine-tuning process takes long time and costs great amount of computation resources. Another possible solution is a popular fine-tuning method, DreamBooth (Ruiz et al., 2023), which can be further adapted and improved to perform text guided image editing. Instead of requiring a user provided prompt 'a [V] object' to refer to the editing object, we utilize BLIP (Li et al., 2022) to generate a caption to describe the original image. Such BLIP+DreamBooth combinations are capable of conducting non-rigid edits and preserving consistent characteristics of original image, demonstrating amazing semantic alignments with the target prompt and high fidelity to original image. However, Both Imagic and BLIP+DreamBooth suffer from overfitting in many cases, restricting the editing capability of the Diffusion Models. In this paper, we are going to tackle the aforementioned issues of these optimization based editing methods. We name our text guided image editing method _Forgedit_, similar with _forget it_. There are two stages in our editing method, fine-tuning and editing. Overall, with BLIP (Li et al., 2022) generating source prompt, we design a novel vision language joint optimization fine-tuning framework, which can efficiently learn to reconstruct the original image with source text embedding and UNet in less than one minute on one A100 GPU, much faster than Imagic(Kawar et al., 2023) considering the fact that Imagic+Stable Diffusion (Rombach et al.) takes 7 minutes to fine-tune on an A100 GPU. Besides, we explore two different methods to merge the source text embedding and target text embedding, vector subtraction which sums source prompt embedding with a weighted subtraction of target prompt embedding and source prompt embedding, and vector projection which decomposes the target prompt embedding along source prompt embedding and orthogonal to the source prompt embedding then sum these two vectors with two coefficients. We found that vector subtraction is better at editing yet vector projection is better at preserving the characteristics of original image during editing. Finally, our Forgedit aims to tackle the overfitting issue existing in previous optimization based editing methods. Due to such overfitting issues, for many cases, text guided image editing methods are only capable of reconstructing the original image, losing their capabilities to edit. Simple solutions may be trying different learning rates and training steps or selecting proper parameters of the Diffusion Models to fine-tune. Yet, there are no silver bullets to find a group of proper hyper-parameters for each editing image thus such hyper-parameter searching for fine-tuning process could be very inefficient and resource consuming. Instead, we propose novel **Forgetting Strategies** to tackle the overfitting issue during **sampling** process. Compared with fine-tuning process, sampling process is more computation-efficient. 
Such forgetting strategies are designed based on our observation of a universal property of UNet structures in Diffusion Models. We found that the Encoder of UNets controls the pose, action, angles, spatial positions meanwhile the Decoder of UNets is in charge of appearance and textures. We could replace the learned parameters of the UNets with original parameters according to the purpose of target prompt, which we call **forgetting**. To sum up, our main contributions are: 1. We present Forgedit, a novel efficient optimization based image editing framework to tackle general text guided image editing problem, capable of performing both rigid and non-rigid editing. 2. We introduce vector projection mechanism to merge source text embedding and target text embedding, which is generally better at preserving the characteristics of original image than vector subtraction. 3. We design novel forgetting strategies based on our observation of UNets' properties in Diffusion Models, tackling the common and fatal overfitting issues in optimization involved text guided image editing methods thus significantly improve the editing capability of Diffusion Models. Our Forgedit implemented with even the outdated Stable Diffusion 1.4 achieves new state-of-the-art quantitative results on the challenging benchmark TEdBench (Kawar et al., 2023), surpassing previous SOTA Imagic equipped with Imagen in terms of both CLIP score (Hessel et al., 2021) and LPIPS score (Zhang et al., 2018). Our Forgedit is a very general text guided image editing method, which can also significantly boost the performance of other fine-tuning based text guided image editing method, which we will show in the appendix. ## 2 Related work **Text to Image Diffusion Models** Diffusion Models have dominated text to image generation. DDPM(Ho et al., 2020) improves Diffusion process proposed by Sohl-Dickstein et al. (2015) on generating images. DDIM (Song et al., 2021) accelerates the sampling procedure of Diffusion Models by making reverse process deterministic and using sub-sequence of time-steps. Dalle 2 (Ramesh et al., 2022) trains a diffusion prior to convert a text caption to CLIP (Radford et al., 2021) image embedding and then employs a Diffusion Decoder to transfer the generated CLIP image embedding to an image. Imagen (Sahara et al., 2022) is a Cascaded Diffusion Model (Ho et al., 2021), whose UNet is composed of three Diffusion Models generating images with increasing resolutions. Also, Imagen employs the powerful T5 text encoder (Raffel et al., 2020), which turns out to be vital for complex semantic understanding and generating sophisticated scenarios. Stable Diffusion (Rombach et al.) utilizes Variational AutoEncoders (Kingma & Welling, 2014) to compress the training image to a compact latent space so that the UNets could be trained with low resolution latents in order to save computational resources. **Image Editing with Diffusion Models** Empowered by recent progress in text-to-image Diffusion Models, image editing methods have witnessed remarkable improvements. There are various works for non-optimization editing. ControlNets (Zhang & Agrawala, 2023) are trained on extra datasets to learn generating images with different conditions. However, these conditions only reflect partial attributes of the original image thus ControlNets are incapable of preserving the identity of the object being edited and also struggle to conduct non-rigid edits. Inpainting Models based on Diffusion Models(Rombach et al.) 
require masks to indicate the editing region, for whom the target mask can be obtained via semantic segmentation models by using a text prompt to refer to. Such text guided Inpainting Models are good at replacing or removing objects, better than other text guided image editing models in terms of preserving non-edited details of original image. However, there are several disadvantages of text guided inpainting models. First, these models cannot preserve the identity of the object being edited. Second, due to the restricts of the region of masks, inpainting models cannot conduct non-rigid editing, for example, making a bird perching on the branch spread its wings. Third, extra masks or texts to refer to the target objects in original image has to be provided, which is not possible in our case since there are only target prompt and original image given in our settings. SDEdit (Meng et al., 2021) utilizes DDIM Inversion to add noises to the original image and then denoises the image with target prompt. DiffEdit (Couairon et al., 2023) obtains the target object mask with Diffusion Model itself by a user provided source prompt and conduct SDEdit in the mask region. PnP Diffusion (Tumanyan et al., 2023) injects intermediate features of original image to the generation of target prompt. Instruct pix2pix (Brooks et al., 2023) pretrains the Diffusion Models on external datasets with triplets of original image, edited image and target prompt. All these non-optimization methods suffer from the fact that they are either incapable of preserving the characteristics or unable to conduct complex non-rigid editing. Prompt to Prompt (Hertz et al., 2023) requires that the source prompt and target prompt must be provided in a precisely matching form so that the algorithm could accurately find the editing target, which is too ideal thus impossible in our setting. Imagic (Kawar et al., 2023) is a three-stage optimization based editing method, which is the current state-of-the-art text guided image editing algorithm, which could be regarded as a combination of textual inversion (Gal et al., 2023)in the first stage and DreamBooth (Ruiz et al., 2023)in the second stage. However, the fine-tuning stages of Imagic are very slow and suffer from overfitting. ## 3 Forgedit ### Problem settings Given a target prompt and an image, text guided image editing edits the image according to the target prompt, which not only requires the editing being conducted well, but also needs to preserve everything else unchanged. In this paper we try to tackle the text guided image editing problem with the condition that only target prompt and original image are provided, which means that the model should reason by itself which part of the original image is inconsistent with the target prompt and conduct the edit. We aim to design a general editing method, which is capable of conducting different kinds of edits including both rigid and non-rigid editing. ### Preliminaries Diffusion models (Ho et al., 2020; Sohl-Dickstein et al., 2015) start from the given image \(x_{0}\), and then progressively add Gaussian Noise \(\epsilon_{t}\sim\mathcal{N}(0,1)\) in each timestep \(t\) to get \(x_{t}\). In such a diffusion process, \(x_{t}\) can be directly calculated for each timestep \(t\in\{0,...,T\}\), \[x_{t}=\sqrt{\alpha_{t}}x_{0}+\sqrt{1-\alpha_{t}}\epsilon_{t} \tag{1}\] with \(\alpha_{t}\) being the diffusion schedule parameters with \(0=\alpha_{T}<\alpha_{T-1}...<\alpha_{1}<\alpha_{0}=1\). 
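A minimal sketch of Eq. (1) in code (illustrative only; the linear beta schedule below is a generic choice, and `alphas` stores the cumulative schedule values \(\alpha_t\) in this paper's notation):

```python
import torch

def q_sample(x0, t, alphas, noise=None):
    """Eq. (1): x_t = sqrt(alpha_t) * x_0 + sqrt(1 - alpha_t) * eps_t.

    `alphas` holds the cumulative schedule, so one indexing step jumps
    directly from x_0 to any timestep t without iterating.
    """
    if noise is None:
        noise = torch.randn_like(x0)
    a = alphas[t].view(-1, 1, 1, 1)              # broadcast over (B, C, H, W)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # illustrative linear schedule
alphas = torch.cumprod(1.0 - betas, dim=0)       # decreases from ~1 towards 0
x0 = torch.randn(2, 4, 64, 64)                   # e.g. VAE latents z_0 in the latent-diffusion case
t = torch.randint(0, T, (2,))
x_t = q_sample(x0, t, alphas)
```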
Given \(x_{t}\) and text embedding \(e\), the time-conditional UNets \(\epsilon_{\theta}(x_{t},t,e)\) in diffusion models predict the random noise \(\epsilon_{t}\) added to \(x_{t-1}\). With DDIM (Song et al., 2021), the reverse process is \[x_{t-1}=\frac{\sqrt{\alpha_{t-1}}}{\sqrt{\alpha_{t}}}(x_{t}-\sqrt{1-\alpha_{t }}\epsilon_{\theta}(x_{t},t,e))+\sqrt{1-\alpha_{t-1}}\epsilon_{\theta}(x_{t}, t,e) \tag{2}\] With Latent Diffusion Models (Rombach et al.), the \(x_{0}\) is replaced by the latent \(z_{0}\) from VAE Encoder \(\varepsilon(x_{0})\). The training loss is \[L=\mathbb{E}_{z_{t},\epsilon_{t},t,e}||\epsilon_{t}-\epsilon_{\theta}(z_{t},t,e)||_{2}^{2} \tag{3}\] ### Joint fine-tuning In order to tackle such challenging text guided image editing problems, we have to fine-tune the model to remember the concepts and reconstruct the image. It is worth noting that although DDIM inversion (Song et al., 2021) could reconstruct the original image, the given text prompt has to be an empty string. If the given text prompt is not empty, DDIM inversion is not able to reconstruct original image precisely and often leads to significant appearance shift (Hertz et al., 2023; Meng et al., 2021). Thus it is necessary to optimize the network for high quality reconstruction and semantic understanding. Shown in Figure 1, we introduce the overall design of our vision language joint optimization framework. **source prompt** We first use BLIP (Li et al., 2022) to generate a caption describing the image, which we call source prompt. The source prompt is then fed to text encoder of Stable Diffusion (Rombach et al.) to get the text embedding \(e_{src}\). Previous three-stage editing method Imagic (Kawar et al., 2023) regards target prompt text embedding as \(e_{src}\). We found that it is essential to use the BLIP caption as source prompt instead of using the target prompt as a pseudo source prompt like Imagic. Otherwise such fine-tuning methods easily lead to overfitting issues. We also found that using the BLIP caption as source prompt leads to better semantic alignment with the given image, thus leads to better editing results. **vision language joint learning** We choose to optimize the encoder 0, 1, 2 and decoder 1, 2, 3 in the UNet structure. Similar with Imagic, we regard source text embedding as parameters of the network. Yet different with Imagic, we found it vital to jointly optimize the source text embedding and UNet parameters for faster learning and better reconstruction quality. Although trained together, we use a learning rate of \(10^{-3}\) for source text embedding and \(6\times 10^{-5}\) for UNet, with Adam Optimizer (Kingma & Ba, 2015). For faster training, since we only have one training image, we repeat the tensors on batch dimension for batch-wise optimization with batch size 10. We use mean square error loss and empirically we also make sure that the final loss is less than 0.03 for stable reconstruction quality. With batch size set to 10, the models are fine-tuned for 35 to 40 steps. Once the loss is less than 0.03 after 35 steps, the training stops. The model is fine-tuned for 40 steps at most. This fine-tuning process takes less than 1 minute on one A100 GPU. The training loss is \[L=\mathbb{E}_{z_{t},\epsilon_{t},t,e_{src}}||\epsilon_{t}-\epsilon_{\theta,e_{ src}}(z_{t},t,e_{src})||_{2}^{2} \tag{4}\] whose difference with loss 3 is that \(e_{src}\) is also considered parameters to optimize. 
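To make the above concrete, here is a minimal sketch of the joint fine-tuning loop using a diffusers-style Stable Diffusion pipeline. It is illustrative rather than the authors' implementation: the mapping of "encoder 0, 1, 2 / decoder 1, 2, 3" onto diffusers `down_blocks`/`up_blocks` indices, the placeholder caption, and the dummy input image are assumptions; the learning rates, batch repetition, step count, and loss threshold follow the text above.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
unet, vae, text_enc, tok = pipe.unet, pipe.vae, pipe.text_encoder, pipe.tokenizer
alphas = pipe.scheduler.alphas_cumprod

# Source prompt: in Forgedit this caption comes from BLIP; a placeholder is used here.
caption = "a photo of a bird perching on a branch"
ids = tok(caption, padding="max_length", max_length=tok.model_max_length,
          return_tensors="pt").input_ids
with torch.no_grad():
    e_src = text_enc(ids)[0]                      # (1, 77, 768) source embedding
e_src = e_src.clone().requires_grad_(True)        # treated as trainable parameters

# Fine-tune only the shallow UNet blocks; this index mapping is an assumption.
unet.requires_grad_(False)
unet_params = []
for i in (0, 1, 2):
    unet_params += list(unet.down_blocks[i].parameters())
for i in (1, 2, 3):
    unet_params += list(unet.up_blocks[i].parameters())
for p in unet_params:
    p.requires_grad_(True)

opt = torch.optim.Adam([{"params": [e_src], "lr": 1e-3},       # text embedding lr
                        {"params": unet_params, "lr": 6e-5}])  # UNet lr

image = torch.rand(1, 3, 512, 512) * 2 - 1        # stand-in for the original image in [-1, 1]
with torch.no_grad():
    z0 = vae.encode(image).latent_dist.sample() * 0.18215
z0 = z0.repeat(10, 1, 1, 1)                        # batch-wise repetition (batch size 10)

for step in range(40):                             # at most 40 steps
    t = torch.randint(0, len(alphas), (z0.shape[0],))
    noise = torch.randn_like(z0)
    a = alphas[t].view(-1, 1, 1, 1)
    z_t = a.sqrt() * z0 + (1 - a).sqrt() * noise   # forward diffusion applied to latents
    pred = unet(z_t, t, encoder_hidden_states=e_src.expand(z0.shape[0], -1, -1)).sample
    loss = torch.nn.functional.mse_loss(pred, noise)   # Eq. (4)
    opt.zero_grad(); loss.backward(); opt.step()
    if step >= 35 and loss.item() < 0.03:          # early stop once reconstruction is stable
        break
```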
Now, with joint fine-tuning, we are capable to reconstruct the original image given the optimized source text embedding \(e_{src}\). ### Reasoning and Editing We first input the target prompt to CLIP (Radford et al., 2021) text encoder of Stable Diffusion (Rombach et al.) to get the target text embedding \(e_{tgt}\). With our learned source text embedding \(e_{src}\), we propose two methods to merge \(e_{src}\) and \(e_{tgt}\) so that the merged text embedding edits the original image according to the target prompt and preserves the unrelated details of original image. Given \(e_{src}\in\mathbb{R}^{B\times N\times C}\) and \(e_{tgt}\in\mathbb{R}^{B\times N\times C}\), we conduct all vector operations on the \(C\) dimension to get the final text embedding \(e\). **Vector Subtraction** We use the same interpolation method as Imagic (Kawar et al., 2023), \[e=\gamma e_{tgt}+(1-\gamma)e_{src}=e_{src}+\gamma(e_{tgt}-e_{src}) \tag{5}\] Shown in Figure 2, the final text embedding \(e\) is obtained by travelling along vector subtraction \(e_{tgt}-e_{src}\). In our experiments, we found that in most cases, \(\gamma\) goes beyond 1 when the editing is Figure 1: We show the overall framework of Forgedit. We use BLIP to describe the original image and get the source text embedding \(e_{src}\) with CLIP text encoder of Stable Diffusion. The source text embedding \(e_{src}\) is jointly optimized with UNet with different learning rate for text embedding and UNet, with UNet’s deep layers frozen. During editing process, we merge source text embedding \(e_{src}\) and target text embedding \(e_{tgt}\) with vector subtraction or vector projection to get final text embedding \(e\). With forgetting strategies on UNet parameters, we utilize DDIM sampling to get the final edited image. successful. This leads to the problem that the distance of final embedding \(e\) and source embedding \(e_{src}\) may be so far that the appearance of the edited object could change vastly. **Vector Projection** We propose to use vector projection to better preserve the appearance of the original image. Shown in the Figure 2, we decompose the target prompt text embedding \(e_{tgt}\) into a vector along \(e_{src}\) and a vector orthogonal to \(e_{src}\). We call the orthogonal vector \(e_{edit}\). We first calculate the ratio \(r\) of the projected vector on \(e_{src}\) direction. \[r=\frac{e_{src}e_{tgt}}{||e_{src}||^{2}} \tag{6}\] Thus, we could get the \(e_{edit}\) by \[e_{edit}=e_{tgt}-re_{src} \tag{7}\] In order to better preserve the original image details, we sum \(e_{src}\) and \(e_{edit}\) with coefficient \(\alpha\) and \(\beta\), \[e=\alpha e_{src}+\beta e_{edit} \tag{8}\] **Editing** We use DDIM sampling (Song et al., 2021) with classifier free guidance (Ho, 2022) to conduct the edit. The guidance scale is 7.5. For vector subtraction, we iterate over a range of \(\gamma\in[0.8,1.6]\). For vector projection, we choose \(\alpha\) from two values \(\{0.8,1.1\}\), \(\beta\) from a range of [1.0,1.5] ### Forgetting Strategies In some cases the network still overfits since there is only one training image. The fine-tuning process is computational expensive compared to sampling process, thus we design forgetting strategies during sampling process to tackle the overfitting problem. 
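Before detailing the forgetting strategies, note that the two embedding-merging rules above (Eqs. (5)-(8)) reduce to a few lines of tensor algebra over the channel dimension. The sketch below uses hypothetical tensors of Stable Diffusion's text-embedding shape and is not taken from the released code:

```python
import torch

def merge_subtraction(e_src, e_tgt, gamma):
    """Eq. (5): e = e_src + gamma * (e_tgt - e_src); gamma is swept over [0.8, 1.6]."""
    return e_src + gamma * (e_tgt - e_src)

def merge_projection(e_src, e_tgt, alpha, beta):
    """Eqs. (6)-(8): decompose e_tgt along e_src and orthogonal to it, then re-mix.

    Operations are taken over the channel dimension C of (B, N, C) embeddings,
    independently for each of the N token positions.
    """
    r = (e_src * e_tgt).sum(-1, keepdim=True) / (e_src * e_src).sum(-1, keepdim=True)  # Eq. (6)
    e_edit = e_tgt - r * e_src                                                          # Eq. (7)
    return alpha * e_src + beta * e_edit                                                # Eq. (8)

# Example with the shapes produced by Stable Diffusion's text encoder (B=1, N=77, C=768).
e_src = torch.randn(1, 77, 768)   # optimized source embedding after fine-tuning
e_tgt = torch.randn(1, 77, 768)   # target prompt embedding
e = merge_projection(e_src, e_tgt, alpha=0.8, beta=1.2)
```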
The network is only fine-tuned once, and can be converted to multiple different networks during sampling process by merging certain fine-tuned parameters \(w_{learned}\) and the corresponding original UNet parameters before fine-tuning \(w_{orig}\) with coefficient \(\sigma\). In practice, we found that \(\sigma=0\) works in general, which means that we simply Figure 2: We demonstrate vector subtraction and vector projection to merge \(e_{src}\) and \(e_{tgt}\). Vector subtraction could leads to inconsistent appearance of the object being edited since it cannot directly control the importance of \(e_{src}\). The vector projection decompose the \(e_{tgt}\) into \(re_{src}\) along \(e_{src}\) and \(e_{edit}\) orthogonal to \(e_{src}\). We can directly control the scale of \(e_{src}\) and \(e_{edit}\) by summation. replace the fine-tuned parameters with original parameters so that the network completely forgets these learned parameters. \[w=\sigma w_{learned}+(1-\sigma)w_{orig} \tag{9}\] Yet it remains a problem which parameters should be forgotten. Shown in Figure 3, we found interesting properties of UNets in Diffusion Models. The encoder of UNets learns the pose, angle and overall layout of the image. The decoder learns the appearance and textures instead. If the target prompt tends to edit the pose and layout, we choose to forget parameters of encoder. If the target prompt aims to edit the appearance, the parameters of decoder should be forgotten. Currently we only apply the forgetting strategies when text embeddings \(e\) is obtained by vector subtraction in previous section. For editing with forgetting strategies, we iterate over a range of \(\gamma\in[0.0,1.4]\). For different settings of forgetting strategies, we explore their effects in the ablation study, shown in Figure 5 and Figure 6. ### Limitations There are at least three limitations of our Forgedit. First of all, although our fine-tuning framework has been optimized and is much faster than Imagic, the fine-tuning process still takes tens of seconds or even more depending on the GPU devices. We will explore in the future whether it is possible to preserve high fidelity characteristics of the original image without fine-tuning. Second, the effect of Forgedit is influenced by randomness. The fine-tuning process inevitably introduces randomness thus for some particular cases, we cannot guarantee to perfectly reconstruct the details of original image thus we have to run the fine-tuning stage several times for these challenging cases. The sampling procedure is also related to the initial random seed of reverse process, thus for some extremely challenging cases we have to sample tens of images or even hundreds, though rarely the case, before we could get a proper edited one. Third, the editing capability of Forgedit is restricted by the utilized Diffusion Model. If the target prompt cannot even be generated by the Diffusion Model itself, it is almost impossible to accomplish the edit according to the target prompt. For example, the prompt 'a sitting flamingo' cannot be generated by Stable Diffusion at all, thus Forgedit cannot successfully edit it either. Such an issue could possibly be solved by switching to better Diffusion Models. Figure 3: The encoder parameters of UNets learn features related to pose, angle, structure and position. The decoder parameters are related to appearance and texture. Thus we could design forgetting strategies according to the editing intention. 
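As a concrete illustration of Eq. (9) and of the encoder/decoder observation summarized in Figure 3, the sketch below restores selected UNet parameters to their pre-fine-tuning values at sampling time. It assumes a diffusers-style UNet in which `down_blocks` play the role of the encoder and `up_blocks` the decoder; the name-based selection and the helper itself are illustrative, not the authors' code.

```python
import torch

def forget(unet, original_state, scope="encoder", sigma=0.0):
    """Eq. (9): w = sigma * w_learned + (1 - sigma) * w_orig, applied selectively.

    scope="encoder" forgets the fine-tuned encoder (structure/pose) weights while
    keeping the learned decoder, and vice versa for scope="decoder"; sigma = 0
    replaces the selected weights entirely with the original ones. Finer variants
    such as 'decoderattn' would additionally keep attention sub-layers by also
    testing for "attentions" in the parameter name.
    """
    prefix = "down_blocks." if scope == "encoder" else "up_blocks."
    with torch.no_grad():
        for name, param in unet.named_parameters():
            if name.startswith(prefix):
                w_orig = original_state[name].to(param.device, param.dtype)
                param.copy_(sigma * param + (1.0 - sigma) * w_orig)
    return unet

# Usage: snapshot the pristine weights once, fine-tune, then forget before sampling.
# original_state = {k: v.detach().clone() for k, v in unet.state_dict().items()}
# ... joint fine-tuning ...
# unet = forget(unet, original_state, scope="encoder")  # enable pose/layout edits, keep appearance
```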
## 4 Experiments ### Benchmark TEdBench (Kawar et al., 2023) is one of the most difficult publicly available text guided image editing benchmarks. There are 100 edits in the benchmark, with one target prompt and one image for each edit. These target prompts are very general and varied, including but not limited to changing the appearance of objects, replacing certain parts of the image, changing the position, action and number of the object, and editing multiple objects with complex interactions. In particular, the non-rigid edits turn out to be very tough for many SOTA text-guided image editing methods. In terms of quantitative evaluation, we utilize CLIP Score (Hessel et al., 2021) to measure semantic alignment with the target prompt and LPIPS score (Zhang et al., 2018) to indicate fidelity to the original image. ### Ablation Study **vector subtraction vs vector projection** We compare two different reasoning methods to merge \(e_{src}\) and \(e_{tgt}\) to get the final text embedding \(e\), shown in Figure 4. For the dog and the cat example, vector projection preserves the appearance of the dog and cat better than vector subtraction. However, for the glass of milk and cookie example, vector subtraction is better than vector projection. In this example, vector projection struggles to change the milk to juice and also introduces wave-like blurs in the image. We observe similar phenomena in many other cases for vector projection, which demonstrates that it is more suitable for edits where the identity of the object should be kept instead of changed. These two methods are complementary to each other in many cases, with vector projection better at preserving the identity and vector subtraction better at editing. **forgetting strategies** Although the forgetting strategies strengthen the editing ability of the model, forgetting parameters inevitably degrades reconstruction quality slightly. For example, shown in Figure 3, for encoder or decoder, we retain all parameters related to self attention and cross attention, forgetting the rest, which are called 'encoderattn' in Figure 6 and 'decoderattn' in Figure 5. We found that there are certain unwanted changes unrelated to the target prompt, which are the side effects of forgetting strategies. For each column, the background of the image changes a little bit, the white bird disappears, the knife is gone, the branch no longer exists, the appearance of the tree changes. Figure 4: comparison of vector subtraction and vector projection to reason the final text embedding \(e\). In fact, these two methods are on par in many cases, yet complementary on the others. We also experiment with different extents of forgetting. In Figure 5, we explore different decoder forgetting strategies. With all fine-tuned parameters of the encoder preserved and all decoder parameters forgotten, we gradually add fine-tuned parameters back to the decoder. 'decoderattn2kv' means that we use fine-tuned parameters of the decoder cross attention key and value matrices. Since all the fine-tuned encoder parameters are preserved, the overall structure of the image and the pose of the objects being edited are almost identical to the original image, yet the appearance and textures are changed. 'decoderattn' indicates that we utilize all learned self attention and cross attention parameters in the decoder. This is our default setting since it is rather general. More appearance and texture features of the original image are preserved in such a setting.
'decoderattn+decoder2' refers to the forgetting strategy in which we preserve all learned self attentions and cross attentions of the decoder plus the decoder2 block. The position of the decoder2 block is shown in Figure 1. More details are preserved for some edits, yet for others the editing ability of our method is lost due to overfitting. In the last column of the figure, we show the editing results obtained by using all fine-tuned parameters. We also explore different forgetting strategies for the encoder in Figure 6. 'noencoder' indicates that we forget all learned parameters of the encoder and only use the learned decoder parameters for sampling. 'encoderattn' refers to the strategy in which we preserve all the parameters of self attention and cross attention. With the 'encoderattn+encoder1' strategy, we preserve the encoder self attention, cross attention and the encoder1 block. All the other parameters of the encoder are forgotten.

### Comparison with State-of-the-art

We compare our Forgedit with multiple SOTA text-guided image editing methods in Figure 7. For non-optimization text-guided image editing methods, we choose to compare with the most representative method, SDEdit (Meng et al., 2021). We found that SDEdit struggles to preserve the identity of the edited objects in most cases. We also compare with a very strong kind of optimization-based method, which we call 'BLIP+DreamBooth'. In order for such methods to be applied to text-guided image editing, we utilize BLIP (Li et al., 2022) to generate captions describing the original image, as in our Forgedit. With the caption, we train the UNet to reconstruct the original image and edit the image by directly using the target prompt to guide the fine-tuned UNet for image generation, shown in the 3rd column of Figure 7. We also experiment with an improved version by training the text encoder and UNet at the same time, shown in the 4th column. Such simple fine-tuning of the UNet and text encoder is actually a very powerful text-guided image editing method, also called 'DreamBooth' (Ruiz et al., 2023) in some literature. The difference is that our BLIP+DreamBooth uses a BLIP-generated caption, whereas the original DreamBooth requires a user-provided caption in the special form 'a [V] object' referring to the object to be reconstructed. Following the settings of DreamBooth (Ruiz et al., 2023), we use a learning rate of \(5\times 10^{-6}\) for both the text encoder and the UNet, with a batch size of 4. We train BLIP+DreamBooth with one image for 100 steps, which takes more than one minute on an A100 GPU. Unlike the original DreamBooth, which needs 3 to 4 images to learn a new object concept, we found that with BLIP+DreamBooth one image is enough to reconstruct the majority of the features of the original image. It is obvious that BLIP+DreamBooth methods are much better at preserving the identities and backgrounds than SDEdit. However, BLIP+DreamBooth, when only the UNet is fine-tuned, suffers from underfitting since it cannot preserve the identity of the objects in many cases. BLIP+DreamBooth suffers from overfitting in many cases when the text encoder and UNet are jointly fine-tuned.

Figure 5: We explore different forgetting strategies for the decoder. All learned encoder parameters are preserved. In the second to fourth columns, we preserve the decoder cross attention parameters k and v, the decoder self attention and cross attention, and the decoder self attention and cross attention plus the entire decoder2 block, forgetting all the other parameters of the decoder.
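The BLIP-generated source caption used by both Forgedit and the BLIP+DreamBooth baseline can be obtained in a few lines with the Hugging Face transformers library; the checkpoint name below is an assumption made for illustration, since the exact BLIP weights are not specified here.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Assumed public captioning checkpoint (hypothetical choice for this sketch).
ckpt = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(ckpt)
model = BlipForConditionalGeneration.from_pretrained(ckpt)

image = Image.open("original.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
source_prompt = processor.decode(caption_ids[0], skip_special_tokens=True)
print(source_prompt)  # caption describing the original image
```

The caption then plays the role of the source prompt during reconstruction fine-tuning, with the user-supplied target prompt used only at sampling time.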
In fact, we found that our Forgedit can also be simply adapted to help tackle such overfitting issues of BLIP+DreamBooth, as shown in the appendix, which again demonstrates the strong generalization of the Forgedit framework across various optimization-based editing methods. We also compare with the SOTA two-stage text-guided image editing method, Imagic (Kawar et al., 2023). We use Stable Diffusion (Rombach et al., 2022) and Imagen (Saharia et al., 2022) as the diffusion models for Imagic, shown in the 5th and 6th columns of Figure 7 respectively. Imagic with Stable Diffusion suffers greatly from overfitting, leading to few successful edits. Imagic with Imagen is the current SOTA on TEdBench, demonstrating very strong editing abilities and preserving the original identities well in most cases. Our method, Forgedit, shown in the last column, though using the inferior Stable Diffusion as its diffusion model for editing, is generally on par with Imagic with Imagen in most cases, and sometimes better. Also, our Forgedit with Stable Diffusion surpasses the current SOTA Imagic+Imagen on the TEdBench benchmark in terms of both CLIP Score and LPIPS Score, shown in Table 1.

\begin{table} \begin{tabular}{c|c|c} Editing method & CLIP Score \(\uparrow\) & LPIPS Score \(\downarrow\) \\ \hline Imagic+Imagen (Kawar et al., 2023) & 0.748 & 0.537 \\ Forgedit+SD (ours) & **0.771** & **0.534** \\ \end{tabular} \end{table} Table 1: Our Forgedit with Stable Diffusion is the new state-of-the-art text-guided image editing method on the challenging benchmark TEdBench, surpassing the previous SOTA Imagic+Imagen.

Figure 6: We explore different forgetting strategies for the encoder. All learned decoder parameters are preserved. In the second to fourth columns, we preserve none of the encoder parameters, the encoder self attention and cross attention, and the encoder self attention and cross attention plus the entire encoder1 block, respectively, forgetting all the other parameters of the encoder.

### Conclusion

We present our novel Forgedit framework to tackle the challenging text-guided image editing problem. Besides the optimized vision-language joint learning for fast reconstruction of the original image, we also introduce the vector projection mechanism to strengthen Forgedit's capability of identity preservation during editing. Finally, we propose the forgetting strategy to efficiently solve the overfitting issue of the optimization-based model during sampling. Even with the outdated Stable Diffusion version 1.4, our Forgedit achieves new state-of-the-art CLIP score and LPIPS score on the most challenging editing benchmark, TEdBench. Forgedit can also be adapted to other fine-tuning based text-guided image editing methods, for example BLIP+DreamBooth. We demonstrate the generalization of Forgedit in the appendix. Theoretically, our Forgedit framework should also be compatible with other structures of Diffusion Models beyond Stable Diffusion and thus has the potential to obtain better editing results, which we will explore in the future.

Figure 7: comparison with SOTA text-guided image editing methods. We compare with the non-optimization method SDEdit and the optimization methods BLIP+DreamBooth and Imagic, demonstrating strong editing ability and stable identity preservation.
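For reference, the two metrics reported in Table 1 follow standard definitions: the CLIP Score is the cosine similarity between CLIP embeddings of the edited image and the target prompt, and LPIPS is the perceptual distance between the edited and original images. A rough sketch using a transformers CLIP checkpoint and the lpips package is given below; the exact evaluation protocol (image resolution, checkpoint choice, averaging over the benchmark) is an assumption and may differ from the one used for Table 1.

```python
import torch
import lpips
from PIL import Image
from torchvision import transforms
from transformers import CLIPModel, CLIPProcessor

clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
lpips_fn = lpips.LPIPS(net="alex")

def clip_score(image_path, target_prompt):
    # Cosine similarity between CLIP image and text embeddings.
    image = Image.open(image_path).convert("RGB")
    inputs = clip_proc(text=[target_prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img = clip_model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = clip_model.get_text_features(input_ids=inputs["input_ids"],
                                           attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

def lpips_distance(edited_path, original_path):
    # Perceptual distance; inputs are scaled to [-1, 1] as lpips expects.
    prep = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
    a = prep(Image.open(edited_path).convert("RGB")).unsqueeze(0) * 2 - 1
    b = prep(Image.open(original_path).convert("RGB")).unsqueeze(0) * 2 - 1
    with torch.no_grad():
        return float(lpips_fn(a, b))
```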
2309.06334
On the non-dissipative tidal evolution of the misalignment between spin and orbital angular momenta
We extend our previous work on the evolution of close binary systems with misaligned orbital and spin angular momenta resulting from non-dissipative tidal interaction to include all physical effects contributing to apsidal motion. In addition to tidal distortion of the primary by the compact secondary these include relativistic Einstein precession and the rotational distortion of the primary. The influence of the precession of the line of nodes is included. The dependence of the tidal torque on the apsidal angle $\hat\varpi$ couples the apsidal motion to the rate of evolution of the misalignment angle $\beta$ which is found to oscillate. We provide analytical estimates for the oscillation amplitude $\Delta\beta$ over a wide range of parameter space confirmed by numerical integrations. This is found to be more significant near critical curves on which $d{\hat \varpi } /dt=0$ for a specified $\beta$. We find that to obtain $0.1 < \Delta\beta < \sim 1,$ the mass ratio, $q > \sim1$ the initial eccentricity should be modest, $\cos \beta < 1/\sqrt{5},$ with $\cos\beta <0 $ corresponding to retrograde rotation, initially, and the primary rotation rate should be sufficiently large. The extended discussion of apsidal motion and its coupled evolution to the misalignment angle given here has potential applications to close binaries with anomalous apsidal motion as well as transiting exoplanets such as warm Jupiters.
P. B. Ivanov, J. C. B. Papaloizou
2023-09-12T15:51:03Z
http://arxiv.org/abs/2309.06334v1
# On the non-dissipative tidal evolution of the misalignment between spin and orbital angular momenta ###### Abstract We extend our previous work on the evolution of close binary systems with misaligned orbital and spin angular momenta resulting from non-dissipative tidal interaction to include all physical effects contributing to apsidal motion. In addition to tidal distortion of the primary by the compact secondary these include relativistic Einstein precession and the rotational distortion of the primary. The influence of the precession of the line of nodes is included. The dependence of the tidal torque on the apsidal angle \(\hat{\varpi}\) couples the apsidal motion to the rate of evolution of the misalignment angle \(\beta\) which is found to oscillate. We provide analytical estimates for the oscillation amplitude \(\Delta\beta\) over a wide range of parameter space confirmed by numerical integrations. This is found to be more significant near critical curves on which \(d\hat{\varpi}/dt=0\) for a specified \(\beta\). We find that to obtain \(0.1<\Delta\beta<\sim 1\),the mass ratio, \(q>\sim 1\) the initial eccentricity should be modest, \(\cos\beta<1/\sqrt{5}\), with \(\cos\beta<0\) corresponding to retrograde rotation, initially, and the primary rotation rate should be sufficiently large. The extended discussion of apsidal motion and its coupled evolution to the misalignment angle given here has potential applications to close binaries with anomalous apsidal motion as well as transiting exoplanets such as warm Jupiters. keywords: hydrodynamics - celestial mechanics - planetary systems: formation, planet -star interactions, stars: binaries: close, rotation, oscillations, solar-type Introduction In binary and exoplanetary system there could be a situation when rotational axis of a companion is inclined with respect to orbital plane. Recently, this possibility has received observational confirmation, see e.g. Albrecht et al (2009) for a discussion of this effect in case of binary system DI Herculis and Albrecht, Dawson & Winn (2022) and references there in for a discussion of exoplanetary systems with close-in planets on orbits inclined with respect to rotational axis of the parent star. In addition the two transiting warm Jupiters on eccentric orbits, TOI 5152b and TOI-5153b, could potentially exhibit such a misalignment (Ulmer-Moll et al. 2022). For sufficiently small separation of the components of a binary/exoplanetary system, tidal interaction may play a significant role in governing orbital evolution (see e.g. Ogilvie 2014; Barker 2020, for a general recent discussion). When there is a misalignment between the rotation axis and the orbital angular momentum tidal interactions are significantly modified in comparison to the more frequently studied aligned case, (see e.g. Eggleton et. al. 1998; Barker & Ogilvie 2009). A seminal theory of the quasi-static tidal interaction between gaseous objects on inclined orbits, valid for any value of the angle of inclination between the rotation axis and the orbital angular momentum was proposed by Eggleton et. al. (1998). Recently, Ivanov & Papaloizou (2021), hereafter IP, revised the theory of Eggleton et. al. (1998), incorporating Coriolis forces and a self-consistent treatment of energy dissipation based on first principles. This made use of a formalism previously applied to dynamics tides (see e.g. Ivanov & Papaloizou 2007; Ivanov et al. 2013). 
They avoided neglecting Coriolis forces as well as making any ad hoc assumptions on the character of the tidal interaction, and the energy dissipation rate, as was done in Eggleton et. al. (1998). IP found that qualitatively new effects arise from the consideration of Coriolis and inertial forces. Their scale is proportional to stellar rotation frequency \(\Omega_{r}\). A consequence is evolution of the inclination angle, \(\beta,\) together with the orbital angular momentum, in the regime in which energy is conserved (the non dissipative regime). In this regime both orbital and rotational energies and, accordingly, the orbital semi-major axis \(a,\) and \(\Omega_{r}\),are conserved. As discussed in IP the physical origin of such non-dissipative evolution is associated with the Coriolis and inertial forces generating a tidal response displacement that has an angular dependence differing from that of the tidal forcing which it would otherwise take. The effect can be regarded as acting in a similar way to the well-known Lidov-Kozai effect, but, in our case there is no need for the presence of a third body to cause the joint evolution of the orbital eccentricity and the orbital angular momentum. It is important to note that due to the inefficiency of dissipative processes operating in gaseous celestial bodies the corresponding characteristic time scales of evolution are typically very long compared to those associated with non dissipative evolution. Moreover, e.g. turbulent viscosity, which may lead to dissipation of quasi-static tides in many potentially interesting objects may be too weak to be important, see e.g. Duguid, Barker & Jones (2020) and references therein. For non-dissipative evolution the corresponding torque acting between the primary and orbit is proportional to \(\sin 2\hat{\varpi}\), where \(\hat{\varpi}\) is the angle characterising the orientation of the apsidal line of the orbit. Therefore, the characteristic time scale of evolution is in part determined by the rate of apsidal precession. This may have several different sources. IP considered the situation where tides are exerted only on component (the primary star), the secondary being compact. Also, they took into account only classical apsidal precession induced by tidal distortion. They found that the inclination angle exhibited periodic motions with period one half of the period of apsidal precession. The amplitude was determined by several factors, most importantly, \(\Omega_{r}\), \(a\), the orbital eccentricity \(e,\) the stellar moment of inertia \(I,\) and the mass ratio \(q\) between the secondary and primary. In this paper we generalise results of IP considering taking account of all expected contributions to apsidal motion for a binary of the type we consider. These include relativistic Einstein precession and effects arising from the flattening of the primary due to its rotation, see e.g. Barker & O'Connell (1975). It is important to note that the latter effect depends on the inclination angle \(\beta\) and may change sign. This dependence was used by Shakura (1985) to explain an unusual apsidal motion of DI Herculis and, later, was invoked to explain properties of AS Camelopardalis, see Pavlovski et al (2011). It is also important to note that the orientation of the apsidal angle is made with respect to the line of nodes which is also precessing, a feature that also depends on \(\beta.\) This will affect the rate of precession of the apsidal line that we require. 
Thus, when all these effects may play a significant role, a coupled evolution of the angles \(\beta\) and \(\hat{\varpi}\) is expected. We analyse in detail qualitative properties of the resulting dynamical system, which describes the evolution of \(\beta\) and \(\hat{\varpi},\) with the orbital eccentricity \(e\) being determined as a dependent quantity. We begin by providing conditions, under which any one process gives the dominant contribution to apsidal motion, going on to estimate a typical magnitude for the expected change to the inclination, \(\Delta\beta,\) in each case. We go on to consider the situation when the system, in the course of its evolution crosses a 'critical curve', in the parameter space of the problem, defined by the condition, that the total apsidal precession rate is zero for a particular value of \(\beta,\) namely \(\beta_{0}.\) The discussion given here is expected be useful for assessing the possibility of dramatic reductions or reversals in the direction of apsidal motion in close binary systems similar to DI Herculis. In this situation it is expected that amplitude of variation, \(\Delta\beta\) is much larger than for the previous case. We discuss in detail the properties of such 'critical curves' finding that one can only be crossed when \(\beta_{0}\) is relatively large and possibly corresponding to retrograde rotation such that \(\cos\beta_{0}<1/\sqrt{5}\). We study the evolution of \(\Delta\beta\) when the system evolves near a critical curve making the assumption that the magnitude of \(\Delta\beta\) is small. We show that it is formally governed by a simple pendulum equation. Is found that the system's behaviour changes drastically for such solutions. The apsidal angle changes periodically (librates), while variations of \(\Delta\beta\) can be large enough to lead to periodic changes in \(\beta\) corresponding to switching from prograde to retrograde rotation and back. We confirm our analysis by considering two numerical integrations and discuss the four conditions we found to be required in order to obtain \(\Delta\beta\) in the range \(0.1-1\). These were: 1) \(\Omega_{r}\) should be large enough, 2) the eccentricity should be moderately large, say, \(e\sim 0.5\), 3) the initial inclination, \(\beta_{0}\), should be large enough, 4) the mass ratio \(q\) should be order of unity or larger. The case of large mass ratio could, for example, be applicable to a tidally active planet with its rotational axis strongly inclined with respect to the orbital plane. In an accompanying paper (Ivanov & Papaloizou, 2023) a larger preliminary numerical survey of parameter space also provides some further confirmation of these conditions. The effects discussed in this paper could have several possible observational implications. The discussion of the processes contributing to apsidal motion incorporating the precession of the line of nodes as well as of the critical curves could be applicable to future studies of transiting exoplanets in orbits with significant eccentricity and misalignment (Ulmer-Moll et al., 2022). As already noted these effects may also be relevant to the light curves of eclipsing binaries such as DI Herculis (Shakura, 1985). In addition, significant changes in \(\beta\) may be possible in such systems. The effects studied here may also play a role when the system's evolution on longer dissipative time scales is considered. The structure of this paper is as follows. In Section 2 we introduce our basic notations and definitions. 
In Section 3 we discuss the basic equations governing the non-dissipative evolution of our system. In Section 4 we provide a qualitative analysis of it and estimate the variation of \(\Delta\beta\) under the assumption that a single process dominates the apsidal precession rate. In Section 5 we discuss the determination and properties of the 'critical curves' and in Section 6 we discuss solutions evolving close to a critical curve both analytically and numerically. Finally, in Section 7 we conclude by discussing the possible implications and extensions of this work. ## 2 Basic definitions and notation We consider a binary that consists of a primary star of mass \(M_{*}\) and radius \(R_{*}\) together with a point-like secondary star of mass \(M_{1}\). The orbit of the binary is assumed to be, in general, elliptic, with eccentricity \(e\) and semi-major axis \(a\). There are three dynamical frequencies that are significant for our purposes, a typical inverse dynamical time scale associated with the primary \(\Omega_{*}=\sqrt{GM_{*}/R_{*}^{3}}\), where \(G\) is gravitational constant, the mean motion \(n_{0}=\sqrt{G(M_{*}+M_{1})/a^{3}}\), and the rotation frequency of the primary star \(\Omega_{r}\). Is is also convenient to use the dimensionless semi-major axis \(\tilde{a}=a/R_{*}\), the ratio of the rotation frequency to the orbital mean motion \(\sigma~{}=~{}\Omega_{r}/n_{0}\), 1 and the mass ratio \(q=M_{1}/M_{*}\). Footnote 1: This deviates slightly from the notation of IP in which \(\sigma=\Omega_{r}/(\lambda n_{0})\) with \(\lambda\) being defined there. This quantity is not used in this paper. The orbital angular and stellar spin angular momentum vectors are \(\mathbf{L}\) and \(\mathbf{S}\), respectively, their sum \(\mathbf{J}=\mathbf{L}+\mathbf{S}\) defines the total angular momentum of the system, which is conserved in the course of orbital evolution. We define inclination angles \(\beta\), \(i\) and \(\delta\) as inclination angles between \(\mathbf{S}\) and \(\mathbf{L}\), \(\mathbf{L}\) and \(\mathbf{J}\) and \(\mathbf{S}\) and \(\mathbf{J}\), respectively, with their relative orientations chosen in such a way, that \(\delta=\beta-i\) (see also IP). We have the obvious relations following from the definition of these angles (see IP) \[\cos\beta=\frac{(\mathbf{L}\cdot\mathbf{S})}{LS}, \tag{1}\] where \(L\) and \(S\) are the magnitudes of \(\mathbf{L}\) and \(\mathbf{S}\), and we have \(S=I\Omega_{r}\), where \(I\) is primary's moment of inertia, and \(L=qM_{*}/(1+q)n_{0}a^{2}\sqrt{1-e^{2}}\). Furthermore \[\cos i=\frac{(\mathbf{J}\cdot\mathbf{L})}{JL},~{}~{}{\rm and}~{}~{}~{}\cos \delta=\frac{(\mathbf{J}\cdot\mathbf{S})}{JS}, \tag{2}\] where \(J\) is the magnitude of \(\mathbf{J}\). In addition, we also have \(~{}~{}2\mathbf{J}\cdot\mathbf{L}=J^{2}+L^{2}-S^{2}~{}~{}~{}{\rm and}~{}~{}~{}2 \mathbf{L}\cdot\mathbf{S}=J^{2}-L^{2}-S^{2}\) and, accordingly, the cosines of \(\beta\) and \(i\) are given by, \[\cos\beta=\frac{J^{2}-L^{2}-S^{2}}{2LS}~{}~{}~{}{\rm and}~{}~{}~{}\cos i=\frac {J^{2}+L^{2}-S^{2}}{2JL}. \tag{3}\] From the first of these we obtain \[\frac{J}{L}=\frac{1}{\sqrt{1-S^{2}\sin^{2}\beta/J^{2}}-S\cos\beta/J}. \tag{4}\] We can also express the sines of \(\beta\), \(i\) and \(\delta\) in terms of \(J\), \(L\) and \(S\), thus obtaining \[\sin\beta=\frac{\sqrt{(J^{2}-(L-S)^{2})((L+S)^{2}-J^{2})}}{2LS},\ \mathrm{ and}\] \[\sin i=\frac{\sqrt{(S^{2}-(J-L)^{2})((J+L)^{2}-S^{2})}}{2JL}. 
\tag{5}\] In addition, consideration of the angular momentum components perpendicular to \(\mathbf{J}\) and \(\mathbf{S}\) respectively gives \[\sin\delta=\frac{L}{S}\sin i=\frac{L}{J}\sin\beta. \tag{6}\] It is clear that the vectors \(\mathbf{L}\), \(\mathbf{S}\) and \(\mathbf{J}\) lie in the same plane. For our purposes, it is useful to introduce two orthonormal right-oriented triads of unit vectors, defining two Cartesian coordinate systems \((X,Y,Z)\) and \((X^{{}^{\prime}},Y^{{}^{\prime}},Z^{{}^{\prime}})\) in such a way that the \(Y\) and \(Y^{\prime}\) axes are collinear and lie in the direction perpendicular to this plane, while the \(Z\) and \(Z^{\prime}\) axes are directed along \(\mathbf{S}\) and \(\mathbf{L}\), respectively. From these definitions and the above discussion it follows that we can choose the first triad \(\mathbf{e}_{x},\mathbf{e}_{y},\mathbf{e}_{z}\) to be explicitly represented in the form \[\mathbf{e}_{x}=\frac{(\mathbf{s}\times\mathbf{j})\times\mathbf{s}}{\sin\delta}=\frac{\mathbf{j}-\cos\delta\mathbf{s}}{\sin\delta},\quad\mathbf{e}_{y}=\frac{\mathbf{s}\times\mathbf{j}}{\sin\delta},\quad\mathbf{e}_{z}=\mathbf{s}, \tag{7}\] where \(\mathbf{s}=\mathbf{S}/S\) and \(\mathbf{j}=\mathbf{J}/J\), while the second one, \(\mathbf{e}_{x^{\prime}},\mathbf{e}_{y^{\prime}},\mathbf{e}_{z^{\prime}}\), can be obtained from (7) by the substitution \(\delta\to i\) and \(\mathbf{s}\rightarrow\mathbf{l}\), where \(\mathbf{l}=\mathbf{L}/L\): \[\mathbf{e}_{x^{\prime}}=\frac{(\mathbf{l}\times\mathbf{j})\times\mathbf{l}}{\sin i}=\frac{\mathbf{j}-\cos i\,\mathbf{l}}{\sin i},\quad\mathbf{e}_{y^{\prime}}=\frac{\mathbf{l}\times\mathbf{j}}{\sin i},\quad\mathbf{e}_{z^{\prime}}=\mathbf{l}. \tag{8}\] Later on we refer to the coordinate frames defined with the help of (7) and (8) as the 'stellar' and 'orbital' frames, respectively.

## 3 Equations governing the non-dissipative tidal evolution of the inclination angle between the spin and orbital angular momentum vectors

In order to discuss the non-dissipative evolution of the inclination angles we need to relate the non-dissipative contribution to the tidal torque acting in the stellar frame, which was provided in IP, to the time derivatives of these angles. This can easily be done by differentiating \(\cos\delta=(\mathbf{j}\cdot\mathbf{s})\) with respect to time, taking into account that \(\mathbf{j}\) is conserved, and with the help of eq. (7) expressing \(\mathbf{j}\) in terms of \(\mathbf{e}_{x}\) in the resulting expression; thus we obtain \[\dot{\delta}=-\frac{\sin\delta(\mathbf{e}_{x}\cdot\dot{\mathbf{s}})+\cos\delta(\mathbf{s}\cdot\dot{\mathbf{s}})}{\sin\delta}. \tag{9}\] But \((\mathbf{s}\cdot\dot{\mathbf{s}})=0\), so we have \[\dot{\delta}=-\frac{T^{x}}{S}, \tag{10}\] where \(T^{x}=S(\mathbf{e}_{x}\cdot\dot{\mathbf{s}})=(\mathbf{e}_{x}\cdot\dot{\mathbf{S}})\equiv(\mathbf{T}\cdot\mathbf{e}_{x})\) is the component of the torque \(\mathbf{T}\) in the \(X\) direction acting on the star. Derivation of the evolution equation for the angle \(\beta\) proceeds in a similar way. We first differentiate equation (1) with respect to time. We then note that angular momentum conservation implies that \(\dot{\mathbf{L}}=-\dot{\mathbf{S}}\) and we use the fact that for non-dissipative evolution \((\mathbf{T}\cdot\mathbf{e}_{z})=0\), so that \(S\) is conserved (IP), and, accordingly, \((\mathbf{S}\cdot\dot{\mathbf{L}})=-(\mathbf{S}\cdot\dot{\mathbf{S}})=0\).
In this way we obtain \[\dot{\beta}=-\frac{1}{\sin\beta}\frac{(\mathbf{L}\cdot\dot{\mathbf{S}})}{L^{2 }}\left(\frac{L}{S}+\cos\beta\right). \tag{11}\] From eq.(7) we find in addition that, \(\sin\delta(\mathbf{e}_{x}\cdot\dot{\mathbf{S}})=(\mathbf{L}\cdot\dot{\mathbf{ S}})/J.\) We then use (6), thus obtaining \[\dot{\beta}=-\left(\frac{1}{S}+\frac{\cos\beta}{L}\right)T^{x} \tag{12}\] ( see also equation (17) of IP). An evolution equation for the angle \(i=\beta-\delta\) can be easily obtained from (10) and (12). In addition we have the conservation of the total angular momentum which yields \[L^{2}/(2S)+L\cos\beta=(J^{2}-S^{2})/(2S)=\mathrm{constant}, \tag{13}\] where we recall that \(S\) is constant. ### An explicit expression for \(T^{x}\) The torque component \(T^{x}\) requires an extensive analysis which is carried out in IP. The reader is referred there for details. Here we note that in the equilibrium tide approximation for a barotropic stellar model of the type we consider, \(T^{x}=0\). However, a non zero value is obtained when the induced acceleration and effective Coriolis force is included in the determination of the tidal response. This is carried out in Section 5 of IP with some discussion of the origin of a non zero value of \(T^{x}\) given in Section 5.4.2. The results are then used to obtain \(T^{x}\) in Section 6 and Appendix B of IP. ### Quasi-static and dynamical tides IP considered the density response, \(\rho_{n,k}\), to the perturbing potential \(\overline{U}=r^{2}\mathcal{A}_{n,k}Y_{2,n}(\theta,\phi),\) where the forcing frequency is \(\omega_{f}=kn_{o}+n\Omega_{r},\) with \(n\) being the azimuthal mode number and \(k\) an integer. For the definition of other quantities here and in the rest of this Section see IP. The associated displacement is \(\boldsymbol{\eta}=\boldsymbol{\xi}_{eq,n,k}+\boldsymbol{\xi}_{eq1,n,k}\), This is written as the sum of two parts, \(\boldsymbol{\xi}_{eq,n,k}\) identified as the equilibrium, or quasi-static, tidal displacement and \(\boldsymbol{\xi}_{eq1,n,k}\) which is the difference between the displacement and that quantity. The latter incorporates the dynamical tide. The quantity, \(\boldsymbol{\xi}_{eq,n,k}\), can be taken to be the displacement in the limit of zero forcing frequency. This together with \(\boldsymbol{\eta}\) was specified by equations (57) and (27) of IP to be in a purely spheroidal form. However, it is important to note that the analysis given Sections 5.2 and 5.3 of IP does not depend on this assumption. Furthermore the analysis, aimed at specifying the overlap integral, \(\int\rho_{n,k}^{{}^{\prime}*}r^{2}Y_{2,n}(\theta,\phi)dV,\) this being required in order to determine the tidal torque, can be undertaken while retaining \(\boldsymbol{\xi}_{eq1,n,k}\). One finds that equation (53) of IP specifying the overlap integral is retained but with modified definitions of the quantities, \(\beta_{*}\) and \(\Gamma\) defined in IP, the latter being neglected for the non dissipative evolution considered here. 
Hence, \(n\omega_{f}^{2}(\beta_{*}-1)\Omega_{r}\rightarrow\) \[-\omega_{f}^{2}\mathcal{R}\left(\frac{\int\mathrm{i}\rho\Omega_{r}\boldsymbol{ \xi}_{eq,n,k}^{*}\cdot(\mathbf{\hat{k}}\times\boldsymbol{\eta})dV}{\int\rho| \boldsymbol{\xi}_{eq,n,k}|^{2}dV}-\frac{\omega_{f}}{2}\left(\frac{\int\rho \boldsymbol{\xi}_{eq,n,k}^{*}\cdot\boldsymbol{\xi}_{eq1,n,k}dV}{\int\rho| \boldsymbol{\xi}_{eq,n,k}|^{2}dV}-\frac{\int\rho\boldsymbol{\xi}_{eq,n,k}^{*} \cdot\boldsymbol{D}_{NA}dV}{\omega_{f}^{2}\int\rho|\boldsymbol{\xi}_{eq,n,k}| ^{2}dV}\right)\right), \tag{14}\] where \(\mathcal{R}\) indicates the real part, \(\mathbf{D}_{NA}\) represents the dissipative terms in equation (35) of IP, and for completeness \(\Gamma\rightarrow\Gamma\int\rho|\boldsymbol{\eta}|^{2}dV/(\int\rho|\boldsymbol{ \xi}_{eq,n,k}|^{2}dV).\) IP then assume that \(\boldsymbol{\xi}_{eq1,n,k}\) can be neglected in comparison to \(\boldsymbol{\xi}_{eq,n,k}\) so that in addition \(\boldsymbol{\eta}\rightarrow\boldsymbol{\xi}_{eq,n,k}\) in (14), which becomes the same as equation (55) of IP, when non adiabatic effects which are assumed to be weak in comparison to conservative effects are neglected. IP discuss the evaluation of \(\beta_{*}\) in this case using the form of the equilibrium tide given by equation (57) of IP. This spheroidal form applies in the case of a non rotating spherical star and is such that \(\beta_{*}\) and \(\omega_{eq}\) are constants independent of \(n\). However, an alternative form for the equilibrium tide could be adopted with corresponding change to \(\boldsymbol{\xi}_{eq1,n,k}\). Then IP, as well as the discussion below, effectively make the approximation of adopting constant values for \(\beta_{*}\) and \(\omega_{eq}\) independent of \(n\). Note that Coriolis forces are not necessary to obtain a non zero value of \(\beta_{*}\). If they are neglected \(\beta_{*}=1\). However, neglecting \(\boldsymbol{\xi}_{eq1,n,k}\) and adopting equation (57) of IP neglects the possibility of resonances due to eg. inertial modes (see eg. Papaloizou & Ivanov, 2005; Ogilvie, 2014) or \(r\) modes (see eg. Papaloizou & Savonije, 2023) which are associated with the dynamical tide. But the latter resonances are highly localised in parameter space and accordingly unlikely to play a significant role. Note too that as only the density perturbation and associated overlap integral is required to obtain tidal torque, further identification of the form of the displacement is not needed. ### Equation governing the evolution of \(\beta\) Equations (90) and (92) of IP then specify \(T^{x}\) through \[T^{x}=-T_{*}\frac{3(2\beta_{*}+1)}{2}e^{2}(1-e^{2})^{3/2}\left(1+e^ {2}/6\right)\left(\frac{\Omega_{r}}{\omega_{eq}}\right)^{2}\sin\beta\sin 2 \hat{\varpi},\] \[\mathrm{where}\;\;T_{*}=\frac{3k_{2}q^{2}}{(1+q)}\left(\frac{R_{ *}}{a}\right)^{5}M_{*}n_{o}^{2}a^{2}(1-e^{2})^{-6}\;\;\mathrm{and} \tag{15}\] \(\beta_{*}\) is a constant of order unity ( see equation (55) of IP), \(\omega_{eq}\) differs from \(\Omega_{*}\) by numerical factor order of unity, \(k_{2}\) is the apsidal motion constant and \(\hat{\varpi}-\pi/2\) is the angle between the apsidal line and the \(Y\) axis which may be used to define the line of nodes. Then the angle between the apsidal line and the \(X^{{}^{\prime}}\) axis is \(\varpi=\hat{\varpi}-\pi\). 
Thus we have \[\dot{\beta}= \left(\frac{1}{S}+\frac{\cos\beta}{L}\right)T_{*}\frac{3(2\beta_ {*}+1)e^{2}(1-e^{2})^{3/2}}{2}\times\] \[\left(1+\frac{e^{2}}{6}\right)\left(\frac{\Omega_{r}}{\omega_{eq }}\right)^{2}\sin\beta\sin 2\hat{\varpi}. \tag{16}\] Provided that a dependence of \(\hat{\varpi}\) on time is specified equations (12), (13) and (16) together with the standard expression of \(L\) in terms of \(a\) and \(e\) form a complete set. We considered in IP the simplest case when apsidal precession determined by equilibrium tides is given by the classical expression \[\frac{d\hat{\varpi}}{dt}=\frac{d\varpi}{dt}=\frac{d\varpi_{T}}{dt}=\frac{15k_ {2}n_{0}M_{1}R_{*}^{5}}{M_{*}(a(1-e^{2}))^{5}}\left(1+\frac{3e^{2}}{2}+\frac{ e^{4}}{8}\right), \tag{17}\] (Sterne, 1939). In this paper we would like to consider a more complicated situation taking into account other potentially important sources of apsidal precession, namely, the Einstein precession and apsidal precession determined by rotational flattening of the primary (e.g. Barker & O'Connell, 1975; Shakura, 1985) In the latter case the apsidal precession rate depends on inclination of the stellar axis to the orbit, \(\beta\), which results in a much richer dynamics. We derive an expression for the apsidal precession rate due to rotational flattening in a form appropriate for our purposes from the results of Barker & O'Connell (1975) in Appendix A, see equation (A14). As seen from this expression there are two contributions, which have physically different origin. The former term is directly determined by gravitational perturbation of the Keplerian point-mass potential arising from the rotational distortion of the primary, causing apsidal precession. The nature of the second term proportional to \(\cos i\), is 'indirect' in the following sense. When the rotation axis of the star is inclined with respect to the orbit, interaction of the tidal potential with the misaligned axisymmetric density distribution of the rotationally flattened star leads to precession of this axis. This, in turn causes the orbital angular momentum vector to similarly precess in order to conserve total angular momentum. This makes the orbital frame non-inertial inducing corresponding Coriolis forces, which give rise to the additional apsidal precession of the orbit. Accordingly, adding all the contributions together we have \[\frac{d\hat{\varpi}}{dt}=\frac{d\varpi_{T}}{dt}+\frac{d\varpi_{E}}{dt}+\frac{d \varpi_{R}}{dt}+\frac{d\varpi_{NI}}{dt}, \tag{18}\] where \(d\varpi_{T}/dt\) is given by (17), \[\frac{d\varpi_{E}}{dt}=\frac{3GM_{*}(1+q)}{c^{2}a(1-e^{2})}n_{0}, \tag{19}\] is the standard expression for the Einstein relativistic apsidal precession, \(c\) is speed of light, and \(d\varpi_{R}/dt\) and \(d\varpi_{NI}/dt\) are given by the first and second contributions to the apsidal advance rate specified by eq. (A14). 
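As a quick numerical illustration of the relative size of these contributions, the sketch below evaluates the classical tidal rate (17) and the Einstein rate (19) for a representative system; the parameter values chosen (a solar-type primary with \(k_{2}=10^{-2}\), an equal-mass companion, \(a=10R_{*}\), \(e=0.5\)) are placeholders for illustration only, not fits to any particular binary.

```python
import numpy as np

G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m s^-1
M_sun, R_sun = 1.989e30, 6.957e8   # kg, m

def tidal_apsidal_rate(k2, n0, M1, M_star, R_star, a, e):
    """Classical tidal apsidal precession rate, eq. (17) [rad/s]."""
    return (15.0 * k2 * n0 * M1 * R_star**5 / (M_star * (a * (1.0 - e**2))**5)
            * (1.0 + 1.5 * e**2 + e**4 / 8.0))

def einstein_apsidal_rate(M_star, q, a, e, n0):
    """Einstein apsidal precession rate, eq. (19) [rad/s]."""
    return 3.0 * G * M_star * (1.0 + q) / (c**2 * a * (1.0 - e**2)) * n0

# Illustrative system (assumed values).
M_star, R_star, k2, q = 1.0 * M_sun, 1.0 * R_sun, 1.0e-2, 1.0
a, e = 10.0 * R_star, 0.5
M1 = q * M_star
n0 = np.sqrt(G * (M_star + M1) / a**3)     # mean motion

w_T = tidal_apsidal_rate(k2, n0, M1, M_star, R_star, a, e)
w_E = einstein_apsidal_rate(M_star, q, a, e, n0)
print(w_T, w_E, w_E / w_T)   # for this tight, comparable-mass orbit the tidal term dominates
```

Consistently with the estimate given in (32) below, for such a compact, comparable-mass system the tidal term dominates, whereas Einstein precession takes over at larger separations or for small companion masses.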
### Evolution equations in dimensionless form In order to simplify the discussion of the evolution equations we obtain a dimensionless form of equation (16)) by introducing a new'slow' time variable \(\tau=t/t_{*}\), where the time \(t_{*}\) defines the tidal apsidal precession timescale for a small eccentricity \(e\) and is given by \[t_{*}=\frac{\tilde{a}^{13/2}\Omega_{*}^{-1}}{15k_{2}q\sqrt{(1+q)}}, \tag{20}\] where we recall that \(\tilde{a}=a/R_{*}\), equation (16) thus leads to \[\frac{d\beta}{d\tau}=\left(\frac{\cos\beta}{\sqrt{(1-e^{2})}}+\frac{1}{\tilde {S}}\right)\tilde{T}(1-e^{2})^{3/2}\sin(\beta)\sin(2\hat{\varpi}), \tag{21}\] where \[\tilde{T}=\frac{3}{5}(1+q)\gamma_{*}\frac{e^{2}(1+e^{2}/6)}{(1-e^{2})^{6}} \tilde{a}^{-3}\sigma^{2}, \tag{22}\] and \[\tilde{S}=\frac{\tilde{I}(1+q)}{q}\tilde{a}^{-2}\sigma, \tag{23}\] where we recall that \(\sigma=\Omega_{r}/n_{0}\), in addition \(\omega_{*}=\omega_{eq}/\Omega_{*}\), \(\tilde{I}=I/(M_{*}R_{*}^{2})\), and \(\gamma_{*}=(2\beta_{*}+1)/(2\omega_{*}^{2})\) is a numerical factor order of unity. The dimensionless quantity \(\tilde{T}\) is related to the ratio of the torque \(T_{*}\) introduced in (15) and the orbital angular momentum and \(\tilde{S}/\sqrt{1-e^{2}}\) is the ratio of the spin and orbital angular momentum. In what follows we set \(\gamma_{*}=1\) and adopt \(\tilde{I}=0.1\). Eq. (18) together with (20) leads to the representation of the apsidal precession rate in terms of the dimensionless time, \(\tau\), in the form \[\frac{d\hat{\varpi}}{d\tau}=\frac{d\varpi_{T}}{d\tau}+\frac{d\varpi_{E}}{d \tau}+\frac{d\varpi_{R}}{d\tau}+\frac{d\varpi_{NI}}{d\tau}, \tag{24}\] where \[\frac{d\varpi_{T}}{d\tau}=\frac{(1+3e^{2}/2+e^{4}/8)}{(1-e^{2})^{5}}, \tag{25}\] \[\frac{d\varpi_{E}}{d\tau}\approx 4.3\times 10^{-5}\alpha_{E}\frac{(1+q)}{q}\frac{ \tilde{a}^{4}}{(1-e^{2})}, \tag{26}\] \[\frac{d\varpi_{R}}{d\tau}=\frac{(1+q)}{30}\frac{1}{q}\frac{(3\cos^{2}\beta-1) }{(1-e^{2})^{2}}\sigma^{2}, \tag{27}\] \[\mbox{and}\;\;\frac{d\varpi_{NI}}{d\tau}=\frac{1}{15\tilde{I}}\left(\frac{J}{ L}\right)\frac{\cos i\cos\beta}{(1-e^{2})^{3/2}}\sigma\tilde{a}^{2}, \tag{28}\] \[\mbox{with}\;\;\alpha_{E}=\left(\frac{M_{*}}{M_{\odot}}\right)\left(\frac{k_ {2}}{10^{-2}}\right)^{-1}\left(\frac{R_{*}}{R_{\odot}}\right)^{-1}. \tag{29}\] The dependence on \(\cos i\) can be removed by using the relation for the component of the total angular momentum in the direction of \(\mathbf{L}\) \[J\cos i=L+S\cos\beta=L(1+(1-e^{2})^{-1/2}\tilde{S}) \tag{30}\] substituting the above into (28) and making use of (23) we obtain \[\frac{d\varpi_{NI}}{d\tau}=\frac{\sigma\tilde{a}^{2}\cos\beta}{15\tilde{I}(1- e^{2})^{3/2}}+\frac{(1+q)\sigma^{2}\cos^{2}\beta}{15q(1-e^{2})^{2}}. \tag{31}\] Note that although the Einstein term (26) contains a small parameter, it dominates over the tidal contribution when either \(\tilde{a}\) is sufficiently large, or \(q\) is sufficiently small. Comparing (25) and (26) we find that the Einstein term dominates over the tidal one provided that \[\tilde{a}>\tilde{a}_{E}\approx 12\alpha_{E}^{-1/4}\left(\frac{q}{1+q}\right)^{ 1/4}\frac{(1+3e^{2}/2+e^{4}/8)^{1/4}}{(1-e^{2})}. \tag{32}\] The set of equations (20) and (24) also depend on the eccentricity \(e\). We recall that \(J,S,a,\) and accordingly \(\sigma\) and \(\tilde{S}\) are constant in non dissipative evolution (see e.g. 
IP). The eccentricity can be expressed in terms of the angle \(\beta\) using the first integral derived from the conservation of total angular momentum given by (13), which leads to the relation \[\mathcal{C}=\frac{(1-e^{2})}{2\tilde{S}}+\sqrt{1-e^{2}}\cos\beta, \tag{33}\] where \(\mathcal{C}\) is a constant (see Footnote 2). This may be chosen so that the system takes on the prescribed values \(\beta=\beta_{0}\) and \(e=e_{0}\) at \(\tau=0\). Thus \[\mathcal{C}=\frac{(1-e^{2}_{0})}{2\tilde{S}}+\sqrt{1-e^{2}_{0}}\cos\beta_{0}. \tag{34}\] Footnote 2: Note a misprint in the corresponding equation (116) of IP: the sign (-) on the r.h.s. should be (+), which leads to consistency with (33). Equation (21) together with equations (24)-(34) forms a complete set for determining the evolution as a function of \(\tau\). This is converted to time \(t=t_{*}\tau\) using (20). In particular, after specifying conserved quantities and making use of (25)-(34), equations (21) and (24) become a pair of first order ordinary differential equations for \(\beta\) and \(\hat{\varpi}\). These contain \(\tilde{a}\), \(\sigma\), \(\alpha_{E}\), \(q\) and \(\tilde{I}\) as fixed parameters.

#### 3.4.1 Allowed values of \(\tilde{a}\) and \(\sigma\)

Here we point out that in what follows \(\tilde{a}\) should not be too small, and \(\sigma\) should not be too large. Clearly the radius of periastron, \(r_{p}=(1-e)a\), should be larger than the stellar radius \(R_{*}\). Thus \(\tilde{a}\) should be larger than \(1/(1-e)\). Additionally, the radius of periastron cannot be smaller than the tidal disruption radius \(r_{T}=\left(M_{1}/M_{*}\right)^{1/3}\!R_{*}=q^{1/3}\!R_{*}\), this being larger than the stellar radius when \(q>1\). Combining the requirement that \(r_{p}\) should be larger than both \(R_{*}\) and \(r_{T}\) we have \[\tilde{a}>\tilde{a}_{min}=\frac{\max(1,q^{1/3})}{(1-e)}. \tag{35}\] In addition the rotational frequency \(\Omega_{r}\) should be significantly smaller than \(\Omega_{*}=\sqrt{GM_{*}/R_{*}^{3}}\), as when \(\Omega_{r}\sim\Omega_{*}\) the star experiences rotational break-up. Furthermore, for sufficiently large rotation rates the theory leading to our evolution equations is not applicable. Following Ivanov & Papaloizou (2007a) we shall assume that \(\Omega_{r}<0.5\Omega_{*}\). From this and given that \(\sigma=\Omega_{r}/n_{0}\) we obtain \[\sigma<\sigma_{max}=\frac{\tilde{a}^{3/2}}{2\sqrt{1+q}}. \tag{36}\]

#### 3.4.2 The rate of precession of the longitude of periapsis \(d\Pi/dt\)

We recall that (28), as given by (A14), is the contribution to the rate of advance of the line of apsides, measured with respect to the line of nodes, that arises from the precession of the line of nodes itself. It can also be written as \(-(d\Omega_{N}/dt)\cos i\), which is \((-)\) the component of the angular velocity associated with the precession of the line of nodes, \(d\Omega_{N}/dt\), in the direction of the orbital angular momentum. In order to remove this contribution when either \(|\cos\beta|=1\) or in the limit when the magnitude of the spin angular momentum is negligible compared to the orbital angular momentum, the longitude of periapsis, \(\Pi=\Omega_{N}+\hat{\varpi}-\pi/2\), is often used. When \(|\cos\beta|=1\) the precession is then measured with respect to a line fixed in an inertial frame. We have \(d\Pi/dt=d\hat{\varpi}/dt+d\Omega_{N}/dt\). Thus the transition from \(d\hat{\varpi}/dt\) to \(d\Pi/dt\) is obtained if \(\cos i\) is replaced by \(\cos i-1\) in (28).
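Equation (33) determines the eccentricity once \(\beta\) is known along the evolution. Writing \(x=\sqrt{1-e^{2}}\), it becomes the quadratic \(x^{2}+2\tilde{S}\cos\beta\,x-2\tilde{S}\mathcal{C}=0\), whose positive root gives \(e(\beta)\). A minimal sketch, with \(\tilde{S}\) and \(\mathcal{C}\) treated as the conserved inputs of (23) and (34) and an illustrative, assumed value of \(\tilde{S}\):

```python
import numpy as np

def conserved_C(e0, beta0, S_tilde):
    """Constant of eq. (34) from the initial eccentricity and inclination."""
    x0 = np.sqrt(1.0 - e0**2)
    return x0**2 / (2.0 * S_tilde) + x0 * np.cos(beta0)

def eccentricity_from_beta(beta, S_tilde, C):
    """Invert eq. (33): x^2/(2*S~) + x*cos(beta) = C with x = sqrt(1 - e^2)."""
    b = S_tilde * np.cos(beta)
    x = -b + np.sqrt(b**2 + 2.0 * S_tilde * C)   # positive root of the quadratic
    x = np.clip(x, 0.0, 1.0)                     # guard against rounding
    return np.sqrt(1.0 - x**2)

# Example: start at e0 = 0.5, beta0 = 2*pi/5 and ask what eccentricity
# corresponds to a slightly larger beta along the non-dissipative evolution.
S_tilde = 0.05                                   # assumed spin-to-orbit ratio parameter
C = conserved_C(0.5, 2 * np.pi / 5, S_tilde)
print(eccentricity_from_beta(2 * np.pi / 5 + 0.1, S_tilde, C))
```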
Making use of (4) and (30) one finds that, following the above prescription, (28) is modified to become \[\frac{d\varpi_{NI}}{d\tau}\rightarrow\frac{1}{15\tilde{I}}\frac{(\sqrt{1-S^{2}/J^{2}\sin^{2}\beta}-1)}{(\sqrt{1-S^{2}/J^{2}\sin^{2}\beta}-S\cos\beta/J)}\frac{\sigma\tilde{a}^{2}\cos\beta}{(1-e^{2})^{3/2}}. \tag{37}\] Notably, this vanishes in the limit of small \(S/J\), which is the expected situation when \(q\) is of order unity. Thus in this limit \(d\Pi/dt\) is obtained from \(d\hat{\varpi}/dt\) by simply omitting \(d\varpi_{NI}/dt\). However, no such simplification occurs for small \(q\), and it is important to note that \(\hat{\varpi}\), rather than \(\Pi\), is the significant angle where the evolution of \(\beta\) is concerned. Hence, hereafter we focus on this.

## 4 Discussion of the evolution equations

### A qualitative analysis of the evolution equations under the assumption that variations of \(\beta\) are small

#### 4.1.1 Determining the dominant form of apsidal precession

The behaviour of our system depends on the relative values of \(d\varpi_{T}/d\tau\), \(d\varpi_{E}/d\tau\), \(d\varpi_{R}/d\tau\) and \(d\varpi_{NI}/d\tau\). To estimate the importance of these terms, which contribute to the right hand side of equation (24), we set \((3\cos^{2}\beta-1)\) and \(\cos\beta\) to unity in equations (27) and (31), respectively. We then adopt the larger of the two terms on the right hand side of (31) to make estimates. In this way we obtain \[\frac{d\varpi_{NI}}{d\tau}\sim\max\left(\frac{\sigma\tilde{a}^{2}}{15\tilde{I}(1-e^{2})^{3/2}},2\frac{d\varpi_{R}}{d\tau}\right),\ \ \mathrm{and}\ \ \frac{d\varpi_{R}}{d\tau}\sim\frac{(1+q)\sigma^{2}}{30q(1-e^{2})^{2}}. \tag{38}\] It follows that either we have \(d\varpi_{NI}/d\tau\sim d\varpi_{NI}^{(1)}/d\tau\equiv\sigma\tilde{a}^{2}/(15\tilde{I}(1-e^{2})^{3/2})\), or both inertial and rotational terms have the same order of magnitude. In what follows we refer to the latter case as rotational-non-inertial and use \(d\varpi_{RNI}/d\tau\sim(1+q)\sigma^{2}/(15q(1-e^{2})^{2})\) for our estimates below.

#### 4.1.2 Values of \(\sigma\equiv\Omega_{r}/n_{0}\) separating regimes of tidal and non-inertial precession

Let us consider the situation when \(\tilde{a}<\tilde{a}_{E}\) and, accordingly, tidal precession is more important than Einstein precession. From the condition \(d\varpi_{RNI}/d\tau>d\varpi_{T}/d\tau\) we obtain the requirement that \(\sigma>\sigma_{1}\), where \[\sigma_{1}=\sqrt{\frac{15q(1+3e^{2}/2+e^{4}/8)}{(1+q)(1-e^{2})^{3}}}. \tag{39}\] Similarly, the condition that \(d\varpi_{RNI}/d\tau>d\varpi_{NI}^{(1)}/d\tau\) leads to the requirement \(\sigma>\sigma_{2}\), where \[\sigma_{2}=\frac{q}{(1+q)\tilde{I}}(1-e^{2})^{1/2}\tilde{a}^{2}. \tag{40}\] In addition, the condition \(d\varpi_{NI}^{(1)}/d\tau>d\varpi_{T}/d\tau\) leads to \(\sigma>\sigma_{3}\), where \[\sigma_{3}=15\tilde{I}\frac{(1+3e^{2}/2+e^{4}/8)}{(1-e^{2})^{7/2}}\tilde{a}^{-2}. \tag{41}\]

#### 4.1.3 Values of \(\sigma\equiv\Omega_{r}/n_{0}\) separating Einstein and non-inertial precession

When \(\tilde{a}>\tilde{a}_{E}\) and Einstein precession is more important than tidal precession, the condition \(d\varpi_{RNI}/d\tau>d\varpi_{E}/d\tau\) gives \(\sigma>\sigma_{4}\), where \[\sigma_{4}=2.5\times 10^{-2}\alpha_{E}^{1/2}(1-e^{2})^{1/2}\tilde{a}^{2}. \tag{42}\] In addition the condition \(d\varpi_{NI}^{(1)}/d\tau>d\varpi_{E}/d\tau\) yields \(\sigma>\sigma_{5}\), where \[\sigma_{5}=6.7\times 10^{-4}\alpha_{E}\tilde{I}\frac{(1+q)}{q}(1-e^{2})^{1/2}\tilde{a}^{2}.
\tag{43}\] From the above considerations we see that for fixed \(\tilde{I}\), \(q\) and \(e\) regions in the \((\tilde{a},\sigma)\) plane can be determined where one of \(d\varpi_{T}/d\tau\), \(d\varpi_{E}/d\tau,\)\(d\varpi_{NI}^{1}/d\tau,\) or \(d\varpi_{RNI}/d\tau\) dominates. We denote the largest of these at a point in the \((\tilde{a},\sigma)\) plane as \(\dot{\varphi}\). #### 4.1.4 Critical curves There is a possibility that the contribution of the different terms on the right hand side of (24) cancel each other in such a way that we have \(d\hat{\varpi}/d\tau\approx 0\). For a given set of values of \(\alpha_{E}\), \(q\)\(\tilde{I}\) and initial values of \(e\) and \(\beta\), namely \(e_{0}\) and \(\beta_{0}\), respectively, the condition \(d\hat{\varpi}/d\tau=0\) leads to an algebraic equation for a curve in the \((\sigma,\tilde{a})\) plane, referred hereafter to as a 'critical curve', which may or may not have physical solutions depending on the values of the parameters entering (24). An analysis of the evolution of our system near critical curves is discussed below in Section 5. #### 4.1.5 The variation of \(\beta\) in the different regimes of apsidal precession Away from a critical curve a characteristic amplitude of variation of \(\beta\) in the course of time, \(\Delta\beta\), can be estimated as \(\Delta\beta\sim\dot{\phi}^{-1}d\beta/d\tau\), where \(d\beta/d\tau\) is given by equation (21). For the purpose of making crude estimates we replace \(\cos\beta\) and \(\sin 2\hat{\varpi}\) by unity, and \(\sin\beta\) by \(\sin\beta_{0}\), thus obtaining \[\Delta\beta \sim \frac{3}{5}\frac{q}{\tilde{I}\dot{\phi}}\frac{e^{2}(1+e^{2}/6)}{( 1-e^{2})^{9/2}}\sigma\tilde{a}^{-1}{\sin\beta_{0}}\quad{\rm or}\] \[\Delta\beta \sim \frac{3}{5}\frac{(1+q)}{\dot{\phi}}\frac{e^{2}(1+e^{2}/6)}{(1-e^{2 })^{5}}\sigma^{2}\tilde{a}^{-3}{\sin\beta_{0}}, \tag{44}\] depending on whether the second term in brackets in (21) dominates the first or vice versa. From (16) and (21)-(23), we see that the former case corresponds to the orbital angular momentum being larger than the rotational angular momentum, being realised when \(\sigma<\sigma_{2}\). Substituting estimates of \(d\varpi_{T}/d\tau\), \(d\varpi_{E}/d\tau\), \(d\varpi_{NI}^{(1)}/d\tau\), or, \(d\varpi_{RNI}/d\tau\) for \(\dot{\phi}\) in (44), we can find a typical amplitude of variation of \(\beta\) in the four regions of the \((\tilde{a},\sigma)\) plane, where these terms respectively dominate. In the first of these regions where \(d\varpi_{T}/d\tau\) dominates equation (44) becomes3 Footnote 3: Note that the first expression in (45) corresponds to the ‘standard evolution’ considered in IP for which the apsidal precession is dominated by the tidal term and the orbital angular momentum is more significant. \[\Delta\beta \sim \frac{3}{5}\frac{q}{\tilde{I}}\frac{e^{2}(1+e^{2}/6)(1-e^{2})^{1/2} \sigma}{\left(1+3e^{2}/2+e^{4}/8\right)\tilde{a}}\mathrm{sin}\,\beta_{0},\ \ \mathrm{or}\] \[\Delta\beta \sim \frac{3(1+q)e^{2}(1+e^{2}/6)\sigma^{2}}{5\left(1+3e^{2}/2+e^{4}/8 \right)\tilde{a}^{3}}\mathrm{sin}\,\beta_{0} \tag{45}\] the first alternative applying for \(\sigma<\sigma_{2}\) and the second for \(\sigma>\sigma_{2}\). 
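The threshold frequencies (39)-(43), together with \(\tilde{a}_{E}\) from (32), are simple algebraic functions of \((q,e,\tilde{a})\), so the boundaries between these regimes can be evaluated directly. A short sketch is given below, assuming \(\tilde{I}=0.1\) and \(\alpha_{E}=1\) as adopted in the text; it reproduces only the order-of-magnitude estimates made here.

```python
import numpy as np

I_tilde, alpha_E = 0.1, 1.0

def f_e(e):
    # (1 + 3e^2/2 + e^4/8), common eccentricity factor
    return 1.0 + 1.5 * e**2 + e**4 / 8.0

def sigma_1(q, e):
    return np.sqrt(15.0 * q * f_e(e) / ((1.0 + q) * (1.0 - e**2)**3))        # eq. (39)

def sigma_2(q, e, a):
    return q / ((1.0 + q) * I_tilde) * np.sqrt(1.0 - e**2) * a**2            # eq. (40)

def sigma_3(e, a):
    return 15.0 * I_tilde * f_e(e) / (1.0 - e**2)**3.5 / a**2                # eq. (41)

def sigma_4(e, a):
    return 2.5e-2 * np.sqrt(alpha_E) * np.sqrt(1.0 - e**2) * a**2            # eq. (42)

def sigma_5(q, e, a):
    return 6.7e-4 * alpha_E * I_tilde * (1.0 + q) / q * np.sqrt(1.0 - e**2) * a**2  # eq. (43)

def a_E(q, e):
    """Semi-major axis above which Einstein precession dominates, eq. (32)."""
    return 12.0 * alpha_E**-0.25 * (q / (1.0 + q))**0.25 * f_e(e)**0.25 / (1.0 - e**2)

# Example: q = 1, e = 0.5, a~ = 10 (cf. the Fig. 1 parameters)
q, e, a = 1.0, 0.5, 10.0
print(a_E(q, e), sigma_1(q, e), sigma_2(q, e, a), sigma_3(e, a), sigma_4(e, a), sigma_5(q, e, a))
```

For \(q=1\) and \(e=0.5\) this gives \(\tilde{a}_{E}\approx 14.5\), consistent with the value quoted below for Fig. 1.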
Similarly, in the region dominated by Einstein precession equation (44) becomes \[\Delta\beta \sim 1.4\times 10^{4}\,\frac{\alpha_{E}^{-1}q^{2}e^{2}(1+e^{2}/6)\sigma}{\tilde{I}(1+q)(1-e^{2})^{7/2}\tilde{a}^{5}}\sin\beta_{0},\ \ \mathrm{or}\ \ \Delta\beta \sim 1.4\times 10^{4}\frac{\alpha_{E}^{-1}qe^{2}(1+e^{2}/6)\sigma^{2}}{(1-e^{2})^{4}\tilde{a}^{7}}\sin\beta_{0}, \tag{46}\] the first alternative applying for \(\sigma<\sigma_{2}\) and the second for \(\sigma>\sigma_{2}\). Finally, in the region where \(d\varpi_{NI}/d\tau\) dominates, which is always the case when \(\sigma\) is sufficiently large, we find, regardless of the magnitude of \(\sigma\) or which term in (38) dominates, that \[\Delta\beta\sim 9q\frac{e^{2}(1+e^{2}/6)}{(1-e^{2})^{3}}\tilde{a}^{-3}\sin\beta_{0}. \tag{47}\]

#### 4.1.6 Regimes of evolution as a function of \(\sigma\)

Let us consider how different regimes of evolution arise when \(\sigma\) increases and all other quantities entering the equation for the apsidal precession rate are kept fixed. Firstly consider the case \(\tilde{a}<\tilde{a}_{E}\), where precession due to tides is more important than Einstein precession. From equations (27-28) it follows that when \(\sigma\) is sufficiently small, that is less than the smaller of \(\sigma_{1}\) and \(\sigma_{3}\), the evolution will be dominated by tidal effects. On the other hand, when \(\tilde{a}>\tilde{a}_{E}\) and Einstein precession is more important than tidal precession, when \(\sigma\) is less than the smaller of \(\sigma_{4}\) and \(\sigma_{5}\) the evolution will be dominated by Einstein precession. When the evolution is dominated by either tidal or Einstein precession, the situation is referred to hereafter as 'the standard evolution regime'. When this is not the case we designate the situation as 'the rotational regime' for any value of \(\tilde{a}\).

#### 4.1.7 Estimated change in \(\beta\) in the different regimes when precession due to tidal effects is more important than Einstein precession

Let us consider the case \(\tilde{a}<\tilde{a}_{E}\) in more detail. It is easy to see from their definitions that if any two of the \(\sigma_{i}\), \(i=1,2,3,\) are equal then all of them are. Thus \(\sigma_{1}(\tilde{a}_{*})=\sigma_{2}(\tilde{a}_{*})=\sigma_{3}(\tilde{a}_{*})\) for any value \(\tilde{a}_{*}\) of \(\tilde{a}\) for which this occurs. In stating this we remark that \(\sigma_{1}\) does not in fact depend on \(\tilde{a}\). Note too that the parameters of the problem should be such that \(\tilde{a}_{*}\) exceeds \(\tilde{a}_{min}=1/(1-e)\) in order for this quantity to play a role. Equating \(\sigma_{1}\) and \(\sigma_{2}\) we get \[\tilde{a}\equiv\tilde{a}_{*}=\left(\frac{15(1+3e^{2}/2+e^{4}/8)(1+q)}{q}\right)^{1/4}\frac{\sqrt{\tilde{I}}}{(1-e^{2})}. \tag{48}\] As \(\sigma_{1}\) is independent of \(\tilde{a}\), \(\sigma_{2}\propto\tilde{a}^{2}\) and \(\sigma_{3}\propto\tilde{a}^{-2}\), we see that when \(\tilde{a}<\tilde{a}_{*}\) we have \(\sigma_{2}<\sigma_{1}<\sigma_{3}\). Thus the evolution is in the standard regime when \(\sigma<\sigma_{1}\) and in the rotational regime when \(\sigma>\sigma_{1}\). When \(\sigma<\sigma_{2}\) in the standard regime we should use the first expression for \(\Delta\beta\) in (45); otherwise the second is used. In the rotational regime (47) should be used. On the other hand, when \(\tilde{a}>\tilde{a}_{*}\), we have \(\sigma_{3}<\sigma_{1}<\sigma_{2}\). The evolution is then in the standard regime when \(\sigma<\sigma_{3}\).
As the orbital angular momentum is more important than the rotational angular momentum, the first expression in (45) should be used. When \(\sigma>\sigma_{3}\) the system is in the rotational regime and (47) applies. Equations (45)-(47) indicate that \(\Delta\beta\) increases with \(\sigma\) in the standard regime and does not depend on \(\sigma\) in the rotational regime.

#### 4.1.8 Estimated change in \(\beta\) in the different regimes when Einstein precession is more important than precession driven by tidal effects

When \(\tilde{a}>\tilde{a}_{E}\), from (40), (42) and (43) we see that \(\sigma_{2}\), \(\sigma_{4}\) and \(\sigma_{5}\) have the same dependence on \(\tilde{a}\), being \(\propto\tilde{a}^{2}\). Thus the condition for the non-inertial regime of evolution, \(\sigma_{2}/\sigma_{5}>1\), is the same for all \(\tilde{a}>\tilde{a}_{E}\). It becomes a condition for the mass ratio, \(q\), to be sufficiently large: \[\frac{q}{(1+q)}>2.5\cdot 10^{-2}\alpha_{E}^{1/2}\tilde{I}. \tag{49}\] When this condition is satisfied we have \(\tilde{a}_{*}<\tilde{a}_{E}\), and the evolution is in the standard regime when \(\sigma<\sigma_{5}\) and is rotationally dominated otherwise. Since (49) implies the orbital angular momentum exceeds the rotational angular momentum, for standard evolution we use the first expression in (46) for \(\Delta\beta\). In the rotationally dominated case (47) should be used. When the inequality (49) is reversed we obtain standard evolution when \(\sigma<\sigma_{4}\) and rotational evolution when \(\sigma>\sigma_{4}\). When \(\sigma<\sigma_{4}\) and \(\sigma<\sigma_{2}\), \(\Delta\beta\) is determined by the first expression in (46), and when \(\sigma_{2}<\sigma<\sigma_{4}\), \(\Delta\beta\) is determined by the second expression in (46). Finally, when \(\sigma>\sigma_{4}\), \(\Delta\beta\) should be evaluated using (47).

#### 4.1.9 Approximate boundaries of the regimes of evolution in the \((\sigma,\tilde{a})\) plane

It is important to note that in all cases when \(\sigma\) is large enough \(\Delta\beta\) is determined by (47). This gives the largest possible value of \(\Delta\beta\) for all \(\sigma\), for given \(q\), \(e\) and \(\tilde{a}\), provided that a single term dominates the apsidal precession rate given by equation (24). However, as mentioned above, there could be a situation where different terms in (24) compensate each other and the apsidal precession rate is close to zero. This situation is considered in the next Section. When \(\sigma<\sigma_{2}\), the orbital angular momentum is larger than the rotational angular momentum. Whether standard evolution or evolution in the rotational regime takes place is determined by the relation of \(\sigma\) to \(\sigma_{i}\), \(i=1,3,4,5\), according to whether \(\tilde{a}\) is larger or smaller than \(\tilde{a}_{E}\) and \(\tilde{a}_{*}\), and also on whether \(q\) is such that the inequality (49) is satisfied. Suppose first that this inequality is satisfied, so that \(\tilde{a}_{*}<\tilde{a}_{E}\), and consider the region \(\tilde{a}<\tilde{a}_{E}\) in which tidal precession dominates.
When \(\tilde{a}<\tilde{a}_{*}\) the border between the standard and rotational regimes is given by \(\sigma=\sigma_{1}.\) When \(\tilde{a}_{*}<\tilde{a}<\tilde{a}_{E}\) this border is given by \(\sigma=\sigma_{3}.\) When \(\tilde{a}>\tilde{a}_{E}\) and the inequality (49) is satisfied the border between the standard and rotational regime is given by \(\sigma=\sigma_{5}.\) When \(q\) is such that the inequality (49) is not satisfied, this border is given by \(\sigma=\sigma_{4}.\) These borders between standard and rotational regimes of evolution can be used to construct curves that separate regions where standard evolution occurs from those where rotationally dominant evolution occurs throughout allowed regions the \((\tilde{a},\sigma)\) plane for specified values of \(e\) and \(q.\) These are illustrated in Fig. 1 for \(q=1\), \(0.1\), \(10^{-2}\) and \(10^{-3}.\) For each of these cases \(e=0.5\), and \(\tilde{I}=0.1.\) Thus \(\tilde{a}_{min}=2\) throughout. When \(q=1,\)\(\tilde{a}_{*}=1\) and \(\tilde{a}_{E}=14.5.\) When \(q=0.1\), \(\tilde{a}_{*}=1.6\) and \(\tilde{a}_{E}=9.5.\) When \(q=10^{-2}\), \(\tilde{a}_{*}=2.8\) and \(\tilde{a}_{E}=5.5.\) When \(q=10^{-3}\), \(\tilde{a}_{*}=5\) and \(\tilde{a}_{E}=3.\) Note that when \(q=1\) or \(0.1,\)\(\tilde{a}_{*}<\tilde{a}_{min},\) and that only in the case with \(q=10^{-3}\) the inequality (49) is not satisfied. Finally, we recall that we set \((3\cos^{2}\beta-1)\) and \(\cos\beta\) to unity in equations (27) and (31) to obtain these borders. Given the form of these equations, this should provide a reasonable approximation for \(|\cos\beta|\) not too small. Polar orbits with \(\beta=\pi/2\) are discussed separately in Section 5.4 below. ## 5 Evolution near a critical curve on which \(d\tilde{\varpi}/d\tau=0\) For a particular set of the parameters entering eq. (24) \(d\varpi/d\tau=0\). For a given set of values of \(\alpha_{E}\), \(q\)\(\tilde{I}\) and some initial value of \(\beta\), \(\beta_{0}\), with corresponding initial eccentricity, \(e=e_{0}\) (see (34)), the condition \(d\varpi/d\tau=0\) can be represented as a curve \(\tilde{a}=\tilde{a}_{0}(\sigma)\). When \(\tilde{a}\) is close to \(\tilde{a}_{0}\) the rate of apsidal precession is small and variations of \(\beta\) are expected to be much larger than in the general case discussed above. The curve \(\tilde{a}=\tilde{a}_{0}(\sigma)\) is referred to hereafter as a critical curve. In this Section we analyse possible forms of critical curves and the variation of \(\beta\) when \(\tilde{a}\) is close to \(\tilde{a}_{0}(\sigma)\). ### Properties of critical curves Setting \(d\hat{\varpi}/d\tau=0\) in (24) results in biquadratic equation for \(\tilde{a}_{0}\) with the solutions \[\tilde{a}_{0}=\pm\left\{\frac{1}{2A}\left(-B\pm\sqrt{B^{2}-4AC}\right)^{1/2} \right\}^{1/2}, \tag{50}\] where \[A = \gamma_{E}\frac{(1+q)}{q(1-e_{0}^{2})},\ B=\frac{1}{15\tilde{I}} \frac{\sigma\cos\beta_{0}}{(1-e_{0}^{2})^{3/2}},\ \ \mathrm{and}\] \[C = \frac{(1+3e_{0}^{2}/2+e_{0}^{4}/8)}{(1-e_{0}^{2})^{5}}+\frac{(1+ q)(5\cos^{2}\beta_{0}-1)}{30q(1-e_{0}^{2})^{2}}\sigma^{2}, \tag{51}\] with \(\gamma_{E}=4.3\cdot 10^{-5}\alpha_{E}\). It is clear that that only solutions of (50) that are real and positive can be physically relevant. When simplifying expressions it is sometimes convenient to display the explicit dependence of the quantities \(A\), \(B\), and, \(4AC\), on \(\beta_{0}\) and \(\sigma\). 
Accordingly, we set \[B=b\sigma\cos\beta_{0},\ \ \mathrm{and}\ \ 4AC=d_{1}+d_{2}\sigma^{2}(5\cos^{2}\beta_{0}-1),\ \ \mathrm{where}\ \ b=\frac{1}{15\tilde{I}}\frac{1}{(1-e_{0}^{2})^{3/2}},\ \ d_{1}=\frac{4\gamma_{E}(1+q)}{q}\frac{(1+\frac{3e_{0}^{2}}{2}+\frac{e_{0}^{4}}{8})}{(1-e_{0}^{2})^{6}},\ \ \mathrm{and}\ \ d_{2}=\frac{2\gamma_{E}}{15}\frac{(1+q)^{2}}{q^{2}}\frac{1}{(1-e_{0}^{2})^{3}}. \tag{52}\] From their definitions it follows that \(b\), \(d_{1}\) and \(d_{2}\) are always positive. In terms of these quantities (50) gives 4 Footnote 4: The possible unphysical solution with negative \(\tilde{a}_{0}\) has been omitted. \[\tilde{a}_{0}=\left\{\frac{1}{2A}(-b\cos\beta_{0}\sigma\pm\sqrt{d})\right\}^{1/2},\ \ \mathrm{where}\ \ d=(b^{2}\cos^{2}\beta_{0}+d_{2}(1-5\cos^{2}\beta_{0}))\sigma^{2}-d_{1}. \tag{53}\]

### Prograde rotation

From (53) it is seen that when \(\beta_{0}<\pi/2\), \(-b\cos\beta_{0}\sigma<0\) and there can only be one branch corresponding to \((+)\) in (53). It is also necessary that \(d>b^{2}\cos^{2}\beta_{0}\sigma^{2}\) for the expression in the braces in (53) to be positive. Thus we require that \(d_{2}(1-5\cos^{2}\beta_{0})\sigma^{2}>d_{1}\). Accordingly, we can have physical solutions of (53) only when \[d_{1}<d_{2}\sigma^{2}\ \ {\rm and}\ \ \beta_{0}>\beta_{crit}=\cos^{-1}\sqrt{\frac{1}{5}-\frac{d_{1}}{5d_{2}\sigma^{2}}}. \tag{54}\] In addition, for a solution to be physically realisable, \(\tilde{a}_{0}\) should be larger than \(\tilde{a}_{min}\) and \(\sigma\) should be smaller than \(\sigma_{max}\). However, in our analysis below we formally assume that \(\tilde{a}_{0}\) and \(\sigma\) are not constrained by these physical conditions, and instead illustrate them graphically for a specified value of \(\beta_{0}\).

### Retrograde rotation

#### 5.3.1 The case \(-1/\sqrt{5}<\cos\beta_{0}<0\)

The condition that \(d>0\) results in \[\sigma>\sqrt{\frac{d_{1}}{b^{2}\cos^{2}\beta_{0}+d_{2}(1-5\cos^{2}\beta_{0})}}, \tag{57}\] noting that in this case the expression under the square root is always positive. From (56) we see that two realisable solutions exist when \[\sigma<\sqrt{\frac{d_{1}}{d_{2}(1-5\cos^{2}\beta_{0})}}. \tag{58}\] As the right hand side of (57) is always smaller than that of (58), there is always a region in the \((\tilde{a}_{0},\sigma)\) plane where two solutions are present subject to the physical constraints being met. In this region a special role is played by a value of \(\sigma=\sigma_{d}\), such that \(d\) is zero: \(d(\sigma_{d})=0\), found by turning the inequality (57) into an equality. At this value the branches for each solution merge. The corresponding dimensionless semi-major axis is \(\tilde{a}_{d}\). When \(\sigma\) increases from \(\sigma_{d}\) we have \(d>0\), and \(\tilde{a}_{0}\) is then larger or smaller than \(\tilde{a}_{d}\) depending on whether the \((+)\) or \((-)\) sign is adopted for the square root in equation (53). There are no solutions for \(\sigma<\sigma_{d}\) in this case.
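The roots given by (53) are easy to evaluate numerically. The short Python sketch below is an illustration added here, not part of the original analysis; it transcribes the coefficients of (51)-(52) directly, with \(\gamma_{E}=4.3\cdot 10^{-5}\alpha_{E}\). The function name and the default values of \(\tilde{I}\) and \(\alpha_{E}\) are arbitrary illustrative choices, and the physical constraints \(\tilde{a}_{0}>\tilde{a}_{min}\), \(\sigma<\sigma_{max}\) discussed above are not imposed.

```python
import numpy as np

def critical_a0(sigma, q, e0, beta0, I_tilde=0.1, alpha_E=1.0):
    """Real, positive roots a0 of the biquadratic A*a0**4 + B*a0**2 + C = 0,
    written in the form of eq. (53) with the coefficients b, d1, d2 of eq. (52)."""
    gamma_E = 4.3e-5 * alpha_E                       # definition following eq. (51)
    f = 1.0 - e0**2
    A = gamma_E * (1.0 + q) / (q * f)
    b = 1.0 / (15.0 * I_tilde * f**1.5)
    d1 = 4.0 * gamma_E * (1.0 + q) / q * (1.0 + 1.5 * e0**2 + e0**4 / 8.0) / f**6
    d2 = (2.0 * gamma_E / 15.0) * ((1.0 + q) / q)**2 / f**3
    c = np.cos(beta0)
    d = (b**2 * c**2 + d2 * (1.0 - 5.0 * c**2)) * sigma**2 - d1
    roots = []
    if d >= 0.0:
        for sign in (+1.0, -1.0):                    # the (+) and (-) branches of eq. (53)
            x = (-b * c * sigma + sign * np.sqrt(d)) / (2.0 * A)   # x = a0**2
            if x > 0.0:
                roots.append(np.sqrt(x))
    return roots

# Retrograde example with two branches: q = 1, e0 = 0.5, beta0 = 3*pi/5, sigma = 2
print(critical_a0(2.0, 1.0, 0.5, 3.0 * np.pi / 5.0))
```

For these example inputs the lower branch evaluates to \(\tilde{a}_{0}\approx 3\), consistent with the value \(\tilde{a}\approx 2.97\) quoted for the corresponding numerical case in Section 6.2 below, while the upper branch lies at much larger \(\tilde{a}_{0}\).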
Figure 1: We show the borders between the standard and rotational regimes represented by solid piecewise continuous curves for \(e=0.5\), \(\tilde{I}=0.1\), and various values of \(q\). Black, red, green and blue curves correspond to \(q=1\), \(0.1\), \(10^{-2}\) and \(10^{-3}\), respectively. But note that the first three of these curves have regions of overlap for \(\tilde{a}<\sim 10\) on segments where they are specified by \(\sigma=\sigma_{3}\), this quantity being independent of \(q\).

Figure 3: The left panel is as in Fig. 2 but for the retrograde case with \(\beta_{0}=4\pi/5\). In this case black solid, red dashed, green dot dashed and blue double dot dashed curves are for \(q=10\), \(1\), \(10^{-1}\) and \(10^{-2}\), respectively. The right panel is as in Fig. 2 but critical curves for polar orbits with \(\beta_{0}=\pi/2\) are shown. In this case black solid, red dashed, green dot dashed, blue double dot dashed and magenta dot double dashed curves are for \(q=10\), \(1\), \(10^{-1}\), \(10^{-2}\) and \(10^{-3}\), respectively.

Figure 2: The left panel shows critical curves for the prograde case with \(\beta_{0}=2\pi/5\) in the \((\sigma,\tilde{a}_{0})\) plane. The eccentricity \(e_{0}=0.5\) and \(\tilde{I}=0.1\). Black solid, red dashed, green dot dashed and blue dot dot dashed curves have \(q=10^{-2}\), \(10^{-3}\), \(10^{-4}\) and \(10^{-5}\), respectively. The black dotted curve illustrates the condition (A6) that the rotational frequency cannot be too large. See the text for additional description of particular curves.

Figure 4: As for the right panel of Fig. 3, but with eccentricity \(e_{0}=0.2\) (left panel) and \(e_{0}=0.9\) (right panel).

Figure 5: The characteristic amplitude of the variation of \(\beta\) defined in eq. (84), and evaluated along the critical curves as a function of \(\sigma\), is shown in the left panel for \(\beta_{0}=2\pi/5\). Curves with different line styles correspond to values of \(q\) in the same way as for the left panel of Fig. 2. The same quantity, but for \(\beta_{0}=3\pi/5\), is shown in the right panel. In this case the line styles associated with the different curves are related to \(q\) as in the right panel of Fig. 2. For both panels \(e_{0}=0.5\). As noted in Section 6.1.1 the sharp maxima that can be seen are unrealistic owing to the vanishing of \(\mathcal{D}\). In practice, as discussed there, the wings on either side should connect smoothly.

#### 5.3.2 The case \(\cos\beta_{0}<-1/\sqrt{5}\)

In this case the condition that \(d\) is positive is again given by (57), and the expression under the square root is positive for any \(\beta_{0}\) when \(b^{2}>5d_{2}\). When \(b^{2}<5d_{2}\) it is positive only when \[\cos^{2}\beta_{0}\leqslant\frac{d_{2}}{5d_{2}-b^{2}}. \tag{59}\] The condition (59) does not constrain possible values of \(\beta_{0}\) provided that the expression on the right hand side is larger than one. The latter requirement results in \[\frac{q^{2}}{(1+q)^{2}}>\frac{q_{crit}^{2}}{(1+q_{crit})^{2}}=120\gamma_{E}\tilde{I}^{2}. \tag{60}\] Noting the smallness of \(\tilde{I}\) and \(\gamma_{E}\), we neglect \(q_{crit}\) in the factor \((1+q_{crit})\) in (60), thus obtaining \[q_{crit}=7.2\cdot 10^{-2}\alpha_{E}^{1/2}\tilde{I}. \tag{61}\] In summary, when \(q<q_{crit}\) the value of \(\cos\beta_{0}\) should be larger than \(-\sqrt{d_{2}/(5d_{2}-b^{2})}\) for the existence of critical curves. From equation (56) it is seen that when they do exist there are always two solutions of (53).

### Polar orbits

Equation (50), yielding values of \(\tilde{a}_{0}\) on a critical curve, has a solution with a simple form when the stellar rotational axis lies in the orbital plane and, accordingly, \(\beta_{0}=\pi/2\).
In this case we have from (53) the single solution \[\tilde{a}_{0} =\left(\frac{\sigma^{2}(1+q)-30q(1+3e_{0}^{2}/2+e_{0}^{4}/8)(1-e_ {0}^{2})^{-3}}{30\gamma_{E}(1+q)(1-e_{0}^{2})}\right)^{1/4}\] \[\approx 5.3\left(\frac{\sigma^{2}(1+q)-30q(1+3e_{0}^{2}/2+e_{0}^{4}/8)(1 -e_{0}^{2})^{-3}}{\alpha_{E}(1+q)(1-e_{0}^{2})}\right)^{1/4} \tag{62}\] where we use the definitions of \(d_{1}\) and \(d_{2}\) given in (52), and, obviously, only values \(\tilde{a}_{0}>\tilde{a}_{min}\), as specified by (35), should be considered. From the condition \(\tilde{a}_{min}=\tilde{a}_{0}(\sigma_{min})\) we obtain the smallest allowed value of \(\sigma\) to be given by \[\sigma_{min}=\left(\frac{16(A\tilde{a}_{min})^{4}+d_{1}}{d_{2}}\right)^{1/2}. \tag{63}\] In the same way the largest allowed value of \(\tilde{a}_{0}\), \(\tilde{a}_{max}\) is obtained by substituting of (36) in (62), thus \(\tilde{a}_{max}=\tilde{a}_{0}(\sigma_{max})\). ### Graphical representation of critical curves We illustrate realisable critical curves in the \((\sigma,\tilde{a}_{0})\) plane in Figs. 2, 3, and 4 for different values of \(\beta_{0}\). The eccentricity, \(e_{0}\) is taken to be \(e_{0}=0.5\) in Figs. 2, and 3. In Fig. 4 we illustrate critical curves for polar orbits with \(\beta_{0}=\pi/2\). with \(e=0.2\) (left panel) and \(e=0.9\) (right panel), respectively. Curves with different line style are for different values of \(q\) except for dotted curves which always represent the limiting curve determined by equation (36), in which we have set \(q=1\). We remark that we consider values of \(q\leqslant 10\), and the difference in the maximum allowed \(\tilde{a}_{0}\), \(\tilde{a}_{lim}\), as determined from (36) is a factor of \(\sim 2\) or less. In this way we obtain \(\tilde{a}_{0}<\tilde{a}_{lim},\) where \(\tilde{a}_{lim}=(2\sigma)^{2/3}\), which is represented by dotted curves. It is implied, for a given \(\sigma\), that only values of \(\tilde{a}_{0}>a_{lim}\) should be taken into account. Additionally, we show only values of \(\tilde{a}_{0}\) larger than \(\tilde{a}_{min}\) given by equation (35). In the left panel of Fig. 2 we show critical curves on which \(\beta_{0}=2\pi/5\), which being \(<\pi/2\) corresponds to prograde rotation as discussed in Section 5.2. As explained there, there is only one branch of the curves in this case. Also, only rather small mass ratios are allowed as a result of the condition \(\tilde{a}_{0}>\tilde{a}_{lim}\). Black solid, red dashed, green dot dashed and blue dot dot dashed curves are for \(q=10^{-2}\), \(10^{-3}\), \(10^{-4}\) and \(10^{-5}\), respectively. As seen from these plots all curves are such that \(\tilde{a}_{0}\) grows monotonically with \(\sigma\) and, for a given \(\sigma\) larger values of \(\tilde{a}_{0}\) correspond to smaller mass ratios. In the right panel of Fig. 2 we illustrate critical curves with \(\beta_{0}=3\pi/5\), which has retrograde rotation and satisfies \(-1/\sqrt{5}<\cos(\beta_{0})<0\) as discussed in Section 5.3.1. Black solid, red dashed, green dot dashed, blue double dot dashed and magenta dot double dashed curves are for \(q=10\), \(1\), \(10^{-1}\), \(10^{-2}\) and \(10^{-3}\), respectively. As seen from these plots, the situation is quite different from the previous case. Apart from the case with \(q=10^{-3}\) there are two branches merging at \(\sigma=\sigma_{d}\), which is the smallest value of \(\sigma\) that can be realised on a critical curve with prescribed \(q\), \(e_{0}\) and \(\beta_{0}\). 
Also, contrary to the previous case, values of \(\tilde{a}_{0}\) for a given \(\sigma,\) belonging to the upper branch, are larger for larger values of \(q\), values of \(\tilde{a}_{0}\) corresponding to the upper (lower) branch increasing (decreasing) with \(\sigma\). In the left panel of Fig. 3 we illustrate critical curves for the retrograde case with \(\beta_{0}=4\pi/5\). In this case \(\cos(\beta_{0})<-1/\sqrt{5}\), a situation that is discussed in Section 5.3.2. Black solid, red dashed, green dot dashed and blue double dot dashed curves are for \(q=10\), \(1\), \(10^{-1}\) and \(10^{-2}\), respectively. This situation is similar to the previous retrograde case but the curve for \(q=10^{-3}\) is absent. This is because \(\sigma_{d}\) is larger than the largest value of \(\sigma\) shown, namely \(\sigma=100\). Since larger values of \(\sigma,\) for which tides are significant, are unlikely to be realised in an astrophysical context, we conclude that when \(\cos(\beta_{0})<-1/\sqrt{5}\) and the mass ratio is sufficiently small, finding a system evolving close to a critical curve is unlikely. Critical curves for polar orbits with \(\beta_{0}=\pi/2\) with \(e_{0}=0.5\) are shown in the right panel of Fig. 3. In addition, critical curves for \(\beta_{0}=\pi/2\) but with \(e_{0}=0.2\) and \(e_{0}=0.9\) are illustrated in the left and right panels of Fig. 4, respectively. Black solid, red dashed, green dot dashed, blue double dot dashed and magenta dot double dashed curves are for \(q=10\), \(1\), \(10^{-1}\), \(10^{-2}\) and \(10^{-3}\), respectively. The case \(e_{0}=0.5\) shown in the right panel of Fig. 3 can be compared to the previous cases which all have the same value of \(e_{0}\). As for the prograde case there is only one branch of a critical curve for a given \(q\), but they exist for larger values of \(q\) at sufficiently large values of \(\sigma\). In addition, curves with small mass ratios reach smaller values of \(\sigma\). In the opposite limit of large \(\sigma\) all curves have the same asymptote. As seen from Fig. 4, when \(e_{0}\) is smaller (larger) the range of allowed \(\sigma\) is shifted towards smaller (larger) values. Note, however, that in the case of large eccentricity it is more reasonable to compare the rotational frequency with a typical periastron passage frequency, which scales as \(n_{0}/(1-e)^{3/2}\).

### The condition \(d\Pi/dt=0\) and its limit when the ratio of spin angular momentum to orbital angular momentum is small

To obtain \(d\Pi/dt\) we equate it to the right hand side of equation (24) making use of equations (25)-(27) for \(d\varpi_{T}/dt,\) \(d\varpi_{E}/dt,\) and \(d\varpi_{R}/dt\) respectively, and equation (37) to specify \(d\varpi_{NI}/dt\). Although the angle \(\Pi\) is not directly involved in the evolution of \(\beta,\) it may be of interest in the context of observations of apsidal motion and when this reverses direction, which happens when \(d\Pi/dt\) passes through zero. Where this happens for some \(\beta_{0}\) and \(e_{0}\) can be determined in the same way as critical curves. To do this, from the discussion in Section 3.4.2 it follows that we should make the replacements \[\tilde{I}\rightarrow\tilde{I}\frac{(\sqrt{1-S^{2}/J^{2}\sin^{2}\beta}-S\cos\beta/J)}{\sqrt{1-S^{2}/J^{2}\sin^{2}\beta}-1},\ \ {\rm and}\ \ 5\cos^{2}\beta\to 3\cos^{2}\beta. \tag{64}\]
In particular this formulation is most useful in the limit \(S/J\to 0\) in which case \(B\) in equation (51) \(\rightarrow 0,\) and as a consequence \(d\Pi/dt\) passes through zero when \[\frac{\gamma_{E}(1+q)\tilde{a}_{0}^{4}}{q(1-e_{0}^{2})}+\frac{(1+\frac{3e_{0}^{2}}{2}+\frac{e_{0}^{4}}{8})}{(1-e_{0}^{2})^{5}}+\frac{(1+q)(3\cos^{2}\beta_{0}-1)\sigma^{2}}{30q(1-e_{0}^{2})^{2}}=0. \tag{65}\] In the case of polar orbits with \(\beta_{0}=\pi/2\) this clearly yields the same critical curve condition given in Section 5.4. We also note that as the polar orbit is the most favourable for reversing the sign of \(d\Pi/dt,\) equation (62) gives an upper bound on the values of \(\tilde{a}_{0},\) for a given \(\sigma\), for which this is possible.

### Relationship to fixed points and the evolution of \(\beta\)

Critical curves are such that on them \(d\hat{\varpi}/d\tau=0\). As the apsidal precession rate does not depend on \(\hat{\varpi}\) this is not required to define critical curves. However, if we insist that \(d\beta/d\tau=0\) in addition, we define a fixed point. For general \(\beta_{0}\) this requires \(\hat{\varpi}=0,\pi/2\) or \(3\pi/2\) (see eq. (16)). When \(\hat{\varpi}\) takes on one of these values, \(\beta\) remains fixed at the value \(\beta_{0}\). However, if a different value is specified then \(\beta\) will vary with time displaying an oscillatory motion. From (16) when \(T_{*}\) or \(e\) is small the amplitude of this motion will be small. But the changes in \(\beta\) will exceed those found well away from critical curves as described in Section 4.1.5. This will be discussed further below.

## 6 The evolution equations and the behaviour of solutions in the neighbourhood of a critical curve

Let us assume that at the moment of time \(t=0\), \(\beta=\beta_{0}\), \(e=e_{0}\), and the solution crosses the critical curve. At this time, by definition, \(d\hat{\varpi}/d\tau=0\) and we have from equations (24-31) \[\frac{d\varpi_{T}}{d\tau}+\frac{d\varpi_{E}}{d\tau}+\frac{(1+q)(5\cos^{2}\beta_{0}-1)\sigma^{2}}{30q(1-e_{0}^{2})^{2}}+\frac{\sigma\tilde{a}_{0}^{2}\cos\beta_{0}}{15\tilde{I}(1-e_{0}^{2})^{3/2}}=0. \tag{66}\] When the system evolves with time, the eccentricity changes. Using the expression of conservation of angular momentum given by (33) we can relate \(e\) to \(\beta,\beta_{0},\) and \(e_{0}.\) With the help of (34) this gives \[e^{2}-e_{0}^{2}=\frac{2\tilde{S}\sqrt{(1-e_{0}^{2})}(\cos\beta-\cos\beta_{0})}{1+2\tilde{S}\cos\beta/(\sqrt{1-e_{0}^{2}}+\sqrt{1-e^{2}})}\sim-\frac{2\tilde{S}\sqrt{(1-e_{0}^{2})}\sin\beta_{0}(\beta-\beta_{0})}{1+\tilde{S}\cos\beta_{0}(1-e_{0}^{2})^{-1/2}}, \tag{67}\] where the approximation on the right applies when \(\beta\) is close to \(\beta_{0}.\) Regarding \(d\hat{\varpi}/d\tau\) as a function of \(\beta\) and \(e^{2},\) as \(\beta_{0}\) and \(e_{0}^{2}\) correspond to a critical curve, we may write \[\frac{d\hat{\varpi}}{d\tau}\bigg|_{\beta,e^{2}}=\frac{d\hat{\varpi}}{d\tau}\bigg|_{\beta,e^{2}}-\frac{d\hat{\varpi}}{d\tau}\bigg|_{\beta_{0},e_{0}^{2}}, \tag{68}\] where by \(d\hat{\varpi}/d\tau|_{\beta_{0},e_{0}^{2}}\) we mean the left hand side of (66). This is \(d\hat{\varpi}/d\tau\) evaluated for \(\beta=\beta_{0},\) and \(e=e_{0}\) and of course it is equal to zero. Formal subtraction of this expression in (68) suggests the usefulness of a first order Taylor expansion.
This procedure is especially useful in the situation where we have small changes in \(\beta\) and the orbital angular momentum is approximately conserved and we have \(e\approx e_{0}.\) Equation (68) together with the equation for \(d\beta/d\tau\) given by equations (21) and (22) govern the evolution of the system. Dividing the second by the first, and then making use of (67) where necessary, leads to an equation of the generic form \[\mathcal{F}(\beta)\frac{d\beta}{d\hat{\varpi}}=\sin 2\hat{\varpi},\ \ \text{for an appropriate form of}\ \mathcal{F}(\beta). \tag{69}\] This yields on integration \[\mathcal{G}(\beta)=\cos 2\hat{\varpi}_{0}-2\int_{\beta_{0}}^{\beta}\mathcal{F}(\beta)d\beta=\cos 2\hat{\varpi}, \tag{70}\] where \(\hat{\varpi}_{0}\) is the initial value of \(\hat{\varpi}\) corresponding to \(\beta_{0}\). Then from (69) we obtain \[\frac{dy}{d\tau}=\pm\sqrt{1-y^{2}}\mathcal{F}^{-1}\sqrt{1-\mathcal{G}^{2}},\ \ \text{where}\ \ y=\sin\beta. \tag{71}\] From this it is expected that \(y\) is a periodic function of \(\tau\) oscillating between positive values such that one of the square roots vanishes. Given such a periodic solution, there is the possibility that \(\hat{\varpi}\) librates over a restricted domain if \[\oint_{period}\frac{d\hat{\varpi}}{d\tau}\bigg|_{\beta,e^{2}}d\tau=0, \tag{72}\] 5 or ultimately exploring all of \((0,2\pi)\) otherwise. Although (71) is soluble by quadratures, the integral is not expressible in terms of known functions. Accordingly, we limit studies to special cases to illustrate these generic features. Footnote 5: In this case \(\hat{\varpi}/n\) will librate over a restricted domain provided the integer \(n\) is large enough.

### The evolution when the variation of \(\beta\) is small

This would be expected to occur for example when \(\tilde{a}\) is large. In such a case \(\delta=\beta-\beta_{0}\) and \(e^{2}-e_{0}^{2}\) are small such that we may perform a first order Taylor expansion of the right hand side of (68). This gives \[\frac{d\hat{\varpi}}{d\tau}\bigg|_{\beta,e^{2}}=\left(\frac{\partial(d\hat{\varpi}/d\tau)}{\partial\beta}\bigg|_{\beta_{0},e_{0}^{2}}-\frac{\partial(d\hat{\varpi}/d\tau)}{\partial e^{2}}\bigg|_{\beta_{0},e_{0}^{2}}\frac{2\tilde{S}\sqrt{(1-e_{0}^{2})}\sin\beta_{0}}{(1+\tilde{S}\cos\beta_{0}(1-e_{0}^{2})^{-1/2})}\right)\delta=-\mathcal{D}\delta, \tag{73}\] where we have made use of (67).

Figure 6: The left panel is as for Fig. 5, but for \(\beta_{0}=4\pi/5\). In this case the line styles associated with the different curves are related to \(q\) as in the left panel of Fig. 3. The same quantity but for the case with \(\beta_{0}=\pi/2\) is illustrated in the right panel. The line styles associated with the different curves are as for the right panel of Fig. 3. For both panels \(e_{0}=0.5\).

Figure 7: The quantity \(|(\tilde{a}_{\mathcal{D}}(\sigma)-\tilde{a}_{0}(\sigma))/\tilde{a}_{0}(\sigma)|\) is plotted as a function of \(\sigma\). This quantity is evaluated for the critical curves illustrated in Fig. 2 for \(\beta_{0}=2\pi/5\). The line styles used are the same as those of Fig. 2 such that curves with the same line style correspond to each other.

Figure 8: The left panel shows the evolution of the inclination angle \(\beta\) as a function of dimensionless time \(\tau\). The time \(\tau=0\) corresponds to the system being on the critical curve. For these calculations, \(q=1\), \(e_{0}=0.5\) and \(\beta_{0}=3\pi/5\), and \(\sigma\approx 2\).
Curves of different line style correspond to different initial values of \(\dot{\varpi}\), namely \(\dot{\varpi}_{0}\), see the text for their description. The right panel shows the evolution of \(\dot{\varpi}-\dot{\varpi}_{0}\equiv\varpi-\varpi_{0}\) for these calculations. Curves with the same line style in each panel correspond to the same calculation. Figure 9: As for Fig. 8, but in the left panel we have the small mass ratio, \(q=10^{-4}\), and prograde stellar rotation with, \(\beta=2\pi/5\). In this case, \(\sigma\approx 5.31\), for which there is a sharp maximum on the curve representing the estimated amplitude \(\delta\) given by (84) ( see the left panel of Fig. 5 and the discussion in Section 6.1.1). The right panel shows the corresponding evolution of \(\dot{\varpi}-\dot{\varpi}_{0}\equiv\varpi-\varpi_{0}\). In this case \(\dot{\varpi}\) does not librate By appropriately differentiating the right hand side of equation (24) after making use of equations (25)-(27) and equation (31) we readily obtain a somewhat lengthy expression for \(\mathcal{D}\). This takes the form \[\mathcal{D}=\frac{(1+q)}{3q}\frac{\sigma^{2}\cos\beta_{0}\sin\beta _{0}}{(1-e_{0}^{2})^{2}}+\frac{\sigma\tilde{a}_{0}^{2}\sin\beta_{0}}{15\tilde{I} (1-e_{0}^{2})^{3/2}}+\] \[\frac{2\tilde{S}\sqrt{(1-e_{0}^{2})}\sin\beta_{0}}{(1+\tilde{S} \cos\beta_{0}(1-e_{0}^{2})^{-1/2})}\left(\frac{\gamma_{E}(1+q)\tilde{a}_{0}^{ 4}}{q(1-e_{0}^{2})^{2}}+\frac{52+50e_{0}^{2}+3e_{0}^{4}}{8(1-e_{0}^{2})^{6}}\right.\] \[\left.+\frac{(1+q)\sigma^{2}(5\cos^{2}\beta_{0}-1)}{15q(1-e_{0}^{ 2})^{3}}+\frac{\sigma\tilde{a}_{0}^{2}\cos\beta_{0}}{10\tilde{I}(1-e_{0}^{2}) ^{5/2}}\right) \tag{74}\] From equations (21) and (22) it follows that the evolution equation for the angle \(\beta\) can be represented in the form \[\frac{d\beta}{d\tau}=A_{\beta,e^{2}}\sin\beta\sin 2\hat{ \varpi},\ \ \mathrm{where}\] \[A_{\beta,e^{2}}=\frac{3qe^{2}(1+e^{2}/6)\sigma\tilde{a}_{0}^{-1 }}{5\tilde{I}(1-e^{2})^{9/2}}\left(1+\frac{(1+q)\cos\beta\tilde{I}\tilde{a}_{ 0}^{-2}\sigma}{q\sqrt{1-e^{2}}}\right). \tag{75}\] In the limit of small variation of \(\beta\), in (75) we set \(d\beta/d\tau=d\delta/d\tau\), \(\sin\beta=\sin\beta_{0}\), and \(A_{\beta,e^{2}}=A_{\beta_{0},e_{0}^{2}}\). The latter is a constant as \(\tilde{a}=\tilde{a}_{0}\) and \(\sigma\) are conserved. We note that when the spin angular momentum is much less than the orbital angular momentum as is expected for \(q\) of order unity \(A_{\beta_{0},e_{0}^{2}}\) is positive. However in the opposite case it can be negative. But, there is no restriction on \(\hat{\varpi}\) and we see that the system is invariant under the shift \(\hat{\varpi}\rightarrow\hat{\varpi}+\pi/2\) together with \(A_{\beta_{0},e_{0}^{2}}\rightarrow-A_{\beta_{0},e_{0}^{2}}\). Thus without loss of generality we may set \(A_{\beta_{0},e_{0}^{2}}\rightarrow|A_{\beta_{0},e_{0}^{2}}|\). We remark that when \(A_{\beta_{0},e_{0}^{2}}=0,\,\beta\) remains fixed at \(\beta_{0}\) while \(\hat{\varpi}\) is fixed at a value that can be chosen arbitrarily. We shall not consider this case further. For general \(\beta_{0}\) it is seen that \(\hat{\varpi}=\varpi_{j}=j\pi/2,j=0,1,2,3\) correspond to fixed points of the system which alternate between being stable and unstable. Note that a change from stability to instability and vice versa occurs when \(\mathcal{D}A_{\beta_{0},e_{0}^{2}}\sin\beta_{0}\cos 2\varpi_{j}\) changes sign. 
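Since the expressions (74) and (75) are lengthy, a direct transcription may help when evaluating them; the Python sketch below is such a transcription and is only an illustration added here, not part of the paper. The spin-to-orbital angular momentum parameter \(\tilde{S}\) appearing in (67) and (74), together with \(\tilde{I}\), \(\alpha_{E}\) and the critical-curve value \(\tilde{a}_{0}\), are treated simply as user-supplied inputs.

```python
import numpy as np

def D_and_A(sigma, q, e0, beta0, a0, I_tilde, S_tilde, alpha_E=1.0):
    """Direct transcription of eqs (74) and (75), evaluated at beta = beta0, e = e0."""
    gamma_E = 4.3e-5 * alpha_E
    f = 1.0 - e0**2
    cb, sb = np.cos(beta0), np.sin(beta0)
    # the bracketed factor in eq. (74)
    bracket = (gamma_E * (1.0 + q) * a0**4 / (q * f**2)
               + (52.0 + 50.0 * e0**2 + 3.0 * e0**4) / (8.0 * f**6)
               + (1.0 + q) * sigma**2 * (5.0 * cb**2 - 1.0) / (15.0 * q * f**3)
               + sigma * a0**2 * cb / (10.0 * I_tilde * f**2.5))
    D = ((1.0 + q) / (3.0 * q) * sigma**2 * cb * sb / f**2
         + sigma * a0**2 * sb / (15.0 * I_tilde * f**1.5)
         + 2.0 * S_tilde * np.sqrt(f) * sb / (1.0 + S_tilde * cb / np.sqrt(f)) * bracket)
    # eq. (75)
    A = (3.0 * q * e0**2 * (1.0 + e0**2 / 6.0) * sigma / (5.0 * I_tilde * a0 * f**4.5)
         * (1.0 + (1.0 + q) * cb * I_tilde * sigma / (q * a0**2 * np.sqrt(f))))
    return D, A
```

The ratio of the two returned quantities is the parameter \(b=\mathcal{D}/A_{\beta_{0},e_{0}^{2}}\) introduced in the rescaled equations that follow.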
It is convenient to introduce a new time variable, \(\tau_{1}=A_{\beta_{0},e_{0}^{2}}\tau\), and express (73) and (75) in the form \[\frac{d\hat{\varpi}}{d\tau_{1}}=-b\delta \tag{76}\] and \[\frac{d\delta}{d\tau_{1}}=\sin\beta_{0}\sin 2\hat{\varpi}, \tag{77}\] where \[b=\frac{\mathcal{D}}{A_{\beta_{0},e_{0}^{2}}}. \tag{78}\] Equations (76) and (77) are equivalent to a single second order differential equation for \(\hat{\varpi}\) as a function of time \[\frac{d^{2}\hat{\varpi}}{d\tau_{1}^{2}}=-b\sin\beta_{0}\sin(2\hat{\varpi}). \tag{79}\] It is convenient to rescale \(\tau_{1}\) and \(\delta\) to remove \(b\) and \(\sin\beta_{0}\) from (77) and (79) using the substitution \(\tilde{\delta}=\delta\sqrt{|b|/\sin\beta_{0}}\), \(\tilde{\tau}=\tau_{1}\sqrt{|b|\sin\beta_{0}}\) and bring (76), (77) and (79) to the form \[\frac{d\hat{\varpi}}{d\tilde{\tau}}=-{\rm sgn}(b)\tilde{\delta},\quad\frac{d\tilde{\delta}}{d\tilde{\tau}}=\sin 2\hat{\varpi},\quad\frac{d^{2}\hat{\varpi}}{d\tilde{\tau}^{2}}=-{\rm sgn}(b)\sin(2\hat{\varpi}). \tag{80}\] As noted above we can change the sign of \(\sin 2\hat{\varpi}\) on the right hand side of the last equation in (80) by making the shift \(\hat{\varpi}\rightarrow\hat{\varpi}+\pi/2\). Accordingly, we may take this sign to be negative without loss of generality. The last of eqns (80) is a standard pendulum equation. This can be easily integrated to give \[\left(\frac{d\hat{\varpi}}{d\tilde{\tau}}\right)^{2}-\cos(2\hat{\varpi})=C. \tag{81}\] For solutions that oscillate between \(|\sin\hat{\varpi}_{0}|\) and \(-|\sin\hat{\varpi}_{0}|\), \(C=-\cos(2\hat{\varpi}_{0})\). Taking \(\hat{\varpi}_{0}\) to be the initial value of \(\hat{\varpi}\) as stated above, 6 then at \(t=0\), both \(\tilde{\delta}\) and \(d\hat{\varpi}/d\tilde{\tau}\) are equal to zero. Here we remark that the solutions with libration have the constant \(C\) such that \(|C|<1\). Solutions with \(C>1\) are such that \(\hat{\varpi}\) circulates. The amplitude of variation of \(\delta\) is similar to that of solutions librating with large amplitude when \(C\) slightly exceeds unity, but it decreases as \(C\) increases ultimately leading to values expected from the discussion of Sections 4.1.7 and 4.1.8. Footnote 6: As the system is autonomous, for solutions with libration we may choose a libration limit to be \(\hat{\varpi}_{0}\) and a time at which this occurs to be \(t=0\) without loss of generality. For solutions undergoing libration, the solution of (81) is brought into standard form by the substitution \(y=\sin\hat{\varpi}/|\sin\hat{\varpi}_{0}|\). Then, an implicit solution can be expressed in terms of an incomplete elliptic integral of the first kind \[\tilde{\tau}=\frac{1}{\sqrt{2}}\int_{y}^{1}\frac{dy^{\prime}}{\sqrt{(1-y^{\prime 2})(1-k^{2}y^{\prime 2})}},\ \ {\rm where}\ \ k=|\sin\hat{\varpi}_{0}|. \tag{82}\] When \(\sin\hat{\varpi}_{0}>0,\) equation (82) describes the solution as \(y\) oscillates between \(1\) and \(-1\). It subsequently retraces this, moving between \(-1\) and \(1\), thereafter being periodic in \(\tilde{\tau}\) with period \(2\sqrt{2}K(k)\), where \(K(k)\) is the complete elliptic integral of the first kind. This evolution also applies when \(\sin\hat{\varpi}_{0}<0\). Though in this case the solution starts with \(y=-1\) and to describe the initial phase the sign of the integral in (82) is reversed. From the analysis made above we can deduce a number of important consequences.
Namely, the motion is periodic, with the period in time \(t=t_{*}\tau_{1}\) being equal to \[P_{lib}=2\sqrt{2}t_{*}A_{\beta_{0},e_{0}^{2}}^{-1}(\sin\beta_{0}|b|)^{-1/2}K(k), \tag{83}\] and with a typical amplitude of variation of \(\delta\) given by \[|\delta/\tilde{\delta}|\sim\sqrt{\sin\beta_{0}/|b|}=\sqrt{A_{\beta_{0},e_{0}^{2}}\sin\beta_{0}/\mathcal{D}}. \tag{84}\] The angle \(\hat{\varpi}\) librates around zero 7. The amplitude of libration of \(\hat{\varpi}\) is \(|\hat{\varpi}_{0}|\) with \(\tilde{\delta}\) expected to be of order unity when this quantity is of order unity. Accordingly, we set \(\tilde{\delta}=1\) when using (84) to make estimates. Footnote 7: As the system is invariant to shifting \(\hat{\varpi}_{0}\) by a multiple of \(\pi\), the libration centre may also be shifted in this way.

#### 6.1.1 The amplitude of the variation in \(\beta\) as a function of parameters of the problem

We represent the amplitude of \(\delta=\beta-\beta_{0},\) given by equation (84) and evaluated on critical curves, in Figs. 5 and 6 with input parameters the same as those adopted in Figs. 2 and 3, respectively. In particular, \(e_{0}=0.5\) in all cases. In what follows we simply denote this amplitude by \(\delta.\) The left panel of Fig. 5 illustrates the prograde case with \(\beta_{0}=2\pi/5,\) where only curves corresponding to small mass ratios, \(q,\) in the range \(10^{-5}-10^{-2}\) are shown. The right panel of Fig. 5 and the left panel of Fig. 6 illustrate the retrograde cases with \(\beta_{0}=3\pi/5\) and \(\beta_{0}=4\pi/5\) respectively. The polar case with \(\beta_{0}=\pi/2\) is illustrated in the right panel of Fig. 6. One can see from Figs. 5 and 6 that in general \(\delta\) is smaller than unity and, therefore, the assumption of the smallness of \(\delta\) made for our analytical work is justified for most allowed parameters. However, there are two possible exceptions. Firstly, \(\delta\) can be of order one when rotation is retrograde and \(q\) is sufficiently large, \(q\sim 1-10,\) see the regions of the solid and dashed curves in the right panel of Fig. 5 and the left panel of Fig. 6 for \(\sigma<\sim 1.\) This corresponds to the lower branches of the corresponding critical curves as they approach \(\tilde{a}_{min}\) defined in eq. (35) and, accordingly, the orbital periastron distance approaches the larger of the stellar radius or the tidal disruption radius. The second situation occurs when the mass ratio is small and, for some particular value of \(\sigma\), \(\delta\) has a sharp maximum (see e.g. the dot dashed and dot double dashed curves in the left panel of Fig. 5). This happens when the quantity \(\mathcal{D}\) defined in eq. (74) is zero for a prescribed value of \(\sigma.\) We illustrate this effect in Fig. 7, where we plot the absolute value of \((\tilde{a}_{\mathcal{D}}(\sigma)-\tilde{a}_{0}(\sigma))/\tilde{a}_{0}(\sigma)\), where \(\tilde{a}_{\mathcal{D}}(\sigma)\) is defined by the condition \(\mathcal{D}(\tilde{a}_{\mathcal{D}},\sigma)=0.\) We note that here we do not display the dependence of \(\mathcal{D},\) and \(\tilde{a}_{\mathcal{D}}\) on quantities other than \(\sigma\) as these are fixed. The curves plotted correspond to the prograde case illustrated in Figs 2 and 5. We see that the values of \(\sigma\) for which the sharp maxima occur in the former Figure correspond to the sharp minima in the latter. However, these sharp maxima are unrealistic because the variation of \(\mathcal{D}\) with \(\delta\) has not been taken into account.
Where \(\mathcal{D}=0,\) the right hand side of (73) should be replaced by \(-(\partial\mathcal{D}(\beta_{0})/\partial\beta_{0})\delta^{2}/2\). Here we recall that \(e_{0}^{2}\) is a function of \(\beta_{0}\) through (34). Following this, (76) should be replaced by \[\frac{d\widehat{\varpi}}{d\tau_{1}}=-b^{\prime}\delta^{2},\ \ {\rm where}\ \ b^{\prime}=\frac{1}{2A_{\beta_{0},e_{0}^{2}}}\frac{\partial\mathcal{D}}{\partial\beta_{0}}. \tag{85}\] From (77) and (85) it is straightforward to obtain an estimate for the magnitude of \(\delta\) given by \(\delta\sim|\sin\beta_{0}/b^{\prime}|^{1/3}\). Thus, extreme maxima do not occur and the wings on each side should connect smoothly as has been verified numerically (see below). In fact the values of \(|\delta|\) may dip if the magnitude of the derivative of \(\mathcal{D}\) is large. Note that there could also be a situation where \(\delta\) sharply tends to zero, see the blue dot dot dashed curve in the right panel of Fig. 5. This happens when the quantity in braces in the expression (75) for \(A_{\beta_{0},e_{0}^{2}}\) is equal to zero for a particular value of \(\sigma\).

### Numerical verification of analytic estimates

We present numerical solutions of equations (21) and (24) in Figs. 8 and 9. We use the differential equation found by differentiating eq. (33) with respect to time to provide another equation enabling the determination of the evolution of the eccentricity. We assume that initial values of the parameters obtained by solving the evolution equations are such that the system is initially on a critical curve. We consider two typical cases, where large variations of \(\beta\) are expected from the discussion in Section 6.1.1. The first case has \(q=1\) and retrograde rotation with \(\beta_{0}=3\pi/5\). In addition \(e_{0}=0.5\), \(\sigma\approx 2\), and \(\tilde{a}\approx 2.97\). It is illustrated in Fig. 8. The critical curve is illustrated by the red dashed curve in the right panel of Fig. 2. The initial values of the run belong to the lower branch where the rotational frequency is close to its maximum value. As seen from the corresponding curve in the right panel of Fig. 5, the analytic theory predicts the amplitude of variations of \(\beta\), as given by equation (84), to be \(\sim 0.5\). In Fig. 8 solid, dashed, dot dashed and dotted curves are for different initial values of \(\widehat{\varpi}\), namely \(\widehat{\varpi}_{0}=\pi/4\), \(\pi/3\), \(3\pi/4\) and \(5\pi/6\), respectively. As seen from Fig. 8, both angles \(\beta\) and \(\widehat{\varpi}\) exhibit periodic motion, with the characteristic amplitude of variations of \(\beta\) being \(\sim 0.5\) as expected. However, this quantity depends on \(\widehat{\varpi}_{0}\). Also, variations of \(\beta\) with respect to \(\beta_{0}\) are asymmetric, being larger for values of \(\beta<\beta_{0}\). On the other hand, the system spends somewhat longer periods of time with \(\beta>\beta_{0}\). The difference \(\widehat{\varpi}-\widehat{\varpi}_{0}\) is negative when \(\widehat{\varpi}_{0}<\pi/2\) and positive otherwise. Interestingly, this case illustrates the possibility of having the evolution of the system causing it to oscillate between prograde and retrograde states.
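The analytic picture near a critical curve can also be checked against a direct integration of the reduced system (80). The Python sketch below is an illustration added here (the calculations reported in this Section instead solve the full equations (21) and (24)); it integrates the pendulum form with \(\mathrm{sgn}(b)=+1\) and compares the numerically measured libration period with \(2\sqrt{2}K(k)\), \(k=|\sin\hat{\varpi}_{0}|\), from eq. (83), expressed in units of \(\tilde{\tau}\).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipk

def rhs(t, y):
    # y = [varpi_hat, delta_tilde]; reduced system (80) with sgn(b) = +1
    varpi, delta = y
    return [-delta, np.sin(2.0 * varpi)]

varpi0 = np.pi / 4                           # libration limit (initial apsidal angle)
k = abs(np.sin(varpi0))
P_lib = 2.0 * np.sqrt(2.0) * ellipk(k**2)    # eq. (83) in units of tau_tilde

sol = solve_ivp(rhs, (0.0, 3.0 * P_lib), [varpi0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 3.0 * P_lib, 3000)
varpi = sol.sol(t)[0]
# crude numerical period: spacing between successive interior maxima of varpi_hat
maxima = t[1:-1][(varpi[1:-1] > varpi[:-2]) & (varpi[1:-1] > varpi[2:])]
print(P_lib, np.diff(maxima))
```

For \(\hat{\varpi}_{0}=\pi/4\) the measured spacing between maxima agrees closely with the elliptic-integral value, and \(\hat{\varpi}\) librates between \(\pm\hat{\varpi}_{0}\) while \(\tilde{\delta}\) oscillates about zero, in line with the behaviour described above.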
The second case we consider investigates the possibility of having a sharp resonance-like increase of the amplitude, \(\delta,\) in the low mass case near a point on the critical curve where \(\mathcal{D}(\sigma)=0.\) To illustrate this possibility we consider a calculation with \(q=10^{-4}\), \(e_{0}=0.5\), \(\beta_{0}=2\pi/5,\) \(\sigma\approx 5.31\) and \(\tilde{a}\approx 11.04\). The results are presented in Fig. 9 with line styles as in the previous Figure. As seen from the green dot dashed curve in the left panel of Fig. 5, when the parameters are chosen in this way, even though the tidal interaction is relatively weak, \(\delta\) is expected to be relatively large. The numerical results shown in Fig. 9 confirm this prediction, giving a typical amplitude of variations of \(\beta\) of order one per cent. Though significant, in accordance with expectations from the discussion in Section 6.1.1, the large and extremely localised maximum seen in the left panel of Fig. 5 is absent. Note that, unlike the previous case, the dependence of the evolution of \(\beta\) and \(\hat{\varpi}\) on \(\hat{\varpi}_{0}\) is practically absent. Also, the variation of \(\beta\) relative to \(\beta_{0}\) is symmetric.

## 7 Conclusions and Discussion

In this work we have developed and generalised results reported in IP concerning the non-dissipative tidal evolution of the inclination angle between the stellar and orbital angular momenta, \(\beta\). This is applicable to a binary system with a stellar primary and a compact perturbing companion. The evolution of \(\beta\) is coupled to the rate of precession of the orbital line of apsides measured with respect to the line of nodes which itself precesses. IP considered only the classical contribution to apsidal motion due to tidal distortion (Sterne 1939). The extension to include the effects due to rotational distortion and Einstein precession, as well as the precession of the line of nodes mentioned above, for arbitrary orbital eccentricity was discussed in Sections 3 - 3.4.2, with technical details supplied in appendix A. Section 3 also included a brief review of the equation governing the evolution of \(\beta\) derived in IP. This evolution is a qualitatively new effect, arising from the symmetry breaking of the tidal bulge and associated gravitational field by rotation, leading to the appearance of a non-dissipative torque acting between the primary star and orbit. Unlike the one leading to the usual precessional dynamics, this is directed in the plane containing the angular momentum vectors and so can change \(\beta\). Being non-dissipative, both the orbital and rotational energies are conserved, while conservation of angular momentum relates changes in \(\beta\) to changes in the eccentricity (see Section 3.4). In this paper we provided an extensive analytic treatment of this dynamics. We remark that this evolution occurs even when one of the binary components is point-like, which makes it qualitatively different to that associated with the usual precessional dynamics driven by stellar flattening. This could also lead to changes in \(\beta\), but only when both components are subject to the action of tidal forces, see e.g. Philippov & Rafikov (2013) and references therein. In Section 4 we considered the situation when only one physical source of apsidal precession dominates over the others and provided estimates for the expected typical change of the inclination angle \(\beta\), \(\Delta\beta\).
We also found conditions on the parameters defining the system for one of these effects to dominate. We found that, when all properties of the system are fixed apart from the ratio of the rotation frequency of the primary to the orbital mean motion, \(\sigma=\Omega_{r}/n_{0}\), \(\Delta\beta\) approaches its maximal value when \(\sigma\) increases sufficiently. This value is given by equation (47). Estimates of \(\Delta\beta\) which are valid for smaller values of \(\sigma\) when different sources of apsidal motion dominate are given by eqns. (45) and (46). In Sections 5 and 6 we went on to consider the situation when a solution of our dynamical system crosses a so-called 'critical curve', on which the total apsidal precession rate is zero, for a given value of the inclination angle, \(\beta_{0}\). In this case we expect that the variation of \(\beta\) in the vicinity will be much larger than was found in Section 4 where it was assumed that one process dominated. In Section 5 we provided an extensive analysis of the properties of critical curves finding that their existence is possible only when \(\cos\beta_{0}<1/\sqrt{5}\). In Sections 6 - 6.2 we studied solutions of our dynamical system in the vicinity of these curves. We employed an analytic approach, under the simplifying assumption that \(\Delta\beta\) is small, in Sections 6 - 6.1.1. In addition, we performed direct numerical solutions of the dynamical equations describing our system for two representative cases, one with \(q=1\) and one with \(q=10^{-4},\) with the object of confirming our analytic estimates in Section 6.2. Solutions in the vicinity of critical curves are periodic; they demonstrate several unusual features. Namely, unlike for the standard situation the apsidal angle can change periodically (librate), while for a strong enough interaction, \(\Delta\beta\) could be large enough to change the rotation of the primary from being prograde to retrograde and vice versa. In an accompanying paper Ivanov & Papaloizou (2023) we provide, as an addition to the studies described here, a preliminary numerical analysis of the parameter space of the problem for mass ratios \(q=10^{-3}\), \(q=1\) and \(q=10^{3}\) and eccentricity \(e_{0}=0.5\). We find that when \(q=10^{-3}\), \(\Delta\beta\) is always small and the regime of critical curve crossing is not found. However, when \(q=1\) or \(q=10^{3},\) the latter case being expected to correspond to a primary of planetary mass orbiting a compact object of stellar mass, large variations of \(\Delta\beta\) and the existence of a critical curve crossing regime are found for a large range of the parameters of the problem considered, provided that \(\sigma\) is large enough, and the initial values \(\beta_{0}\) are larger than \(\sim 1\). Clearly, it is important to extend our results to the case of two tidally interacting components of a binary system as well as take into account the possible role of other perturbing bodies. Also, as mentioned in IP, the contribution of the toroidal component of the displacement to the tidal response as well as dynamic tides could be important. We intend to consider these issues within the framework of formalisms developed by us in previous work (see e.g. Papaloizou & Ivanov, 2005; Ivanov & Papaloizou, 2007; Chernov, Ivanov & Papaloizou, 2017).
### Potential applications

Finally, the processes leading to apsidal motion and variability of the inclination between orbital and spin angular momentum vectors discussed here could have applications to observations of eclipsing binaries such as DI Herculis (e.g. Shakura, 1985; Albrecht et al, 2009) or transiting exoplanets on misaligned orbits (see e.g. Albrecht, Dawson & Winn, 2022). Furthermore, potentially misaligned hot and warm Jupiters can be in orbits with significant eccentricity (Ulmer-Moll et al., 2022). A particular example is HD 80606 (see e.g. Winn et al., 2009) which has \(q\sim 4\times 10^{-3}\) and \(e=0.93\). In connection with such exoplanet systems we note the following. As discussed above, expected changes of \(\beta\) become quite small when \(q\) is small. Thus, while apsidal motion could potentially be reversed or libration occur, changes to the orbital inclination will be small when the stellar rotational axis is inclined with respect to the orbital plane. However, in the opposite limit \(q\gg 1\), where it is assumed that tides operating in the exoplanet are more important than those acting on the central star, expected values of \(\Delta\beta\) are of the same order as in the case with \(q\sim 1\), see Ivanov & Papaloizou (2023) for corresponding numerical examples. Thus, when a planet rotates sufficiently fast with its spin and orbital angular momentum misaligned, there may be a sizeable variation of \(\beta\) with a small corresponding variation of the inclination of the orbital plane with respect to the line of sight on account of the conservation of angular momentum. Whether this possibility can be used to study rotational states of exoplanets for realistic parameters requires further study.

### The close binary DI Herculis

To provide an illustration of the expected variation of the inclination angle we consider DI Herculis. The parameters of this system are given in Table 1 of Philippov & Rafikov (2013). It consists of two stars with masses \(M_{1,2}=2.68\) and \(2.48M_{\odot}\), radii \(R_{1,2}=5.15\) and \(4.25R_{\odot}\), with rotation velocities at their surfaces \(v_{1,2}=\Omega_{r,1,2}R_{1,2}=122\) and \(118\) km/s. The orbital period \(P_{orb}\approx 10\) d and eccentricity \(e\approx 0.5\). The observed apsidal precession rate for this system \(\dot{\varpi}_{DI}\approx 7\cdot 10^{-10}s^{-1}\) is a factor of two smaller than expected when invoking only the contribution of Einstein precession. This is explained by the contribution of rotationally induced terms (see e.g. Shakura, 1985). DI Herculis is of course a system with two stars of comparable densities, where the stellar spin axes can evolve by the standard mechanism associated with the interaction of the two oblate, separately precessing, stars as analysed in e.g. Philippov & Rafikov (2013). Moreover, our analysis above is not strictly speaking valid for such a system since we have assumed that one binary component is point-like. However, to make a crude estimate let us consider the secondary star as our primary star, since according to Philippov & Rafikov (2013) it has inclination \(\beta_{2}\approx\pi/2\), while noting that the other star has \(\beta_{1}\) within two standard deviations of \(\pi/2\). Adopting this assumption we find that our dimensionless semi-major axis and angular velocity can be estimated as \(\tilde{a}\approx 13.5\) and \(\sigma\approx 10\), respectively. Using these data we estimate \(\dot{\phi}\) entering equation (44) as \(\dot{\phi}\approx 200\).
Also, adopting \(\tilde{I}=0.1\) we easily see that the total angular momentum is mainly determined by its orbital part and, accordingly, the first expression in (44) should be used. This analysis leads to \(\Delta\beta_{1}\approx 3\cdot 10^{-2}\) and the corresponding evolution timescale \(\pi/\dot{\varpi}_{DI}\approx 150\) yr. This is much smaller than that given by the standard mechanism, which gives \(\Delta\beta_{1,2}\sim O(1)\) for the parameters they adopted, on a comparable timescale, see Fig. 6 of Philippov & Rafikov (2013)8. Footnote 8: Note that the angles \(\beta\) defined in Philippov & Rafikov (2013) are approximately the same as our \(\beta\) only for systems viewed edge-on, as e.g. DI Herculis. However, this analysis neglects the possibility of evolution near a critical curve and the libration of \(\beta_{1}\), which might be expected for this system, since apsidal precessional frequencies of different physical origin and sign are expected to be comparable. Then \(\Delta\beta_{1}\) may be significantly larger than the above estimate. However, an accurate treatment of this possibility requires the extension of our formalism to the case of two bodies of comparable densities, which is beyond the scope of the present paper. In addition, the phenomena considered here could also potentially modify orbital evolution on the longer time scale associated with dissipative tidal evolution. This is a problem for future work.

## Acknowledgments

PBI was supported in part by the grant 075-15-2020-780 'Theoretical and experimental studies of the formation and evolution of extrasolar planetary systems and characteristics of exoplanets' of the Ministry of Science and Higher Education of the Russian Federation.

## 8 Data Availability

There are no new data associated with this article.
2303.18151
Microscopic calculation of the pinning energy of a vortex in the inner crust of a neutron star
The structure of a vortex in the inner crust of a pulsar is calculated microscopically in the Wigner-Seitz cell approximation, simulating the conditions of the inner crust of a cold, non-accreting neutron star, in which a lattice of nuclei coexists with a sea of superfluid neutrons. The calculation is based on the axially deformed Hartree-Fock-Bogolyubov framework, using effective interactions. The present work extends and improves previous studies in four ways: i) it allows for the axial deformation of protons induced by the large deformation of neutrons due to the appearance of vortices; ii) it includes the effect of Coulomb exchange; iii) considers the possible effects of the screening of the pairing interaction; and iv) it improves the numerical treatment. We also demonstrate that the binding energy of the nucleus-vortex system can be used as a proxy to the pinning energy of a vortex and discuss in which conditions this applies. From our results, we can estimate the mesoscopic pinning forces per unit length acting on vortices. We obtain values ranging between $10^{14}$ to $10^{16}$ dyn/cm, consistent with previous findings.
P. Klausner, F. Barranco, P. M. Pizzochero, X. Roca-Maza, E. Vigezzi
2023-03-31T15:38:51Z
http://arxiv.org/abs/2303.18151v1
# Microscopic calculation of the pinning energy of a vortex in the inner crust of a neutron star ###### Abstract The structure of a vortex in the inner crust of a pulsar is calculated microscopically in the Wigner-Seitz cell approximation, simulating the conditions of the inner crust of a cold, non-accreting neutron star, in which a lattice of nuclei coexists with a sea of superfluid neutrons. The calculation is based on the axially deformed Hartree-Fock-Bogolyubov framework, using effective interactions. The present work extends and improves previous studies in four ways: i) it allows for the axial deformation of protons induced by the large deformation of neutrons due to the appearance of vortices; ii) it includes the effect of Coulomb exchange; iii) considers the possible effects of the screening of the pairing interaction; and iv) it improves the numerical treatment. We also demonstrate that the binding energy of the nucleus-vortex system can be used as a proxy to the pinning energy of a vortex and discuss in which conditions this applies. From our results, we can estimate the mesoscopic pinning forces per unit length acting on vortices. We obtain values ranging between \(10^{14}\) to \(10^{16}\) dyn/cm, consistent with previous findings. ## I Introduction Pulsars are characterized by the regular emission of electromagnetic radiation. These stars spin down steadily, but sudden spin-ups, called glitches, have been observed. Such events were recorded first in the Vela pulsar and subsequently in many other stars (see [1] for a statistical study of the properties of glitches observed in 141 stars). Soon after the first observations, it was proposed that the glitch phenomenon was closely associated with the existence of a neutron superfluid in the interior of the star [2], see [3; 4] for a review. According to the current theoretical understanding of neutron star structure, the layer extending from a density of about \(10^{-3}\) fm\({}^{-3}\) to \(0.04\) fm\({}^{-3}\), called the inner crust, is composed of a lattice of heavy nuclei immersed in a sea of free neutrons and electrons [5; 6]. Negele and Vautherin carried out a seminal study [7] within the Wigner-Seitz approximation. They determined the optimal radius of a spherical cell with a nucleus at its center, the number of protons and of neutrons bound to the nucleus and the number of unbound neutrons, as a function of the neutron density at the edges of the cell. Their results have been refined and extended in many subsequent works, see [8; 9; 10; 11; 12; 13; 14] and references therein. Moreover, given the typical range of temperature expected in the inner crust of mature neutron stars (from \(10^{7}\) to \(10^{9}\) K, that is from 1 to 100 keV, a very low value with respect to the Fermi energy ranging from 10 to 100 MeV), neutrons are likely to be superfluid [15]. Due to the rotation of the star, the superfluid neutrons form a (possibly disordered) array of quantum vortices [16], whose average density is closely linked to the pulsar angular velocity via a generalization of the so-called Feynman-Onsager relation [17]. Anderson and Itoh [18] proposed that the interaction between the heavy nuclei at the lattice sites and the vortices can anchor the vortices in particularly energetically favourable positions, a phenomenon referred to as "pinning". If this is the case, the superfluid component cannot follow the regular slowdown of the crust and rotates faster, becoming a reservoir of angular momentum. 
This gives rise to hydrodynamical lift forces (Magnus forces), which act on the vortex lines and tend to push them away from their sites. The glitch phenomenon would then occur when Magnus forces take over and a catastrophically large number of vortices suddenly unpin from their positions, releasing their angular momentum to the crust. There are still some unanswered questions regarding several central aspects of this model. First of all, the trigger which leads to the collective vortex unpinning is not well established yet; there are several possibilities advanced in the literature, like vortex avalanches [18; 19] or hydrodynamical instabilities [20; 21]. Secondly, it has been pointed out that the angular momentum contained in the crust may not be sufficient [22; 23] to explain glitches, albeit this conclusion is less clear if the statistical uncertainty on the observed glitch activity [24] or the possible presence of lattice defects [25] are taken into account. Finally, there is no definitive answer on the strength of the pinning interaction throughout the inner crust. The greater the ability of pinning to withstand the hydrodynamical lift, the higher the amount of angular momentum that the superfluid can store, so that it is possible to constrain the unpinning threshold (i.e. the theoretical upper limit of the distribution of pinning forces [26]) with observations of large glitches [17]. The microscopic computation of the single-nucleus pinning potential is very challenging and has never been performed in the literature. In fact, existing studies resorted to the pinning energy [27; 28; 29], defined as the energy difference between two extreme situations: one where the vortex is on top of the nucleus (nuclear pinning), and one where the vortex is equidistant between two adjacent nuclei in the lattice (interstitial pinning). A negative (positive) value of this quantity indicates that the former (latter) situation is energetically favourable. Different methods have been used to estimate the single-nucleus pinning potential. Epstein and Baym [29] used hydrodynamic considerations in combination with the Ginzburg-Landau theory of superfluidity to compute the free energy of a nucleus as a function of the distance from a vortex line, ignoring the internal structure of the nucleus and using instead schematic expressions for the kinetic and condensation energies. They found that vortices pin on nuclei in the deeper layers of the inner crust, while they are repelled in the low-density regions. The model by Epstein and Baym was later improved [30], providing estimates of pinning energies obtained by making use of a semiclassical treatment based on the Local Density Approximation [27]. The first microscopical quantum calculation was then carried out by Avogadro _et al._[28; 31], based on the solution of the axially symmetric Hartree-Fock-Bogoliubov (HFB) equations in the Wigner-Seitz approximation for various densities in the crust. Specifically, it was found that the nuclear shell structure has relevant effects on the spatial configuration of the vortex and that pinning occurs only in the less dense regions of the inner crust. The solution of the HFB equations was carried out assuming spherical symmetry for the proton density, thus breaking self-consistency. In the present paper, we remove this assumption, which was based on the fact that proton orbitals are deeply bound. 
Furthermore, we include the effect of the Coulomb exchange, which was previously neglected, and improve the numerical treatment, devoting particular attention to the convergence of our results. We are then able to present new and more reliable values of the binding energy and, based on them, we present our best estimation of the pinning energy. We also show detailed results for neutron and proton deformation at different densities. We also study the dependence of our results on the strength of the pairing interaction, in keeping with the analysis carried out in [27]. Due to the fact that hydrodynamics is non-linear, the pinning potential is not immediately related to the pinning "landscape" that defines the dynamics of a finite-size vortex segment [26]. We then estimate the typical strength of the pinning landscape by taking the mean value of the pinning force for unit length acting on a vortex line [32], see also the discussion in [26]. Other recent efforts, based on a microscopic quantal picture, have also been made. The most significant advance concerns a three-dimensional dynamical simulation of the vortex motion, based on the time-dependent superfluid local density approximation (TDSLDA), leading to an estimate of the force between the vortex and the nucleus as a function of their separation [33; 34](see also [35]). Results were obtained for two densities and showed that the vortex is repelled by nuclei. At the same time, it was found that the vortex-nucleus interactions induce a deformation of the nucleus and lead to a bending of the vortex line shape. These findings represent an important confirmation of our results and extend them toward a complete characterization of the vortex-nucleus interaction. On the other hand, TDSLDA computations are very costly, while we are able to present systematic calculations of the pinning energy with different functionals and pairing forces and to provide a detailed description of the nuclear deformation. We also report that the properties of a quantum vortex were recently studied at finite temperature in infinite matter using Brussels-Montreal energy functionals [36]. We begin in Section II by explaining the general features of the calculation and giving some details about the computation of the pinning energy. Our results are presented in Section III. Finally, in Section IV we give our closing remarks. ## II Method ### General description In this paper, we expand and improve the work done in [28] (hereafter referred to as Paper I). There, the authors approached the problem of pinning energy by solving the Hartree-Fock-Bogolyubov (HFB) equations in a cylindrical Wigner-Seitz cell of radius \(R_{WS}\) and height \(h_{WS}\) in four different configurations. HFB equations (also called Bogliubov-De Gennes equations) are well suited to study the pairing properties of quantal inhomogeneous systems, like the inner crust of a neutron star, where a lattice of heavy nuclei coexists with a sea of superfluid neutrons. With this technique both the nuclear potential and the pairing correlations are treated simultaneously and self consistently. 
Explicitly, the HFB equations read \[\begin{cases}(h(\mathbf{x})-\lambda)\,u_{i}(\mathbf{x})+\Delta(\mathbf{x})v_{i }(\mathbf{x})=E_{i}u_{i}(\mathbf{x})\\ \Delta^{*}(\mathbf{x})u_{i}(\mathbf{x})-(h(\mathbf{x})-\lambda)\,v_{i}(\mathbf{ x})=E_{i}v_{i}(\mathbf{x})\end{cases} \tag{1}\] where \(E_{i}\) is the quasi-particle energy of level \(i\) and \(u_{i}\) and \(v_{i}\) are the quasi-particle amplitudes relative to that level, \(\lambda\) is the chemical potential, \(\Delta(\mathbf{x})\) is the pairing field and \(h(\mathbf{x})=T+U^{HF}\) is the single particle Hartree-Fock Hamiltonian, sum of the kinetic term \(T\) and the self-consistent potential \(U^{HF}\). From the solutions of (1), one can compute the normal and abnormal densities of the system \[\begin{split} n(\mathbf{x})=\sum_{i}|v_{i}(\mathbf{x})|^{2}\\ \kappa(\mathbf{x})=\sum_{i}u_{i}(\mathbf{x})v_{i}(\mathbf{x})^{ \star}\end{split} \tag{2}\] from which one can find new \(h(\mathbf{x})\) and \(\Delta(\mathbf{x})\) which in turn give rise to a new set of equations (1) (see Appendix A). The HFB equations are therefore solved via an iterative process. As for the interaction chosen in the HF sector, we adopt the Skyrme SLy4 and the SkM* parameterizations (see [37]) and neglect the spin-orbit term, because we expect that the pinning energy is not significantly affected by this term (cf. Paper I and our discussion below). For the pairing field, we start from a neutron pairing potential, adopting a density-dependent, contact interaction of the form \[V_{pair}(\mathbf{x},\mathbf{x}^{\prime})=V_{0}\left(1-\eta\left(\frac{n( \mathbf{x})}{0.08}\right)^{a}\right)\delta(\mathbf{x}-\mathbf{x}^{\prime}) \tag{3}\] where \(V_{0}=-481\) MeV \(\cdot\) fm\({}^{3}\), \(\eta=0.7\) and \(a=0.45\) have been used. This leads in turn to the pairing field \[\Delta(\mathbf{x})=-V_{pair}(\mathbf{x},\mathbf{x}^{\prime})\kappa(\mathbf{x}) \tag{4}\] The adopted parameters, together with a cutoff energy \(E_{cut}=60\) MeV, reproduce the pairing gap of uniform neutron matter as predicted by a realistic nucleon-nucleon interaction [38], and are the same as those used in Paper I. We will also perform calculations with two weaker pairing interactions. We aimed for pairing gaps reduced by a factor \(\beta=2\) and \(\beta=3\); we found \(V_{0}^{\beta=2}=432.9\) MeV \(\cdot\) fm\({}^{3}\) and \(V_{0}^{\beta=3}=408.85\) MeV \(\cdot\) fm\({}^{3}\). These interactions are introduced only to have a rough qualitative assessment of the effects of correlations beyond the mean field, which generally lead to a reduction of the pairing gaps (see [39; 40] for recent reviews). However, such reductions show a dependence on the neutron density which is not taken into account by the constant reduction factors considered here. Nonetheless, we will still label the results by \(\beta=2\) and \(\beta=3\). The pairing interaction has been neglected in the case of protons since \(Z\)=40 is used throughout this work and this value corresponds to a magic number in our calculations. We carry out our calculations in a cylindrical box, so it is natural to use cylindrical coordinates \(\mathbf{x}=(\rho,z,\varphi)\). Eqs. (1) are expanded on a single-particle basis. All the calculation details are presented in Appendix A. The pairing field (4) is defined as (Paper I and [41]) \[\Delta(\rho,z,\varphi)=\Delta(\rho,z)\,e^{i\nu\varphi} \tag{5}\] so that the vortex is created along the \(z\)-axis keeping the cylindrical symmetry. 
The integer parameter \(\nu\) can be interpreted as the number of units of angular momentum carried by each Cooper pair along the \(z-\)axis. The standard solution of the HFB equations corresponds to \(\nu=0\) and to Cooper pairs coupled to zero angular momentum while \(\nu=1\) defines an excited solution in which Cooper pairs of different parity couple to one unit of angular momentum. This solution describes a vortex, as it gives rise to an azimuthal velocity field \(V\) of the form \[V(\rho,z,\varphi)=-\frac{i\hbar}{mn\rho}\sum_{i}v_{i}^{*}(\rho,z,\varphi) \frac{\partial v_{i}(\rho,z,\varphi)}{\partial\varphi}. \tag{6}\] It is noted that nuclear shell effects act quite differently on the \(\nu=1\) gap, as compared to \(\nu=0\). This point is discussed at length in Paper I. In particular, one expects that the spin-orbit interaction, which is neglected in the present work, tends to shift the energy of the single-particle pairs involved in the formation of \(S=0,\nu=1\) Cooper pairs by the same amount (see Fig. 21 in Paper I). We have changed considerably the part of the computation relative to protons with respect to Paper I. In Paper I, the proton density was forced to be spherically symmetric. This was achieved by taking spherical averages of the cylindrical neutron densities to compute the proton potential \(U_{prot}^{HF}\) at each step of the iterative process. The reasoning behind this choice was that protons are deeply bound and one does not expect them to be much affected by the neutron density deviation from sphericity. As we will show, this is an accurate approximation only for the outermost layers of the inner crust. Summarizing, we have extended and improved the calculations of Paper I as follows: * we add the Coulomb exchange term in the proton potential using the Slater approximation. * we adopt cylindrical symmetry also in the case of protons. * we consider, although schematically, the effects associated with the possible reduction of the pairing interaction due to screening effects. * we improve the numerical aspects of the code, namely the derivation and integration techniques. Improving the numerical precision is crucial for computing the pinning energy, as we will show in the next section. ### Binding and pinning energy We solve the HFB equations in the following configurations (see Fig. 1 for a sketch): * **Neutron sea (NS):** the neutron sea, with neither a nucleus (\(Z=0\)) nor a vortex (\(\nu=0\)); * **Nucleus (Nu):** a nucleus (\(Z\neq 0\)) with no vortex (\(\nu=0\)), surrounded by the neutron sea; * **Interstitial pinning (IP):** a vortex (\(\nu=1\)) with no nucleus (\(Z=0\)), surrounded by the neutron sea; * **Nuclear pinning (NP):** a nucleus (\(Z\neq 0\)) and a vortex (\(\nu=1\)) on top of it, surrounded by the neutron sea. By comparing the total energies of each configuration, we computed the _binding_ energy of the vortex onto the nucleus. This quantity is defined as the difference between the energy needed to build a vortex on top of a nucleus and the energy necessary to build a vortex in uniform matter. Equivalently, the binding energy can be defined as the energy needed to move the vortex from its site on top of the nucleus to an infinite distance from it (see Fig. 1). A negative value means that the favorable position for the vortex is on top of the nucleus, whilst a positive value means that the favorable position is far away from it. 
A simple combination of the total energies of each configuration gives the explicit expression of the binding energy

\[E_{b}=E^{NP}+E^{NS}-(E^{IP}+E^{Nu})-\lambda_{n}\left[N^{NP}+N^{NS}-(N^{IP}+N^{Nu})\right] \tag{7}\]

where \(E^{i}\) is the total energy of the specified configuration. We added a correction term proportional to the neutron chemical potential \(\lambda_{n}\) to ensure that we compare calculations with the same number of particles, since the vortex, if present, reduces the number of neutrons \(N^{i}\) found in each cell. Numerical precision is crucial to compute the binding energy. The energy terms in (7) range from some hundreds of MeV up to tens of thousands of MeV as a function of the neutron density in the inner crust. The values of the nucleus-vortex binding energy, on the other hand, range from some hundreds of keV up to tens of MeV. Even small numerical errors can therefore have substantial effects on the final values of the binding energy. The binding energy is a distinct quantity from the _pinning_ energy \(E_{p}\). The latter is influenced by the presence of the surrounding nuclear lattice and therefore we are unable to calculate it directly. Nonetheless, we can find an estimate through the binding energy. Epstein and Baym in [29] realized that there is a kinetic component to the vortex-nucleus interaction, which accounts for the amount of superfluid flow displaced by the nucleus. It reads

\[K_{n}(\rho)=\frac{3}{2}M_{s}\left(\frac{\zeta-1}{\zeta+2}\right)\left(\frac{\hbar}{2m_{0}\rho}\right)^{2} \tag{8}\]

where \(m_{0}\) is the nucleon mass, \(M_{s}\) is the mass of the neutron superfluid of density \(n_{\infty}\) displaced by a sphere of radius \(R_{n}\) (i.e., the nuclear radius) and \(\zeta\) is the ratio of the nucleus density \(n_{n}\) to the neutron superfluid density \(n_{\infty}\). \(K_{n}\) is always positive and is inversely proportional to the square of the distance \(\rho\) between the nucleus center and the vortex axis. The other component of the interaction is of nuclear nature. If we assume that such nuclear interaction is short-ranged, then beyond a certain critical distance \(\rho^{*}\) it becomes negligible, along with its contribution to the pinning energy. We can estimate this distance as the sum of the nuclear radius \(R_{n}\) and the coherence length \(\xi\) of the vortex

\[\rho^{*}\sim R_{n}+\xi \tag{9}\]

where \(\xi=\hbar^{2}k_{F}/\pi m_{0}\Delta\), with \(k_{F}\) the Fermi momentum. From our calculations, \(\xi\) ranges between 3 and 10 fm approximately, depending on the density of the neutron sea. To compute the pinning energy, we must compare \(\rho^{*}\) with \(R_{WS}\). We assume that the nuclear contribution to the vortex-nucleus interaction is negligible for \(\rho\gtrsim\rho^{*}\). If \(\rho^{*}<R_{WS}\), we then suppose that at \(\rho=R_{WS}\) the vortex-nucleus interaction is dominated by the kinetic term (8). Therefore, from the definition of pinning energy, we write

\[E_{p}\simeq E_{b}-K_{n}(R_{WS}) \tag{10}\]

At \(R_{WS}\), the contribution of \(K_{n}(R_{WS})\) is of the order of a few tens of keV, so that it usually represents a small correction to the pinning energy. If, on the contrary, \(\rho^{*}\gtrsim R_{WS}\), there would still be a substantial overlap between the vortex and the nucleus at a distance \(\rho=R_{WS}\). In this case, we are unable to estimate the non-negligible nuclear component of the interaction and therefore we cannot provide an estimate of the pinning energy.
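To make the bookkeeping in Eqs. (7)-(10) concrete, the Python sketch below combines the four configuration energies into the binding energy and then applies the kinetic correction at the cell radius. It only illustrates the formulas above: the configuration energies, particle numbers, nuclear radius, pairing gap, and densities used in the example call are placeholder values, not results taken from our calculations.

```python
import numpy as np

HBARC = 197.327   # MeV fm
M_N = 939.565     # nucleon mass in MeV (c = 1)

def binding_energy(E, N, lam_n):
    """Eq. (7): E and N are dicts of total energies and neutron numbers
    for the four configurations NP, NS, IP and Nu."""
    dE = E["NP"] + E["NS"] - (E["IP"] + E["Nu"])
    dN = N["NP"] + N["NS"] - (N["IP"] + N["Nu"])
    return dE - lam_n * dN

def coherence_length(n_inf, delta):
    """xi = hbar^2 k_F / (pi m Delta), with k_F of the external neutron gas."""
    k_f = (3.0 * np.pi**2 * n_inf) ** (1.0 / 3.0)      # fm^-1
    return HBARC**2 * k_f / (np.pi * M_N * delta)       # fm

def kinetic_term(rho, r_n, n_inf, n_nucleus):
    """Eq. (8): Epstein-Baym kinetic contribution K_n(rho), in MeV."""
    zeta = n_nucleus / n_inf
    m_s = (4.0 / 3.0) * np.pi * r_n**3 * n_inf * M_N    # mass of the displaced superfluid
    return 1.5 * m_s * (zeta - 1.0) / (zeta + 2.0) * (HBARC / (2.0 * M_N * rho))**2

def pinning_energy(e_b, r_ws, r_n, n_inf, n_nucleus, delta):
    """Eq. (10), applicable only when rho* = R_n + xi < R_WS."""
    rho_star = r_n + coherence_length(n_inf, delta)
    if rho_star >= r_ws:
        return None   # nuclear part of the interaction still sizeable at the cell edge
    return e_b - kinetic_term(r_ws, r_n, n_inf, n_nucleus)

# Placeholder numbers, for illustration only:
E = {"NP": -11000.0, "NS": -10990.0, "IP": -10995.0, "Nu": -11002.0}
N = {"NP": 1500.0, "NS": 1505.0, "IP": 1502.0, "Nu": 1503.0}
e_b = binding_energy(E, N, lam_n=7.0)
print(e_b, pinning_energy(e_b, r_ws=31.8, r_n=6.0, n_inf=0.011, n_nucleus=0.08, delta=1.5))
```

With cell parameters of this order, the kinetic correction at \(R_{WS}\) comes out at the 0.1 MeV level or below, i.e. small compared with the MeV-scale binding energies, consistent with the statement above.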
### Computational details Similarly to Paper I, we present the calculated value of the pinning energy as a function of the density of the neutron sea far from the nucleus, \(n_{\infty}\). We investigated eight different density zones, from \(n_{\infty}=0.001\) fm\({}^{-3}\) to \(n_{\infty}=0.038\) fm\({}^{-3}\). At each density, we have carried out six sets of calculations, using two different Skyrme models, namely SLy4 and SkM*, and three different pairing strengths (marked by the pairing-interaction reduction factor \(\beta\)). For each set, we iteratively solved two HFB equations, one for protons and one for neutrons, for each of the four different configurations. The neutron chemical potential was chosen so as to reproduce the external densities predicted in [7] and studied in Paper I. On the other hand, the proton chemical potential was adjusted to give the proton number \(Z=40\)[7]. We took special care in estimating the errors due to the convergence of the calculations and also those due to the size of the box, which is essential for our results to be reliable. Specifically, we adopted the following convergence criterion for the computation of a given configuration: the program halts when the relative total energy difference between the last and second-last iteration is less than \(5\times 10^{-6}\) for three consecutive iteration cycles. In some cases, we observed that this criterion was not stringent enough; we let therefore the computation continue until the relative energy difference reached \(5\times 10^{-8}\) for three consecutive iteration cycles. After the binding energy was obtained, we computed the critical distance \(\rho^{*}\) (9) as well as the kinetic contribution (8) (which within our approximation does not depend on the box radius). If the criterion \(\rho^{*}<R_{WS}\) was met, we were able to compute the corresponding pinning energies via (10); otherwise, we concluded that our method could not produce a result for the particular parameter set. In Appendix C we show the values of \(\rho^{*}\) we obtained. ## III Results ### Vortex effects on pairing gaps and proton deformation In Fig. 2 we compare contour plots of the pairing gaps associated with the NP (left), Nu (center), and IP (right) configurations in the \((\rho,z)\) plane, calculated with the SLy4 interaction for the density \(n_{\infty}=0.008\) fm\({}^{-3}\). One can see that the gap acquires its asymptotic value for \(\rho\gtrsim 10\) fm in the IP configuration, while the presence of the nucleus distorts the gap profile in the NP configuration so that the vortex enlarges and incorporates the nucleus, and the gap reaches its asymptotic value only for \(\rho\gtrsim 15\) fm. Our results are qualitatively consistent with those obtained in [33], where the vortex-nucleus interaction was studied with dynamical simulations (see Fig. 2 in [33], where one can actually observe the vortex bending to avoid the nuclear region). The gap profiles for the NP, Nu, and IP configuration along the equator \(z=0\) are shown in Fig. 3 for the SLy4 interaction and the three values of \(\beta\) we have considered. The density is \(n_{\infty}=0.026\) fm\({}^{-3}\). In all cases, the gap is suppressed for \(\rho\leq 10\) fm and rapidly reaches the asymptotic value corresponding to the given value of \(\beta\). There is a slight dependence on the interaction, which essentially depends on the different values of the effective mass associated with the SLy4 and with the SkM\({}^{*}\) interaction. In Fig. 
4 we present contour plots in the \((\rho,z)\) plane of the differences between the density distributions calculated in the NP and in the Nu configuration with the SLy4 interaction (see also [42]). Upper and lower panels refer to neutrons and to protons respectively. We have set the same color scale for both neutrons and protons and we display results obtained for four different Wigner-Seitz cells corresponding to varying depths in the inner crust. Deformation effects increase as a function of density. The deformation of the nucleus tends to be prolate, that is, aligning the nuclear density with the axis of the vortex. In the neutron case, it is possible to observe a density depletion (circular blue shadow) surrounding the nucleus (\(\rho\lesssim 7\) fm and \(z\lesssim 7\) fm). This is an expected effect of the internal regions of a fermionic vortex (see Paper I and [36] for more details), which takes place at all densities and for the three \(\beta\) factors. The only exceptions are found in the case of the SkM* interaction, where one observes some penetration of the vortex into the nucleus at the two highest neutron sea densities (not shown in the figures). In general, the deformation of the distribution of protons is similar in shape and magnitude to that of neutrons (giving rise to variations in the density up to 5-10% in the case of high-density cells). This can be considered the result of the general tendency of the nucleus to maximize the overlap between the distributions of neutrons and protons. We will assess the effect of the deformation on pinning energies below. It is reasonable to think that this trend should continue as we move to deeper and denser areas of the crust, where the pasta phase will most likely produce negative pinning energy, thus giving rise to a hitherto unexplored hybrid mode of pinning. Hence, the vortex-nucleus interaction may favor the appearance of the pasta phase, thought to be present at higher densities than the ones studied here [43]. Moreover, the appearance of the nuclear pasta is expected to influence the pinning interaction, with consequences for the macroscopic hydrodynamic behavior of the superfluid in the pasta layers [26]. This interesting subject is left for future studies. The effect of deformation on the pinning energy will be discussed in the next section.

Figure 1: Visual representation of (7). The binding energy is shown as the energy cost to move a vortex from its position on top of a nucleus to an infinite distance from it.

Figure 2: Contour plots of the pairing gaps of the NP (left), Nu (center), and IP (right) configurations.

Figure 3: Typical pairing gaps obtained in our calculations for the NP, Nu, and IP configurations, for the SLy4 interaction, and for the three adopted values of \(\beta\), as a function of the distance from the vortex axis in the \(z=0\) plane.

Figure 4: Difference between the densities calculated in the NP and Nu configurations, expressed in fm\({}^{-3}\), as a function of \((\rho,z)\) in a \(\varphi\)-constant plane for several neutron sea densities \(n_{\infty}\). In the top half, we show neutron quantities, while in the bottom half proton quantities.

### Pinning Energies

In Fig. 5 we show our results for the pinning energy as a function of the neutron sea density \(n_{\infty}\) for both the SLy4 (straight line) and SkM* (dotted line) interactions. The corresponding numerical values are reported in Tab. 1 and 2.
The value of the pinning energy depends considerably on the value of the interstitial pairing gap, which could be much lower than the bare gap (especially at high densities) due to screening effects. For this reason, we have carried out calculations with \(\beta\)=2 and 3. We first point out that with \(\beta=3\) and the SLy4 interaction we find \(\rho^{*}>R_{WS}\) at the highest density, so the criteria we explained in 2 are not met. Therefore our method cannot produce a pinning energy value for that point. Generally, the pinning energy has the same qualitative behavior for both interactions, with SkM* systematically predicting higher values. At the lowest densities, the pinning energy is slightly negative and therefore nuclear pinning is favored. On the other hand, the pinning energy grows considerably with \(n_{\infty}\) up to about \(n_{\infty}=0.02\) fm\({}^{-3}\), implying that vortex lines are repelled at intermediate densities. At the highest densities, the pinning energy either becomes roughly stable, as in the case of SkM*, or decreases, as for SLy4, where it even becomes negative again for \(\beta=2\) and 3. At a given density the pinning energy decreases as a function of \(\beta\). This can be understood, considering that the vortex radius (expressed in terms of its coherence length \(\xi\)) grows with \(n_{\infty}\) and with \(\beta\), as a larger value of \(\beta\) corresponds to a lower pairing field \(\Delta\). We have previously seen that the vortex tends to incorporate the nucleus. This costs less energy if the vortex radius is larger, that is, for larger values of \(\beta\), because the deformation needed is clearly less significant. The nuclear pinning configuration, while still being not convenient, becomes less unfavorable and the pinning energy decreases considerably with \(\beta\). We carefully checked the dependence of our results on the radius of the Wigner-Seitz cell. We have found that generally, the computed pinning energies tend to stabilize for \(R_{WS}\) larger than 35 fm. For each set of parameters, we performed three calculations for \(\rho_{WS}\)= 38 fm, 40 fm, 42 fm, and the same height (\(h_{WS}=40\) fm). The resulting pinning energies differ by less than \(\sim\) 10 keV at the lowest density we have considered, that is, \(n_{\infty}=0.001\) fm\({}^{-3}\) and by less than 300 keV at \(n_{\infty}=0.017\) fm\({}^{-3}\). For a given density, we will report the value averaged over the three boxes. We have found that at the two largest computed densities, namely \(n_{\infty}=0.026\) fm\({}^{-3}\) and \(n_{\infty}\) = 0.037 fm\({}^{-3}\), the convergence pattern is more complicated, and we considered also larger values of \(R_{WS}\), up to 48 fm. The HFB self-consistent process for the NP configurations can lead to two solutions having a different pairing and density spatial dependence, according to the box radius, and differing from each other by about 1.5 MeV. For these two densities, the boxes displaying the deepest minima were selected, in keeping with the variational nature of our approach. The resulting uncertainty on the pinning energy is equal to about 500 keV. We conclude this section comparing our results with those reported in Paper I in Fig. 6. The pinning energies computed with the SLy4 and the SkM* interaction are shown in the left and right panel respectively. Only the value \(\beta=1\) was considered in Paper I. The results obtained for the SkM* interaction are similar, aside from a sharp fall of the pinning energy in the second density zone. 
On the other hand, for SLy4 the situation is rather different: the new results are more regular and grow monotonically with \(n_{\infty}\), while the previous ones present a distinct oscillatory behavior. Quantitatively, the difference with the results of Paper I is substantial at the largest densities, where the present pinning energies are larger by 5-10 MeV.

\begin{table}
\begin{tabular}{c c c c} \(n_{\infty}\) [fm\({}^{-3}\)] & \multicolumn{3}{c}{\(E_{p}\) [MeV] (SLy4)} \\ & \(\beta=1\) & \(\beta=2\) & \(\beta=3\) \\ \hline 0.001 & \(-0.72\) & \(-0.48\) & \(-0.27\) \\ 0.002 & \(-0.91\) & \(-0.75\) & \(-0.70\) \\ 0.004 & \(-0.89\) & \(-0.97\) & \(-0.93\) \\ 0.008 & \(2.73\) & \(0.40\) & \(-0.43\) \\ 0.011 & \(3.01\) & \(0.63\) & \(-0.26\) \\ 0.017 & \(10.00\) & \(3.90\) & \(1.06\) \\ 0.026 & \(11.78\) & \(3.77\) & \(-0.94\) \\ 0.037 & \(9.85\) & \(-1.49\) & - \\ \end{tabular}
\end{table}
Table 1: Pinning energy and its uncertainty for eight different values of the neutron sea density. We show our results with the SLy4 interaction for the three different values of \(\beta\). The highest density point with \(\beta=3\) is absent because it does not satisfy our requirement \(\rho^{*}<R_{WS}\) (see section II.2).

\begin{table}
\begin{tabular}{c c c c} \(n_{\infty}\) [fm\({}^{-3}\)] & \multicolumn{3}{c}{\(E_{p}\) [MeV] (SkM*)} \\ & \(\beta=1\) & \(\beta=2\) & \(\beta=3\) \\ \hline 0.001 & \(-0.19\) & \(-0.30\) & \(-0.27\) \\ 0.002 & \(-0.10\) & \(-0.35\) & \(-0.50\) \\ 0.004 & \(1.63\) & \(0.18\) & \(-0.23\) \\ 0.008 & \(7.47\) & \(2.72\) & \(1.19\) \\ 0.011 & \(8.06\) & \(3.41\) & \(1.68\) \\ 0.017 & \(11.12\) & \(5.81\) & \(3.59\) \\ 0.026 & \(19.07\) & \(10.31\) & \(6.47\) \\ 0.037 & \(18.69\) & \(12.07\) & \(6.43\) \\ \end{tabular}
\end{table}
Table 2: Pinning energy and its uncertainty for eight different values of the neutron sea density. We show our results with the SkM* interaction for the three different values of \(\beta\).

Figure 5: Pinning energies as a function of the neutron sea density \(n_{\infty}\), for three values of \(\beta\) and for both the SLy4 (straight line) and SkM* (dotted line) interactions. The highest density point with SLy4 and \(\beta=3\) is absent because it does not satisfy our requirement \(\rho^{*}<R_{WS}\) (see section II.2).

Figure 6: Comparison between our new results (blue dots) on the pinning energy and the results of Paper I [28] (purple triangles). As previously, we show the values as a function of the exterior neutron sea density \(n_{\infty}\) for both the SLy4 (left) and SkM* (right) interactions and for \(\beta=1\).

To study these differences in more detail, in Fig. 7 we consider first the effect of proton deformation and of Coulomb exchange, which were not taken into account in Paper I. Proton deformation decreases the energy of the NP configuration; on the other hand, it does not affect the Nu configuration, in which we consider a spherical, closed-shell nucleus. As a consequence (see Eq. (7)), the pinning energy decreases, and therefore this effect cannot explain why the pinning energies are larger than those calculated in Paper I. In any case, one sees in Fig. 7 (see in particular the inset) that this effect is significant only for the largest densities, where it amounts to about 600-700 keV. Neglecting deformation but including Coulomb exchange, on the other hand, decreases the pinning energy by at most about 100 keV. We then conclude that the differences with Paper I must be related to the improvements in the computational algorithms. This point is further considered in Appendix B.

### Mesoscopic pinning forces

The pinning energy contains information about the microscopic interaction between a vortex and a single nucleus. Nonetheless, inner crust vortices are much longer than the lattice spacing and are expected to interact with many pinning sites [26; 32], giving rise to pinning at the mesoscopic scale (an intermediate scale in between the lattice spacing and the typical distance between two vortices in a pulsar).
Seveso _et al._[32] found a simple prescription to estimate the mesoscopic pinning force per unit length \(f_{L}\) acting on a vortex segment of length \(L\), which is a better representative of the vortex-lattice interaction than the single-nucleus pinning energy; see the discussion in [26]. They found an analytic approximation where the force per unit length \(f_{L}=f_{L}(E_{p},\,R_{WS},L)\) is a function of the pinning energy \(E_{p}\) and the dimension of the WS cell \(R_{WS}\). This function also depends on the parameter \(L\), the typical length over which a vortex filament in the inner crust can be approximated as straight. Finally, the estimate of \(f_{L}(E_{p},\,R_{WS},L)\) also depends on the geometrical properties of the lattice and on whether there is nuclear or interstitial pinning. However, the authors found that this distinction has a low impact on the pinning strength results, a result that is also confirmed by the dynamical simulations of an ensemble of vortices in complex pinning landscapes performed in [44; 26]. By following the procedure in [32], we can calculate new estimates of the typical pinning force for three different values of the parameter \(L\) that defines the scale on which a vortex can be considered straight (\(L=1000,2500,5000\) \(R_{WS}\), see [32]). Our results are shown in Fig. 8. We plot the absolute value of the force per unit length; dots mark the points where the force is repulsive, while hollow circles mark those where it is attractive. The mesoscopic pinning force values are of the same order of magnitude as the results of [32]: the force per unit length ranges from \(\sim 10^{13}\) dyn/cm up to \(\sim 10^{16}\) dyn/cm. While most of the remarks present in [32] are valid for our results too, we briefly underline the following aspect. The force decreases as the vortex length increases. Note that for an infinitely long and rigid vortex, the pinning force should vanish: if such a vortex were to move, the number of nuclei with which it interacts would not change [45; 32]. We can also compare our findings with the results of [33], which are obtained through a different method. In particular, from inset (b) of Fig. 3 of their work, we can see that they found a repulsive force of the order of \(\sim 0.5\) MeV/fm when the vortex-nucleus distance is approximately 20 fm; after conversion to appropriate units, this is broadly consistent with our results.

## IV Conclusions

Microscopic pinning energies are a crucial ingredient in the dynamics of vortex-mediated pulsar glitches. The stronger the pinning of a vortex line, the larger the amount of angular momentum that can be stored in the inner crust in the form of a persistent (dissipationless) neutron current, which can then be potentially released in a glitch [4]. Most of the past estimates of the pinning energies relied on a classical or semiclassical picture and had to use significant approximations to describe nuclei.
Working in the microscopic HFB framework solves these problems, as was done in Paper I [28]. We have expanded and improved the latter work in four respects: we have i) allowed for the axial deformation of protons; ii) included the effect of the Coulomb exchange; iii) considered, although schematically, the effects of the screening of the pairing interaction; and iv) improved the numerical treatment, giving special attention to the convergence of our results. Based on these improvements, we found new and more reliable results for the pinning energy. Our results show that nuclei attract vortices at the lower external neutron sea densities, while the situation is the opposite at higher densities unless the pairing gap is strongly screened. From our estimates of the pinning energy, we then extracted the typical force per unit length acting on a vortex, consistently with the procedure developed in [32]. This force defines a theoretical upper limit on the depinning threshold [26] and, accordingly, an upper limit on the glitch amplitude in general relativity [17]. Therefore, in Sec. III.3 we have checked that our mesoscopic pinning forces are sufficiently large to be consistent with observations of giant glitches in the Vela pulsar.

Figure 7: The pinning energy calculated with the SLy4 interaction for \(\beta=1\) as a function of neutron density, already shown in Fig. 5. Our results (blue line) are compared with the one obtained neglecting both proton deformation and Coulomb exchange (green line) or neglecting only proton deformation (red line). The results obtained at the highest densities are shown in more detail in the inset.

Figure 8: Absolute value of the pinning force per unit length as a function of the neutron sea density \(n_{\infty}\), for both the SLy4 (upper half) and SkM* (lower half) interactions. Where it is attractive, we used a hollow circle, while where it is repulsive we used a dot. The values have been found using the prescription in [32] for three different maximum-straight lengths \(L=1000\) (straight line), \(2500\) (line-dot), and \(5000\) (dotted line) \(R_{WS}\). We plotted the results for the three different values of \(\beta\) used. As for the corresponding pinning energy, the highest density point with SLy4 and \(\beta=3\) is absent because it does not satisfy our requirement \(\rho^{*}<R_{WS}\) (see section II.2).

###### Acknowledgements.
The Authors thank M. Antonelli for useful discussions and the careful reading of the manuscript including many useful suggestions. F. B. acknowledges the I+D+i project with Ref. PID2020-114687GB-I00, funded by MCIN/AEI/10.13039/501100011033.

## Appendix A Numerical details

Within the HF approximation, one can obtain an explicit expression for the self-consistent single-particle Hamiltonian associated with the Skyrme interaction

\[h(\mathbf{x})=-\nabla\frac{\hbar^{2}}{2m_{q}^{*}(\mathbf{x})}\nabla+U_{q}(\mathbf{x})+\delta_{q,p}V_{C} \tag{10}\]

where \(q\) can stand for \(p\) (protons) or \(n\) (neutrons). Remembering that \(n_{q}\) and \(\tau_{q}\) are the density and the kinetic density of either protons or neutrons, and that \(n=n_{p}+n_{n}\) and \(\tau=\tau_{p}+\tau_{n}\), we write the terms in (10) following [37]. The effective mass \(m_{q}^{*}\) is

\[\frac{\hbar^{2}}{2m_{q}^{*}(\mathbf{x})}=\frac{\hbar^{2}}{2m_{q}}+\frac{1}{8}\bigg{[}t_{1}(2+x_{1})+t_{2}(2+x_{2})\bigg{]}n(\mathbf{x})+\frac{1}{8}\bigg{[}t_{2}(1+2x_{2})-t_{1}(1+2x_{1})\bigg{]}n_{q}(\mathbf{x}) \tag{11}\]
the self-consistent potential \(U_{q}\) reads \[\begin{split} U_{q}(\mathbf{x})&=\frac{1}{2}t_{0} \bigg{[}(2+x_{0})n+(1+2x_{0})n_{q}\bigg{]}\\ &+\frac{1}{24}t_{3}\bigg{\{}(2+x_{3})(2+\alpha)n^{\alpha+1}-\\ &(2x_{3}+1)\left[2n^{\alpha}n_{q}+\alpha n^{\alpha-1}(n_{p}^{2}+ n_{n}^{2})\right]\bigg{\}}\\ &+\frac{1}{8}\bigg{[}t_{1}(2+x_{1})+t_{2}(2+x_{2})\bigg{]}\tau+ \\ &\frac{1}{8}\bigg{[}t_{2}(1+2x_{2})-t_{1}(1+2x_{1})\bigg{]}\tau_{q }\\ &+\frac{1}{16}\bigg{[}t_{2}(2+x_{2})-3t_{1}(2+x_{1})\bigg{]}\nabla^ {2}n\\ &+\frac{1}{16}\bigg{[}t_{2}(1+2x_{2})+3t_{1}(1+2x_{1})\bigg{]} \nabla^{2}n_{q}\end{split} \tag{23}\] Lastly, the Coulomb potential, with the Slater approximation for the exchange part, reads \[V_{C}(\mathbf{x})=e^{2}\left(\int\frac{n_{p}(\mathbf{x}^{\prime})\mathrm{d}_{3 }x^{\prime}}{|\mathbf{x}-\mathbf{x}^{\prime}|}-\left(\frac{3}{\pi}\right)^{ \frac{1}{3}}n_{p}(\mathbf{x})^{\frac{1}{3}}\right) \tag{24}\] In the code, we neglect the spin-orbit interaction, taking into account the spin simply with a degeneracy factor \(g=2\). Each term of the potentials contributes to a term of the energy density of the system \(\mathcal{H}_{\mathcal{HF}}(\mathbf{x})\), which in turn is subdivided into different components \[\mathcal{H}_{\mathcal{HF}}=\mathcal{K}+\mathcal{H}_{0}+\mathcal{H}_{3}+ \mathcal{H}_{eff}+\mathcal{H}_{fin}+\mathcal{H}_{C} \tag{25}\] where each term reads \[\begin{split}\mathcal{K}&=\frac{\hbar^{2}}{2m}\tau\\ \mathcal{H}_{0}&=\frac{1}{4}t_{0}\bigg{[}(2+x_{0})n^ {2}-(2x_{0}+1)(n_{p}^{2}+n_{n}^{2})\bigg{]}\\ \mathcal{H}_{3}&=\frac{1}{24}t_{3}n^{\alpha}\bigg{[} (2+x_{3})n^{2}-(2x_{3}+1)(n_{p}^{2}+n_{n}^{2})\bigg{]}\\ \mathcal{H}_{eff}&=\frac{1}{8}\bigg{[}t_{1}(2+x_{1}) +t_{2}(2+x_{2})\bigg{]}\tau n\\ &+\frac{1}{8}\bigg{[}t_{2}(2x_{2}+1)-t_{1}(2x_{1}+1)\bigg{]}(\tau _{p}n_{p}+\tau_{n}n_{n})\\ \mathcal{H}_{fin}&=\frac{1}{32}\bigg{[}3t_{1}(2+x_{1} )-t2(2+x_{2})\bigg{]}\left(\nabla n\right)^{2}\\ &-\frac{1}{32}\bigg{[}3t_{1}(2x_{1}+1)+3t_{2}(2x_{2}+1)\bigg{]} \left[\left(\nabla n_{p}\right)^{2}+\left(\nabla n_{n}\right)^{2}\right]\\ \mathcal{H}_{C}&=e^{2}\left(\frac{n_{p}}{2}\int\frac{n _{p}(\mathbf{x}^{\prime})\mathrm{d}_{3}x^{\prime}}{|\mathbf{x}-\mathbf{x}^{ \prime}|}-\frac{3}{4}\left(\frac{3}{\pi}\right)^{\frac{1}{3}}n_{p}(\mathbf{x} )^{\frac{1}{3}}\right)\end{split} \tag{26}\] We solve (1) in a cylindrical box with height \(h_{box}\) and radius \(\rho_{box}\). We search for a solution expanded on a single-particle basis so that the amplitudes \(u_{qm}(\rho,z,\varphi)\) and \(v_{qm}(\rho,z,\varphi)\) for the quasi-particle level \(q\) with projection of angular momentum along the \(z\)-axis \(m\) are \[\begin{split} u_{qm}(\rho,z,\varphi)&=\sum_{nl}U_{ qm}^{nl}f_{nm}(\rho)g_{l}(z)e^{im\varphi}\\ v_{qm}(\rho,z,\varphi)&=\sum_{nl}V_{qm}^{nl}f_{nm -\nu}(\rho)g_{l}(z)e^{i(m-\nu)\varphi}\end{split} \tag{27}\] On the \(\rho\) axis, functions \(f_{nm}(\rho)\) are the solution of the Schrodinger equation for free particles \[-\frac{\hbar^{2}}{2m_{0}}\left(\frac{1}{\rho}\frac{\partial}{\partial\rho} \left(\rho\frac{\partial}{\partial\rho}\right)+\frac{m^{2}}{\rho^{2}}\right) f_{nm}(\rho)=e_{nm}f_{nm}(\rho) \tag{28}\] where \(m_{0}\) is the bare nucleon mass and the index \(n\) is the number of nodes of function \(f_{nm}(\rho)\) on the \(\rho\) axis. 
On the \(z\) axis, functions \(g_{l}(z)\) are normalized plane waves \[g_{l}(z)=\sqrt{\frac{2}{h_{box}}}\sin\left(k_{l}\left(z+\frac{h_{box}}{2} \right)\right),\;k_{l}=\frac{\pi}{h_{box}},\frac{2\pi}{h_{box}},\ldots \tag{29}\] so that we have \[\begin{split}-\frac{\hbar^{2}}{2m_{0}}\left(\frac{\partial^{2}}{ \partial z^{2}}+\frac{1}{\rho}\frac{\partial^{2}}{\partial\varphi^{2}}+\frac{ 1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial}{\partial\rho} \right)\right)& f_{nm}(\rho)g_{l}(z)e^{im\varphi}=\\ &\left(e_{nm}+\frac{\hbar^{2}k_{l}^{2}}{2m_{0}}\right)& f _{nm}(\rho)g_{l}(z)e^{im\varphi}\end{split} \tag{30}\] As for the boundary condition, each single-particle function vanishes at the edge of the box. To solve (1), we project it onto generic basis states \(|m_{i},n_{i},l_{i}\rangle=|\alpha_{i}\rangle\). Therefore our system of equations becomes, in matrix form \[\left(\begin{array}{cc}\langle\alpha_{2}|h-\lambda|\alpha_{1}\rangle& \langle\alpha_{2}|\Delta|\alpha_{1}\rangle\\ \langle\alpha_{2}|\Delta^{*}|\alpha_{1}\rangle&-\langle\alpha_{2}|h-\lambda| \alpha_{1}\rangle\end{array}\right) \tag{31}\] Since \(h\) depends only on the density, and the density does not depend on the azimuthal angle \(\varphi\), it holds \[\langle m_{2},n_{2},l_{2}|h|m_{1},n_{1},l_{1}\rangle=\delta_{m_{1},m_{2}}\left\langle n _{2},l_{2}|h|n_{1},l_{1}\rangle \tag{32}\] On the other hand, \(\Delta=\Delta(\rho,z)e^{i\nu\varphi}\). It follows \[\langle m_{2},n_{2},l_{2}|\Delta|m_{1},n_{1},l_{1}\rangle=\delta_{m_{1},m_{2}+ \nu}\left\langle n_{2},l_{2}|\Delta(\rho,z)|n_{1},l_{1}\rangle\right. \tag{33}\] We can now rewrite (1) explicitly. From (31) and (5), we find \[\begin{cases}\sum_{n_{2}l_{2}}\left(h_{n_{1}l_{1}n_{2}l_{2}}^{m}-\lambda \right)U_{n_{2}l_{2}}^{qm}+\Delta_{n_{1}l_{1}n_{2}l_{2}}^{m}V_{n_{2}l_{2}}^{ qm}&=E^{qm}U_{n_{1}l_{1}}^{qm}\\ \sum_{n_{2}l_{2}}\Delta_{n_{1}l_{1}n_{2}l_{2}}^{m}U_{n_{2}l_{2}}^{qm}-\left(h_{n_ {1}l_{1}n_{2}l_{2}}^{m}-\lambda\right)V_{n_{2}l_{2}}^{qm}&=E^{qm}V_{n_{1}l_{1}}^{ qm}\end{cases} \tag{34}\] where \[h^{m}_{n_{1}l_{1}n_{2}l_{2}} =2\pi\int_{0}^{h_{box}}2\mathrm{d}z\;\int_{0}^{p_{box}}\rho\, \mathrm{d}\rho\Bigg{\{}f_{n_{2}m}(\rho)g_{l_{2}}(z)\left(U(\rho,z)+\left(\frac{m_ {0}}{m^{*}(\rho,z)}\right)\left(e_{n_{1}m}+\frac{\hbar^{2}k_{l_{1}}^{2}}{2m_{0}} \right)-\lambda\right)f_{n_{1}m}(\rho)g_{l_{1}}(z)\] \[+f_{n_{2}m}(\rho)g_{l_{2}}(z)\left(\frac{\partial}{\partial\rho} \left(\frac{\hbar^{2}}{2m^{*}(\rho,z)}\right)\cdot\frac{\partial f_{n_{1}m}( \rho)}{\partial\rho}\right)g_{l_{1}}(z)+f_{n_{2}m}(\rho)g_{l_{2}}(z)\left(\frac {\partial}{\partial z}\left(\frac{\hbar^{2}}{2m^{*}(\rho,z)}\right)\cdot\frac {\partial g_{l_{1}}(z)}{\partial z}\right)f_{n_{1}m}(\rho)\Bigg{\}} \tag{15}\] and \[\Delta^{m}_{n_{1}l_{1}n_{2}l_{2}}=2\pi\int_{0}^{h_{box}}2\mathrm{d}z\;\int_{ 0}^{p_{box}}\rho\,\mathrm{d}\rho\left(f_{n_{2}m-\nu}(\rho)g_{l_{2}}(z)\Delta( \rho,z)f_{n_{1}m}(\rho)g_{l_{1}}(z)\right) \tag{16}\] Since protons and neutrons feel different self-consistent potentials (13), they give rise to two systems (14). From the solution of such systems, we then compute new densities, which we can use to write a new set of equations (14). This iterative process stops once the relative energy difference between subsequent iterations is lower than an appropriate value. Since protons are confined in the nucleus, the dimension of their box is smaller, fixed at 15 fm: so that it's big enough to contain all the protons but small enough to shorten the calculation times. 
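As a small illustration of the cylindrical-box basis just described, the Python sketch below constructs the plane waves \(g_{l}(z)\) of Eq. (29) and radial functions that vanish at the edge of the box. The identification of the free radial solutions with Bessel functions \(J_{m}\), and the node-counting convention used here, are our own reading of the construction rather than details stated explicitly in the text.

```python
import numpy as np
from scipy.special import jv, jn_zeros

HBARC = 197.327   # MeV fm
M0 = 939.565      # bare nucleon mass, MeV

def g_l(z, l, h_box):
    """Normalized plane waves on the z axis, vanishing at z = +/- h_box/2 (Eq. (29))."""
    k_l = l * np.pi / h_box                     # l = 1, 2, ...
    return np.sqrt(2.0 / h_box) * np.sin(k_l * (z + 0.5 * h_box))

def k_nm(n, m, rho_box):
    """Radial wave number fixed by requiring a node of J_m at rho = rho_box."""
    return jn_zeros(m, n + 1)[n] / rho_box      # (n+1)-th zero of J_m placed at the box edge

def f_nm(rho, n, m, rho_box):
    """Radial basis function J_m(k_nm rho), normalized on the disc of radius rho_box."""
    k = k_nm(n, m, rho_box)
    x = np.linspace(0.0, rho_box, 2000)
    norm = np.sqrt(np.trapz(2.0 * np.pi * x * jv(m, k * x) ** 2, x))
    return jv(m, k * rho) / norm

def e_nm(n, m, rho_box):
    """Free-particle energy of f_nm in MeV, e_nm = (hbar k_nm)^2 / 2 m0 (cf. Eq. (28))."""
    return (HBARC * k_nm(n, m, rho_box)) ** 2 / (2.0 * M0)

# Lowest radial state with m = 0 in a box of radius 40 fm:
print(e_nm(0, 0, rho_box=40.0))   # about 0.07 MeV
```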
Finally, we do not consider proton pairing.

## Appendix B Numerical Test

We test the accuracy of our axially deformed HFB code by applying it to the spherical nucleus \({}^{40}\)Ca and comparing the results with those obtained with the spherical code hfbcs-qrpa [46]. For this test, we use the SLy4 interaction without the spin-orbit terms. In Table 3 we show the total energy, divided among its contributions, as listed in (16); the only exception being \(E_{12}\), which is defined as \(E_{12}=E_{fin}+E_{eff}\). The relative differences between the hfbcs-qrpa results and our program amount to 0.1-0.3%. In Table 4 we list the single-particle energy levels of neutrons and protons. We see that the present code reproduces the degeneracy of the levels with the same values of the angular momentum \(l\) within a few keV, while deviations of the order of 100 keV are found in the original code.

## Appendix C \(\rho^{*}\) criterion

We show here the values of the critical distance \(\rho^{*}=R_{N}+\xi\) (see eq. (9)) for the two adopted Skyrme parametrizations and for three values of the gap-reduction factor \(\beta\). We observe that the value of \(\rho^{*}\) is mostly determined by the pairing gap. As a consequence, \(\rho^{*}\) has a minimum at intermediate densities, where the pairing gap reaches its maximum value.

## References
* (1) J. M.
\begin{table}
\begin{tabular}{c c c c c c} & & \multicolumn{2}{c}{Neutrons} & \multicolumn{2}{c}{Protons} \\ & & Ref. [46] & Present work & Ref. [46] & Present work \\ \hline 2s & \(l_{z}=0\) & \(-16.95\) & \(-16.889\) & \(-9.48\) & \(-9.459\) \\ & \(l_{z}=2\) & \(-18.85\) & \(-18.785\) & \(-11.40\) & \(-11.361\) \\ & \(l_{z}=1\) & \(-18.85\) & \(-18.786\) & \(-11.40\) & \(-11.362\) \\ 1d & \(l_{z}=0\) & \(-18.85\) & \(-18.789\) & \(-11.40\) & \(-11.371\) \\ & \(l_{z}=-1\) & \(-18.85\) & \(-18.786\) & \(-11.40\) & \(-11.362\) \\ & \(l_{z}=-2\) & \(-18.85\) & \(-18.785\) & \(-11.40\) & \(-11.361\) \\ & \(l_{z}=1\) & \(-33.21\) & \(-33.184\) & \(-25.29\) & \(-25.282\) \\ 1p & \(l_{z}=0\) & \(-33.21\) & \(-33.182\) & \(-25.29\) & \(-25.277\) \\ & \(l_{z}=-1\) & \(-33.21\) & \(-33.184\) & \(-25.29\) & \(-25.282\) \\ 1s & \(l_{z}=0\) & \(-47.82\) & \(-47.799\) & \(-39.36\) & \(-39.356\) \\ \end{tabular}
\end{table}
Table 4: Energies of each single-particle level, both for protons and neutrons, expressed in MeV.

\begin{table}
\begin{tabular}{c c c c c} \(n_{\infty}\) [fm\({}^{-3}\)] & \(R_{WS}\) [fm] & \multicolumn{3}{c}{\(\rho^{*}\) [fm] (SLy4)} \\ & & \(\beta=1\) & \(\beta=2\) & \(\beta=3\) \\ \hline 0.001 & 43.7 & 11.8 & 16.3 & 21.0 \\ 0.002 & 41.5 & 11.9 & 16.0 & 20.0 \\ 0.004 & 38.8 & 11.4 & 14.2 & 16.7 \\ 0.008 & 33.7 & 11.1 & 13.1 & 14.9 \\ 0.011 & 31.8 & 11.2 & 13.1 & 14.7 \\ 0.017 & 28.9 & 11.6 & 13.6 & 15.3 \\ 0.026 & 25.6 & 12.5 & 15.0 & 17.2 \\ 0.037 & 21.4 & 14.5 & 18.5 & 21.7 \\ \end{tabular}
\end{table}
Table 5: Critical distance \(\rho^{*}\) from our calculations with the SLy4 Skyrme parametrization. At the highest density and for \(\beta=3\), the value of \(\rho^{*}\) is comparable to the dimension of the WS cell; therefore our method cannot estimate the pinning energy for this case.

\begin{table}
\begin{tabular}{c c c c c} \(n_{\infty}\) [fm\({}^{-3}\)] & \(R_{WS}\) [fm] & \multicolumn{3}{c}{\(\rho^{*}\) [fm] (SkM*)} \\ & & \(\beta=1\) & \(\beta=2\) & \(\beta=3\) \\ \hline 0.001 & 43.7 & 11.3 & 15.9 & 20.6 \\ 0.002 & 41.5 & 11.7 & 16.3 & 21.0 \\ 0.004 & 38.8 & 11.5 & 13.9 & 19.7 \\ 0.008 & 33.7 & 10.7 & 12.7 & 14.2 \\ 0.011 & 31.8 & 10.7 & 12.4 & 13.9 \\ 0.017 & 28.9 & 10.7 & 12.4 & 13.8 \\ 0.025 & 25.6 & 11.2 & 12.9 & 14.3 \\ 0.038 & 21.4 & 12.3 & 14.0 & 14.0 \\ \end{tabular}
\end{table}
Table 6: Critical distance \(\rho^{*}\) from our calculations with the SkM* Skyrme parametrization.
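The \(\rho^{*}<R_{WS}\) criterion of Sec. II.2 can also be checked directly against the tabulated values. The short snippet below does this for the SLy4 numbers of Table 5 and flags the single combination (the highest density with \(\beta=3\)) for which no pinning energy can be quoted; the numbers are copied from the table above.

```python
# SLy4 values from Table 5: n_inf [fm^-3] -> (R_WS, (rho* for beta = 1, 2, 3)), lengths in fm
table5 = {
    0.001: (43.7, (11.8, 16.3, 21.0)),
    0.002: (41.5, (11.9, 16.0, 20.0)),
    0.004: (38.8, (11.4, 14.2, 16.7)),
    0.008: (33.7, (11.1, 13.1, 14.9)),
    0.011: (31.8, (11.2, 13.1, 14.7)),
    0.017: (28.9, (11.6, 13.6, 15.3)),
    0.026: (25.6, (12.5, 15.0, 17.2)),
    0.037: (21.4, (14.5, 18.5, 21.7)),
}

for n_inf, (r_ws, rho_stars) in table5.items():
    for beta, rho_star in zip((1, 2, 3), rho_stars):
        if rho_star >= r_ws:   # criterion rho* < R_WS violated -> no pinning-energy estimate
            print(f"n_inf = {n_inf} fm^-3, beta = {beta}: rho* = {rho_star} fm >= R_WS = {r_ws} fm")
```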
2301.01279
The relationship between content marketing and the traditional marketing communication tools
Digitalization is making a significant impact on marketing. New marketing approaches and tools are emerging which are not always clearly categorised. This article seeks to investigate the relationship between one of the novel marketing tools, content marketing, and the five elements of the traditional marketing communication mix. Based on an extensive literature review, this paper analyses the main differences and similarities between them. This article aims to generate a debate on the status of content marketing. According to the authors' opinion, content marketing can be considered as the sixth marketing communication mix element. However, further research is needed to fill in the existing knowledge gap.
Szabolcs Nagy, Gergo Hajdu
2022-12-26T09:38:13Z
http://arxiv.org/abs/2301.01279v1
###### Abstract
The relationship between content marketing and the traditional marketing communication tools is studied.

This article compares content marketing with the five traditional marketing communication tools to generate discussion if content marketing is the sixth element of the revised marketing communication mix.

#### Literature review

#### Content marketing definition, functions, and spending

Content marketing is the creation and distribution of relevant, timely, and valid content (Wang et al., 2017). Its primary purpose is to create customer trust and value (Repoviene, 2017). Content marketing may have entertaining or educational functions (Duc Le 2016; Lindstrom and Jómeus, 2016). Content marketing can be effectively used both in B2C and B2B markets (Iankova et al. 2019). According to Kotler et al.
(2017), the content can serve brand-building or sales promotion purposes. According to Moutsos (2019), 55% of the companies were capable of generating sales and income, and 53% of them were capable of increasing their existing customers' loyalty through content marketing in 2018. So, content marketing can be used to generate income and sales, and also, to increase customers' loyalty. #### Content types and formats Content marketing may appear in various formats based on the type of content. It could be audio and/or visual content (videos, live streaming, webinars); written digital content (articles, blogs, ebooks), images (infographics, photos, GIFs, charts), in-person content (events, presentations, workshops); audio-only digital content (podcasts, audiobooks), and written print content (magazines, books, brochures). Figure 1. shows the different types of content and how B2B marketers changed their use of content types/formats. Figure 2. shows the very same trends in B2C markets. As Figure 1 illustrates, in B2B markets, the use of audio/visual content; written digital content and images became more popular, while the use of written print content significantly decreased compared to the other types. The same trends can be seen in the B2C markets (Figure 2). The only slight difference between the two markets is in the use of audio-only digital content, which significantly dropped in the B2C market. Figure 1: The change of use of content types/format in B2B markets In practice, various types of content can be used to reach out to consumers. As far as the type of content concerned, e-mail campaign is the most popular one, used by 87% of the companies (Murton Beets 2018). However, the following content types are also frequently used (values in brackets show the percentage of companies using the given content type): educative content (77%), actions calling for the next step (62%), events involving personal interactions (61%), telling stories (45%), offers (27%) and community building involving the public (23%). Trends and forecasts are less popular, only 5% of the companies used them (Murton Beets 2018). ### The goals of content marketing Content marketing helps to achieve several goals. The goal of content marketing is to gain customers (Barker, 2017) and to build customer relationships (Pazeratie and Repoviene, 2018). Content marketing can very effectively be used to create brand awareness, educate audiences, generate demand/leads, and build credibility/trust (Figure 3.). Also, content marketing is an effective tool for nurturing subscribers/audience/leads; driving attendance to one or more in-person events, building loyalty with existing clients, and supporting the launch of a new product. It can even be used to achieve sales/revenue generation and build a subscribed audience. Figure 3. presents the possible goals companies managed to successfully achieve by using content marketing. Figure 2: The change of use of content types/format in B2C markets ## Research methodology This paper seeks to generate a debate on the current state of content marketing, and it aims to create a base for future quantitative research. It synthesizes the relevant literature to analyze the relationship between content marketing and the traditional marketing communication tools. It makes an attempt to distinguish content marketing from the other elements in marketing communication mix, which are advertising, sales promotion (SP), public relations (PR), personal selling, and direct marketing (DM). 
In the following section, based on an extensive literature review, the five traditional marketing communication tools are compared to content marketing to reveal the similarities and differences between them regarding the type, purpose, standardization, time span and reach of communication and the target groups.

## Research findings and discussion

### The relationship between advertising and content marketing

Advertising is the most prominent element of the traditional communication mix. According to Horvath and Bauer (2013), advertising is an impersonal form of communication that reaches out to the recipients through mass media. Advertising mainly focuses on the product, specific product features, added services, price, packaging unit, trademark, logo, value and ideas worth considering from a social point of view (CSR). Kotler and Keller (2012) are committed to a narrower interpretation of advertising, stating that advertising is only related to products, brands and/or services. In advertising, recipients (target group members) are usually aware of the fact that the main intention of marketers with the ads is to persuade and influence their behaviour. Since companies use advertising channels to relay commercials, their target group members can be reached indirectly. In this respect, content marketing is quite different. According to Kotler et al (2017), content marketing communicates with the marketer's own public.

Figure 3: Content marketing goals

Content marketing also has an appropriately distinguished and defined target audience that receives more personalized content (Hajdu, 2018). Kotler et al (2017) express that the concept of traditional media is "one to many", while content marketing, especially social media, almost always means two-way interactions. Furthermore, advertising helps to sell the product, while content marketing helps the customers to solve their problems and achieve their individual goals. According to Kotler et al (2017), consumers are ready to share the content, while the traditional ads, which are limited in time and space, are rather "skimmed over" by the target audience. It is safe to say that advertisements disturb a lot of people since they interrupt their favorite series, delay videos they want to watch instantly, or fill their mailboxes with emails. Therefore, we can conclude that advertising has an intervening feature. Content marketing aims to maintain a lasting relationship with the target population (Pazeraite and Repoviene, 2018), while advertising is often seasonal and campaign-based (Kotler-Keller, 2012). Table 1 illustrates the main differences between advertising and content marketing. So, as Scott (2013) concluded, marketers can buy attention (advertising) or can own attention by creating something interesting and valuable that is published online for free (content marketing).

### The relationship between direct marketing and content marketing

Direct marketing (DM) is an addressed and interactive form of communication. It aims to achieve measurable responses, which can be orders, purchases, inquiries, or donations. Direct marketing is essentially built on databases. "It allows the potential customers to obtain information, it helps to establish the popularity of a brand or induces immediate purchases" (Horvath and Bauer, 2013, p. 242). The fact that direct marketing is built on databases implies that the customer value can be targeted quite accurately. Also, this marketing communication tool is easily optimizable.
Telemarketing, mail advertisement, direct mail and direct response advertising are the forms of direct marketing (Horvath and Bauer, 2013). Building brand awareness and credibility are definitely a common point in direct marketing and content marketing. However, direct marketing is less digital than content marketing. In general, the internet as a medium is less dominant in direct marketing, except for e-mail marketing. The purpose of communication in direct marketing is to present the product to make bids. Therefore, \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & **traditional advertising** & **content marketing** \\ \hline **type of communication** & one-way: “I speak only” & two-way: “let’s talk” \\ \hline **purpose of communication** & promotion of products, brands and services & solving the customer’s problem at no cost \\ \hline **perception of communication from the customer’s viewpoint** & intervening, disturbing & giving a helping hand \\ \hline **reach** & a wide range of the population & individuals or groups \\ \hline **standardization level** & standardized and impersonal & specified and more personalized \\ \hline **target groups** & not own & own \\ \hline **time span of communication** & short and campaign-based & a lasting relationship \\ \hline **limitation** & limited & free \\ \hline **target group reaction** & rejection, skimming over & sharing \\ \hline \end{tabular} \end{table} Table 1: The comparison of the traditional advertising and content marketing direct marketing is usually related to selling (receiving orders); the eye-catching presentation of products (catalogs) and advertising (mail advertisement). According to Tapp (1999, pp. 23) "direct marketing is rather a sales system than a communication tool". Although nowadays direct marketing has widely been accepted as a marketing communication tool, its sales function cannot be ignored. This point of view is also appeared in Kotler and Keller (2012). According to Horvath and Bauer (2013), direct marketing provides the recipient with a clear opportunity to respond and directly targets the previously defined target groups. Although it also has a pre-defined target group (Hajdu 2018), content marketing places less emphasis on the sales-related responses. In content marketing, the responses affect the content itself. In content marketing, building trust, solving the customer's problem and providing further contents contribute to initiating purchases (Barker 2017). There is another significant difference between direct marketing and content marketing. Direct marketing advertises a product or a service in a targeted manner to increase sales volume through immediate selling. That is why direct marketing is also called "direct order marketing", or "direct advertising". Consequently, direct marketing focuses only on the product, which offers the value for the customer (Kotler and Keller, 2012). Content marketing creates value and provides consumers with it. However, content marketing does not aim to sell immediately, only in one step (Fivetechnology, 2019), it has got longer time-orientation. Combining direct marketing with content marketing can be very effective. If a customer registers an account online, he or she can receive free content (e.g. an ebook), which is content marketing, however, the data provided during the registration are also used to build a database, which can be used for direct marketing purposes. 
Content marketing that builds an audience not only identifies demand but also generates it.

### The relationship between personal selling and content marketing

Few researchers have addressed the question of how personal selling and content marketing can be connected. Personal selling is a face-to-face selling technique where the emphasis is on personal interaction. In an event, which can be related to personal selling or could be a content marketing format, the company (brand) and its potential and existing customers can meet in person and/or online. However, it is important to note that the event is only one of several content marketing types, which are mostly digital. Nowadays, the theory of selling as the most important task of the sales staff has already become outdated, since the sales department is usually responsible for many other tasks, such as searching for potential customers, providing information, choosing the target market, providing services, collecting information and distribution (Kotler-Keller 2012, p. 637). Information that the sales staff provide about the products and services can, in principle, be related to content marketing. Furthermore, services can also link personal selling and content marketing when the sales personnel try to solve the customer's problem. Personal selling and content marketing can sometimes be combined, but they can hardly fall into one category due to the fundamental differences in their characteristics.

### The relationship between public relations and content marketing

Content marketing should not be confused with public relations (Percy, 2018). In many cases, content marketing is a communication form used on a regular (daily or weekly) basis (Insights 2018). Content marketing aims to be part of the consumer's life and seeks to provide value to the customers in an educating and entertaining manner (Lindstrom and Jorneus, 2016). Public relations (PR) is a strategic tool aiming to turn brand messages into stories that are appealing to the media and its target audiences (Konczosne Szombathelyi 2018). Thus, PR builds credibility and trust among the stakeholders (Horvath-Bauer 2013). Since public relations is not sales-oriented, it is the changes in the mindset of the target audience that should be measured, not its effects on sales (Jozsa et al, 2005). PR seeks to build a good reputation for the company, promote the success of the brand, and deal with counselling and consulting. All these goals are very similar to those of content marketing, which among other things aims to build credibility and trust. However, content marketing is not a replacement for public relations (Mathewson and Moran, 2016). Jozsa et al (2005) emphasize that whatever the goal of PR is, the focus should be on creating trust by emphasizing understanding and willingness to cooperate to gain support from the stakeholders of the company. Trust is also a key factor in building strong brands. Both PR and content marketing can be regarded as regular and systematic communication activities (Jozsa et al 2005; Moutsos 2017), and both use rather similar tools such as articles, newsletters, blogs, publications, social media, statistics, e-books, events, etc. (Probusiness, 2018). However, there are some differences between PR and content marketing. Although trust is essential in PR, counselling is only a PR tool or technique. Counselling in PR refers to how we communicate with our clients. It is a recommended course of action that will serve the client's goals.
On the contrary, in content marketing, the valuable content is always provided in the form of education, relevant information or entertainment (Lindstrom and Jorneus, 2016). The problem of measuring the effect of PR on sales is also a major difference. The impact of content marketing on sales is a lot easier to measure (Hajdu, 2018); moreover, one of the explicit goals of content marketing is to convert the target public into customers (Barker 2017). The effectiveness of content marketing can easily be measured due to its digital nature. Content marketing is customer-centred, focuses only on selected stakeholders and seeks to solve the customer's problem by providing information or educational content in an entertaining way (Lindstrom and Jorneus 2016; Duc Le 2016). In content marketing, the goal is not to provide all the information but only the relevant content (Wang et al 2017). According to Hajdu (2018), content marketing is a profit-oriented tactical activity to gain customers and make deals. This means that content marketing acquires customers within a reasonable time period. Content marketing not only produces content, but it also distributes it through its own channels, whereas PR works quite differently in this respect. It is advisable to combine content marketing with PR since they complement each other. PR can help marketers to make a better story about the brand (Spencer 2014).

### The relationship between sales promotion and content marketing

There is a scarcity of literature devoted to analysing the relationship between sales promotion and content marketing. Horvath and Bauer (2013) refer to sales promotion as a direct influence on consumer behaviour and an impetus to action. With reference to Bauer and Beracs (2006), they emphasize that the primary goal of sales promotion is to promote product sales. "Sales promotion is a set of short-term incentive tools which aim to make consumers purchase more products more frequently or buy specific products or services" (Kotler-Keller, 2012, p. 596). Regarding the consumer's benefit, sales promotion tools can be divided into two categories: utilitarian and hedonistic tools can be distinguished. The utilitarian tools provide financial benefits (e.g. price discounts), whereas the hedonistic tools focus on entertainment, customer experience and loyalty (Yeshin, 2006). Product samples, gifts, contests and events (trade shows and exhibitions), the tools of sales promotion used to create the customer experience (hedonism), are very much related to content marketing (Jozsa, 2014). Product samples make it easier for the customers to try the products. This is an important link between content marketing and sales promotion, because content marketing also provides customers with free and useful content when offering a solution to the customer's problem. Thereby, the company can demonstrate its competence and excellence by offering the best solutions to the customer's problem. Gifts are also commonly used in content marketing in the form of free content. On the contrary, gifts in sales promotion are not free; they are only given to the customers after the purchase (Horvath and Bauer, 2013). Events can belong to content marketing and sales promotion, and sometimes even to PR, depending on their goals and their implementation (Jozsa et al 2005; Danko, 2008; Kranz-Pulizzi, 2011). The content of the event is the decisive factor.
In an event, if marketers present information about how the customer could solve his or her problem, it is highly likely to be content marketing. Contests, games or phone applications can also be content marketing tools (Kranz-Pulizzi, 2011). However, contests and games as sales promotion tools are commonly used to increase sales. In this latter case, purchase is often a pre-requisite of entering the contest. In sales promotion, the customer experience is directly linked to the purchase, while in content marketing this is not the case. According to Yeshin (2006), in sales promotion, customer loyalty is gained through financial benefits and consumption (e.g. through loyalty points, gifts, etc.). On the contrary, content marketing seeks to achieve the same goal by providing free content that is useful and/or entertaining. The primary goal of sales promotion is to increase product sales, which can be a distinguishing factor between content marketing and sales promotion. Although content marketing is also sales-oriented in the long run, here the deal is achieved in several steps (Fivetechnology, 2019). In this process, the very first step is building trust by giving value without asking for compensation or purchase (Repoviene, 2017; Maczuga et al, 2015). We can conclude that the main differences between content marketing and sales promotion can be found in their objectives and time-orientation. Content marketing, which is not a short-term tool, is often regarded as an introductory stage of sales, as it does not aim to make purchases quickly.

## Conclusions

This paper investigates the relationship between content marketing and the five traditional marketing communication tools. The goal of the article is to generate a discussion on the status of content marketing. In this paper, content marketing was compared to advertising, direct marketing, personal selling, public relations and sales promotion to find out the main differences and similarities. An extensive literature review explored some fundamental differences between the traditional marketing communication tools and content marketing. Based on this result, _content marketing can be regarded as a novel marketing communication tool and the sixth element of the revised marketing communication mix_. Content marketing can be effectively used in marketing campaigns in the digital environment. Because of its digital nature, content marketing can be more effective in digitally advanced target markets. One of the positive effects of COVID-19 is the accelerated digitalization, which is favorable to the use of content marketing.

## Acknowledgements

"The described article/presentation/study was carried out as part of the EFOP-3.6.1-16-2016-00011 "Younger and Renewing University - Innovative Knowledge City - institutional development of the University of Miskolc aiming at intelligent specialisation" project implemented in the framework of the Szechenyi 2020 program. The realization of this project is supported by the European Union, co-financed by the European Social Fund."
2309.04523
The automation of SMEFT-Assisted Constraints on UV-Complete Models
The ongoing Effective Field Theory (EFT) program at the LHC and elsewhere is motivated by streamlining the connection between experimental data and UV-complete scenarios of heavy new physics beyond the Standard Model (BSM). This connection is provided by matching relations mapping the Wilson coefficients of the EFT to the couplings and masses of UV-complete models. Building upon recent work on the automation of tree-level and one-loop matching in the SMEFT, we present a novel strategy automating the constraint-setting procedure on the parameter space of general heavy UV-models matched to dimension-six SMEFT operators. A new Mathematica package, match2fit, interfaces Matchmakereft, which derives the matching relations for a given UV model, and SMEFiT, which provides bounds on the Wilson coefficients by comparing with data. By means of this pipeline and using both tree-level and one-loop matching, we derive bounds on a wide range of single- and multi-particle extensions of the SM from a global dataset composed by LHC and LEP measurements. Whenever possible, we benchmark our results with existing studies. Our framework realises one of the main objectives of the EFT program in particle physics: deploying the SMEFT to bypass the need of directly comparing the predictions of heavy UV models with experimental data.
Jaco ter Hoeve, Giacomo Magni, Juan Rojo, Alejo N. Rossia, Eleni Vryonidou
2023-09-08T18:00:01Z
http://arxiv.org/abs/2309.04523v2
# The automation of SMEFT-Assisted Constraints on UV-Complete Models

###### Abstract

The ongoing Effective Field Theory (EFT) program at the LHC and elsewhere is motivated by streamlining the connection between experimental data and UV-complete scenarios of heavy new physics beyond the Standard Model (BSM). This connection is provided by matching relations mapping the Wilson coefficients of the EFT to the couplings and masses of UV-complete models. Building upon recent work on the automation of tree-level and one-loop matching in the SMEFT, we present a novel strategy automating the constraint-setting procedure on the parameter space of general heavy UV-models matched to dimension-six SMEFT operators. A new Mathematica package, match2fit, interfaces MatchMakerEFT, which derives the matching relations for a given UV model, and SMEFiT, which provides bounds on the Wilson coefficients by comparing with data. By means of this pipeline and using both tree-level and one-loop matching, we derive bounds on a wide range of single- and multi-particle extensions of the SM from a global dataset composed by LHC and LEP measurements. Whenever possible, we benchmark our results with existing studies. Our framework realises one of the main objectives of the EFT program in particle physics: deploying the SMEFT to bypass the need of directly comparing the predictions of heavy UV models with experimental data.

SMEFT, Beyond the Standard Model, LHC Phenomenology, EFT Matching

Preprint: Nikhef 2023-011

Jaco ter Hoeve, Giacomo Magni, Juan Rojo, Alejo N. Rossia, Eleni Vryonidou

###### Contents

* 1 Introduction
* 2 Tree-level and one-loop matching in the SMEFT
* 2.1 Tree-level matching
* 2.2 One-loop matching
* 2.3 UV invariants and SMEFT matching
* 3 Implementation in SMEFiT
* 4 Results
* 4.1 One-particle models matched at tree level
* 4.2 Multi-particle models matched at tree level
* 4.3 Single-particle models matched at one loop
* 5 Summary and outlook
* A Baseline SMEFiT global analysis
* B The match2fit package
* C Origin of the logarithms in one-loop matching formulas
* D Additional details on UV models

## 1 Introduction

One of the main motivations for the ongoing Standard Model Effective Field Theory (SMEFT) program in particle physics, see [1] for a recent review, is to streamline the connection between experimental data and UV-complete scenarios of new physics beyond the Standard Model (BSM) that contain new particles which are too heavy to be directly produced at available facilities. In this paradigm, rather than comparing the predictions of specific UV-complete models directly with data to derive information on their parameters (masses and couplings), UV-models are first matched to the SMEFT and subsequently the resulting Wilson coefficients are constrained by means of a global EFT analysis including a broad range of observables. The main advantage of this approach is to bypass the need to recompute predictions for physical observables with different UV-complete models. The global SMEFT analysis essentially encapsulates, for a well-defined set of assumptions, the information provided by available experimental observables, while the matching relations determine how this information relates to the masses and couplings of the UV-complete model.
This feature becomes specially relevant whenever new BSM models are introduced: one can then quantify to which extent their parameter space is constrained by current data from a pre-existing global SMEFT analysis, rather than having first to provide predictions for a large number of observables and then compare those with data. In recent years, several groups [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15] have systematically studied the matching between UV-complete models and the Wilson coefficients of the SMEFT, with various degrees of automation and in many cases accompanied by the release of the corresponding open-source codes. In order to realise the full potential of such EFT matching studies, it is however necessary to interface these results with global SMEFT analyses parameterizing the constraints provided by the experimental data. Such an interface must be constructed in a manner that benefits from the automation of EFT matching tools and that does not impose restrictions in the type of UV-models to be matched. In particular, it must admit non-linear matching relations with additional constraints such as parameter positivity. Several groups have reinterpreted global SMEFT fits in terms of matched UV models [16; 17; 18], but their focus is limited to pre-determined, relatively simple models with few parameters. No framework has been released to date that enables performing such fits with generic, user-specified, multi-particle UV-complete models. Here we bridge this gap in the SMEFT literature by developing a framework automating the limit-setting procedure on the parameter space of generic UV-models which can be matched to dimension-six SMEFT operators. This is achieved by extending SMEFiT [19; 20; 21; 22; 23] with the capabilities of working directly on the parameter space of UV-models, given arbitrary matching relations between UV couplings and EFT coefficients as an input. To this end, we have designed an interface to the MatchMakereFT code [12] such that for any of the available UV-models it outputs a run card with the relevant Wilson coefficients entering the SMEFiT analysis. This interface, consisting of the Mathematica package match2fit (available on Github), also provides a list of the UV variables (denoted as UV-invariants) that can be inferred from the data and corresponding to specific combinations of UV couplings and masses. The adopted procedure removes any limitations on the type of matching relations involved. We exploit this new pipeline to derive bounds on a broad range of UV-complete scenarios both at linear and quadratic order in the SMEFT expansion from a global dataset composed by LHC and LEP measurements, using either tree-level or one-loop matching relations. We consider both relatively simple single-particle extensions of the SM as well as more complex multi-particle extensions, in particular with a benchmark model composed by two heavy vector-like fermions and one heavy vector boson. We study the stability of the fits results with respect to the order in the EFT expansion and the perturbative QCD accuracy for the EFT cross-sections. Whenever possible, we compare our results with existing SMEFT matching studies in the literature. The framework presented in this work is made publicly available both in terms of the latest SMEFit release and with the independent match2fit interface, providing a valuable resource for the EFT community streamlining the connection between UV-models and EFT studies. 
Our work brings one step closer one of the primary goals of the SMEFT program: constraining the parameters of general BSM Lagrangians using EFTs as a bridge between UV models and experimental data. The structure of this paper is as follows. First, Sect. 2 discusses the general strategy adopted to automate the matching between UV-models and the corresponding SMEFT fits. Sect. 3 describes how the SMEFiT framework is extended to enable constraint-setting directly in the parameter space of UV-complete models. The main results of this work are presented in Sect. 4, where we derive bounds on single- and multi-particle BSM models from a global dataset and compare our findings with existing results. We conclude and summarise possible future developments in Sect. 5. Technical details of our work are provided in the appendices. App. A summarises the baseline SMEFT analysis used, in particular reporting on a recent update concerning the description of electroweak precision observables (EWPOs) from electron-positron colliders. App. B presents a concise description of the Match2Fit package. App. C discusses the origin of the logarithms arising in the one-loop matching formulae. Additional details on the single-particle model fits from Sect. 4 are provided in App. D. ## 2 Tree-level and one-loop matching in the SMEFT The low-energy phenomenology of a quantum field theory can in many cases be described by an EFT with a reduced number of dynamical degrees of freedom. This feature presents clear advantages in terms of BSM searches, since a general-enough EFT could describe a plethora of different UV completions. The success of the Standard Model in describing the physics explored at the energy scales accessible by current experiments justifies the use of an EFT based upon it, which is known as the SMEFT [1; 24], to search for BSM physics in a model-independent manner. To ensure that the UV theory and the low-energy EFT predict the same observables in the energy range where the EFT is valid, the Wilson coefficients of the latter must be computed in terms of the parameters of the UV theory. This procedure is called matching. Matching an EFT, such as the SMEFT, with the associated UV-complete model can be performed by means of two well-established techniques, as well as with another recently developed method based on on-shell amplitudes [15]. The first of these matching techniques is known as the functional method and is based on the manipulation of the path integral, the action, and the Lagrangian [2; 5; 7; 8; 11; 25]. It requires to specify the UV-complete Lagrangian, the heavy fields, and the matching scale, while the EFT Lagrangian is part of the result although not necessarily in the desired basis. The second technique is the diagrammatic method, based on equating off-shell Green's functions computed in both the EFT and the UV model, and therefore it requires the explicit form of both Lagrangians from the onset [3; 12]. Both methods provide the same final results and allow for both tree-level and one-loop matching computations. The automation of this matching procedure up to the one-loop level is mostly solved in the case of the diagrammatic technique [12], and is well advanced in the functional method case [4; 14]. Let us illustrate the core ideas underlying this procedure by reviewing the matching to the SMEFT at tree and one-loop level of a specific benchmark UV-complete model. This is taken to be the single-particle extension of the SM resulting from adding a new heavy scalar boson, \(\phi\). 
This scalar transforms under the SM gauge group in the same manner as the Higgs boson, i.e. \(\phi\sim\left(1,2\right)_{1/2}\), where we denote the irreducible representations under the SM gauge group as \((\mathrm{SU}(3)_{\mathrm{c}},\mathrm{SU}(2)_{\mathrm{L}})_{\mathrm{U}(1)_{ \mathrm{V}}}\). Following the notation of [3], the Lagrangian of this model reads \[\begin{split}\mathcal{L}_{\mathrm{UV}}=&\mathcal{L}_ {\mathrm{SM}}+|D_{\mu}\phi|^{2}-m_{\phi}^{2}\phi^{\dagger}\phi-\left((y_{ \phi}^{c})_{ij}\,\phi^{\dagger}\bar{e}_{R}^{i}\ell_{L}^{j}+(y_{\phi}^{d})_{ij} \,\phi^{\dagger}\bar{d}_{R}^{i}q_{L}^{j}\right.\\ &\left.+(y_{\phi}^{u})_{ij}\,\phi^{\dagger}i\sigma_{2}\bar{q}_{L}^ {T,i}u_{R}^{j}+\lambda_{\phi}\,\phi^{\dagger}\varphi|\varphi|^{2}+\mathrm{h.c. }\right)-\mathrm{scalar\ potential}\,,\end{split} \tag{1}\] with \(\mathcal{L}_{\mathrm{SM}}\) being the SM Lagrangian and \(\varphi\) the SM Higgs doublet. We do not write down explicitly the complete form of the scalar potential in Eq. (1), of which \(\lambda_{\phi}\,\phi^{\dagger}\varphi|\varphi|^{2}\) is one of the components, since it has no further effect on the matching outcome as long as it leads to an expectation value satisfying \(\langle\phi\rangle=0\), such that \(m_{\phi}^{2}>0\) corresponds to the pole mass. This heavy doublet \(\phi\) interacts with the SM fields via the Yukawa couplings \((y_{\phi}^{u,d,e})_{ij}\), the scalar coupling \(\lambda_{\phi}\), and the electroweak gauge couplings. In the following, we consider as "UV couplings" exclusively those couplings between UV and SM particles that are not gauge couplings. The model described by Eq. (1) corresponds to the two-Higgs doublet model (2HDM) in the decoupling limit [26; 27]. For simplicity, we assume that all the couplings between the SM and the heavy particle are real and satisfy \((y_{\phi}^{\psi})_{ij}=\delta_{i,3}\delta_{j,3}\,(y_{\phi}^{\psi})_{33}\) for \(\psi=u,\,d,\,e\), and the only SM Yukawa couplings that we consider as non-vanishing are the ones of the third-generation fermions. ### Tree-level matching The matching of UV-complete models to dimension-6 SMEFT operators at tree level has been fully tackled in [3], which considers all possible UV-completions with particles of spin up to \(s=1\) generating non-trivial Wilson coefficients. These results can be reproduced with the automated codes MatchingTools[28] and MatchMakereFT[12] based on the diagramatic approach. At tree level, the diagrammatic method requires computing the tree-level Feynman diagrams contributing to multi-point Green's functions with only light external particles. Then, the covariant propagators \(\Delta_{i}\) must be expanded to a given order in inverse powers of the heavy masses. The computation of the Feynman diagrams in the EFT can be performed in a user-defined operator basis. The outcome of matching the model defined by the Lagrangian in Eq. (1) to the SMEFT at tree level is provided in [3]. Table 2.1 summarizes the dimension-6 operators in the Warsaw basis [29] generated by both tree-level (in blue) and one-loop (in black) matching. 
A representative subset of the resulting \begin{table} \begin{tabular}{c|c} \hline \hline Operator type & Generated operators \\ \hline \(X^{3}\) & \(\overline{\mathcal{O}}_{W}\) \\ \(\varphi^{6}\) & \(\overline{\mathcal{O}}_{\varphi}\) \\ \(\varphi^{4}D^{2}\) & \(\overline{\mathcal{O}}_{\varphi\Box}\),\(\overline{\mathcal{O}}_{\varphi D}\) \\ \(\psi^{2}\varphi^{3}\) & \(\overline{\mathcal{O}}_{u\varphi}\), \(\overline{\mathcal{O}}_{d\varphi}\), \(\overline{\mathcal{O}}_{e\varphi}\) \\ \(X^{2}\varphi^{2}\) & - \\ \(\psi^{2}X\varphi\) & \(\overline{\mathcal{O}}_{dB}\), \(\overline{\mathcal{O}}_{dG}\), \(\overline{\mathcal{O}}_{dW}\), \(\overline{\mathcal{O}}_{uB}\), \(\overline{\mathcal{O}}_{uG}\), \(\overline{\mathcal{O}}_{uW}\), \(\mathcal{O}_{eB}\), \(\mathcal{O}_{eW}\) \\ \(\psi^{2}\varphi^{2}D\) & \(\overline{\mathcal{O}}_{pu}\), \(\overline{\mathcal{O}}_{\varphi d}\), \(\overline{\mathcal{O}}_{\varphi e}\), \(\overline{\mathcal{O}}_{\varphi d}^{(1)}\), \(\overline{\mathcal{O}}_{\varphi d}^{(3)}\), \(\overline{\mathcal{O}}_{\varphi q}^{(1)}\), \(\overline{\mathcal{O}}_{\varphi ud}^{(3)}\) \\ \hline \multirow{4}{*}{\(\psi^{4}\)} & \(\overline{\mathcal{O}}_{qu}^{(1)}\), \(\overline{\mathcal{O}}_{qu}^{(8)}\), \(\overline{\mathcal{O}}_{\ell e}\), \(\overline{\mathcal{O}}_{equ}^{(1)}\), \(\overline{\mathcal{O}}_{qd}^{(8)}\), \(\overline{\mathcal{O}}_{qud}^{(1)}\), \(\overline{\mathcal{O}}_{\varphi ud}\), \(\overline{\mathcal{O}}_{\ell edq}\), \\ & \(\overline{\mathcal{O}}_{uu}\), \(\overline{\mathcal{O}}_{dd}\), \(\overline{\mathcal{O}}_{\ell l}\), \(\overline{\mathcal{O}}_{d\ell}\)\(\overline{\mathcal{O}}_{ee}\), \(\overline{\mathcal{O}}_{cd}\), \(\overline{\mathcal{O}}_{eu}\), \(\overline{\mathcal{O}}_{\ell q}^{(1)}\), \(\overline{\mathcal{O}}_{\ell q}^{(3)}\), \\ & \(\overline{\mathcal{O}}_{\ell u}\), \(\overline{\mathcal{O}}_{qe}\), \(\overline{\mathcal{O}}_{qq}^{(1)}\), \(\overline{\mathcal{O}}_{qq}^{(3)}\), \(\overline{\mathcal{O}}_{quqd}^{(8)}\), \(\overline{\mathcal{O}}_{ud}^{(1)}\), \(\overline{\mathcal{O}}_{u8}^{(8)}\), \(\mathcal{O}_{\ell equ}^{(3)}\). \\ \hline \hline \end{tabular} \end{table} Table 2.1: SMEFT dimension-6 operators in the Warsaw basis generated by the integration of the heavy doublet scalar \(\phi\) from the Lagrangian defined in Eq. (1). The operators generated at tree level are highlighted in blue, while the operators that appear at one-loop level are in black. We assume that the non-vanishing couplings of this heavy particle with the SM are \(\lambda_{\phi}\) and \((y_{\phi}^{\psi})_{ij}=\delta_{i,3}\delta_{j,3}\,(y_{\phi}^{\psi})_{33}\) for \(\psi=u,\,d,\,e\). The operators with a bar over their name survive the further constraint of \((y_{\phi}^{e})_{33}=(y_{\phi}^{d})_{33}=0\), required to fulfil the SMEFiT flavour assumption at tree level. We take all the SM Yukawa couplings and masses to be zero, except those for the third-generation fermions. 
tree-level matching expressions is given by \[\frac{\left(c_{qd}^{(1)}\right)_{3333}}{\Lambda^{2}}=-\frac{\left(y_{\phi}^{d}\right)_{33}^{2}}{6\,m_{\phi}^{2}},\quad\frac{\left(c_{qd}^{(8)}\right)_{3333}}{\Lambda^{2}}=-\frac{\left(y_{\phi}^{d}\right)_{33}^{2}}{m_{\phi}^{2}},\quad\frac{\left(c_{d\varphi}\right)_{33}}{\Lambda^{2}}=\frac{\lambda_{\phi}\,\left(y_{\phi}^{d}\right)_{33}}{m_{\phi}^{2}},\quad\frac{c_{\varphi}}{\Lambda^{2}}=\frac{\lambda_{\phi}^{2}}{m_{\phi}^{2}}, \tag{2}\] \[\frac{\left(c_{qu}^{(1)}\right)_{3333}}{\Lambda^{2}}=-\frac{\left(y_{\phi}^{u}\right)_{33}^{2}}{6\,m_{\phi}^{2}},\quad\frac{\left(c_{qu}^{(8)}\right)_{3333}}{\Lambda^{2}}=-\frac{\left(y_{\phi}^{u}\right)_{33}^{2}}{m_{\phi}^{2}},\quad\frac{\left(c_{u\varphi}\right)_{33}}{\Lambda^{2}}=-\frac{\lambda_{\phi}\,\left(y_{\phi}^{u}\right)_{33}}{m_{\phi}^{2}},\quad\frac{\left(c_{\varphi q}^{(3)}\right)_{33}}{\Lambda^{2}}=0.\] Eq. (2) showcases the kinds of constraints on the EFT coefficients that tree-level matching can generate. First of all, the relations between UV couplings and Wilson coefficients will in general be non-linear. Second, some coefficients such as \(\left(c_{\varphi q}^{(3)}\right)_{33}\) are set to zero by the matching relations. Third, other coefficients acquire a well-defined sign, such as \(c_{\varphi}\) \(\left(\left(c_{qd}^{(1)}\right)_{3333}\right)\), which becomes positive-definite (negative-definite) after matching. Fourth, several EFT coefficients become related among themselves by means of both linear and non-linear relations such as \[\left(c_{qu}^{(8)}\right)_{3333}=6\,\left(c_{qu}^{(1)}\right)_{3333}, \tag{3}\] \[\frac{\left(c_{qd}^{(1)}\right)_{3333}}{\left(c_{qu}^{(1)}\right)_{3333}}=\left(\frac{\left(c_{d\varphi}\right)_{33}}{\left(c_{u\varphi}\right)_{33}}\right)^{2}. \tag{4}\] These relations must be taken into account when performing the EFT fit to the experimental data, using the dedicated techniques discussed in Sect. 3 and App. A. When considering multi-particle UV scenarios, rather than single-particle extensions such as the model defined by Eq. (1), non-vanishing EFT coefficients generally consist of the sum of several rational terms. For example, assume that one adds to the model of Eq. (1) a second heavy scalar with gauge charges \(\Phi\sim\left(8,2\right)_{1/2}\) and with mass \(m_{\Phi}\) which couples to the SM fields by means of \[\mathcal{L}_{\text{UV}}\supset-\left(y_{\Phi}^{qu}\right)_{ij}\,\Phi^{A\dagger}\,i\sigma_{2}\,\bar{q}_{L,i}^{T}T^{A}u_{R,j}+\text{h.c.}\,. \tag{5}\] Integrating out this additional heavy scalar field modifies two of the tree-level matching relations listed in Eq. (2) as follows \[\frac{\left(c_{qu}^{(1)}\right)_{3333}}{\Lambda^{2}}=-\frac{\left(y_{\phi}^{u}\right)_{33}^{2}}{6\,m_{\phi}^{2}}-\frac{2\left(y_{\Phi}^{qu}\right)_{33}^{2}}{9\,m_{\Phi}^{2}}\,,\quad\frac{\left(c_{qu}^{(8)}\right)_{3333}}{\Lambda^{2}}=-\frac{\left(y_{\phi}^{u}\right)_{33}^{2}}{m_{\phi}^{2}}+\frac{\left(y_{\Phi}^{qu}\right)_{33}^{2}}{6\,m_{\Phi}^{2}}\,. \tag{6}\] Hence, the simple linear relation Eq. (3) is not valid anymore, while Eq. (4) now becomes \[\frac{\left(c_{qd}^{(1)}\right)_{3333}}{\frac{1}{9}\left(c_{qu}^{(1)}\right)_{3333}+\frac{4}{27}\left(c_{qu}^{(8)}\right)_{3333}}=\left(\frac{\left(c_{d\varphi}\right)_{33}}{\left(c_{u\varphi}\right)_{33}}\right)^{2}, \tag{7}\] which shows that multiplicative relations might involve non-trivial linear combinations of the Wilson coefficients.
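To make the structure of the single-doublet tree-level relations concrete, the following minimal Python sketch evaluates Eq. (2) at an assumed benchmark point and checks the correlations of Eqs. (3) and (4); the function and variable names, the benchmark values, and the choice \(\Lambda=m_{\phi}\) are ours for illustration only and are not part of MatchMakerEFT, match2fit, or SMEFiT.

```python
import numpy as np

def tree_level_wcs(y_u33, y_d33, lam_phi, m_phi):
    """Tree-level matching relations of Eq. (2) for the heavy scalar doublet,
    returning c/Lambda^2 in units of 1/m_phi^2 (we set Lambda = m_phi)."""
    c = {
        "cqd1_3333": -y_d33**2 / 6.0,
        "cqd8_3333": -y_d33**2,
        "cdphi_33":   lam_phi * y_d33,
        "cphi":       lam_phi**2,
        "cqu1_3333": -y_u33**2 / 6.0,
        "cqu8_3333": -y_u33**2,
        "cuphi_33":  -lam_phi * y_u33,
        "cphiq3_33":  0.0,
    }
    return {name: value / m_phi**2 for name, value in c.items()}

# Assumed benchmark point (illustration only), with masses in TeV
wc = tree_level_wcs(y_u33=1.2, y_d33=0.4, lam_phi=0.8, m_phi=2.0)

# The correlations of Eqs. (3) and (4) hold by construction:
assert np.isclose(wc["cqu8_3333"], 6.0 * wc["cqu1_3333"])
assert np.isclose(wc["cqd1_3333"] / wc["cqu1_3333"],
                  (wc["cdphi_33"] / wc["cuphi_33"]) ** 2)
```

Since all coefficients depend on the UV couplings only through their squares and the products \(\lambda_{\phi}(y_{\phi}^{d})_{33}\) and \(\lambda_{\phi}(y_{\phi}^{u})_{33}\), flipping the sign of all couplings simultaneously leaves the output unchanged, anticipating the discussion of UV invariants in Sect. 2.3.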
In this context, we observe that many of the conditions on the EFT coefficients imposed by assuming a certain UV completion are non-linear and hence the resulting posterior distributions inferred from the data will in general be non-Gaussian. The tree-level matching results discussed up to now do not comply with the flavour symmetry adopted by the current SMEFIT analysis, namely U(2)\({}_{q}\times\)U(2)\({}_{u}\times\)U(3)\({}_{d}\times(\)U(1)\({}_{\ell}\times\)U(1)\({}_{e})^{3}\). This would cause ambiguities at the moment of performing the fit, since for example SMEFit assumes that the coefficient \(\left(c_{qd}^{(1)}\right)_{33ii}\) has the same value for \(i=1,\,2,\,3\), while the matching result instead gives a non-vanishing coefficient only for \(i=3\). In this specific case, the appropriate flavour symmetry used at the EFT fit level can be respected after tree-level matching by further imposing \[\left(y_{\phi}^{e}\right)_{33}=\left(y_{\phi}^{d}\right)_{33}=0\,, \tag{8}\] and leaving \(\lambda_{\phi}\) and \(\left(y_{\phi}^{u}\right)_{33}\) as the only non-vanishing UV couplings, as we will assume in the rest of this work unless otherwise specified. Notice that this implies that the heavy new particle interacts only with the Higgs boson and the top quark, a common situation in well-motivated UV models. The operators that remain after this additional restriction is imposed are indicated with a bar in Table 1. Once this flavour symmetry is imposed, one can map unambiguously the naming of the operators and EFT coefficients provided by the MatchMakereFT output to the ones defined in SMEFiT. The non-vanishing coefficients after integrating out the heavy scalar \(\phi\) at tree level are then \[c_{\varphi}=c_{\varphi},\quad c_{t\varphi}=\left(c_{u\varphi}\right)_{33}, \quad c_{Qt}^{(1)}=\left(c_{qu}^{(1)}\right)_{3333},\quad c_{Qt}^{(8)}=\left(c _{qu}^{(8)}\right)_{3333}, \tag{9}\] where in the l.h.s. of the equalities we use the SMEFiT convention and on the r.h.s the Warsaw convention adopted in the matching output [3; 12]. ### One-loop matching Extending the diagrammatic matching technique to the one-loop case is conceptually straightforward, and requires the computation of one-loop diagrams in the UV model with off-shell external light particles and at least one heavy-particle propagator inside the loop. From the EFT side, diagrammatic matching at one-loop involves the calculation of the diagrams with the so-called Green's basis, which includes also those operators that are redundant by equatoions of motion (EoMs). The dimension-6 and dimension-8 Green's bases in the SMEFT have been computed in [30; 6; 31]. Further technicalities such as evanescent operators acquire special relevance at this order [13]. The automation of one-loop matching with the diagrammatic technique is provided by MatchMakereFT [12] for any renormalisable UV model with heavy scalar bosons and spin-1/2 fermions. The equivalent automation applicable to models containing heavy spin-1 bosons is work in progress. In this work we use MatchMakereFT (v1.1.3) to evaluate the one-loop matching of a selected UV model. When applied to the Lagrangian of Eq. (1) with the SMEFiT flavour assumptions, this procedure generates additional non-vanishing operators in comparison to those arising at tree level, as indicated in Table 1, where we also show operators generated in the more general case of \[\left(y_{\phi}^{\psi}\right)_{ij}=\delta_{i,3}\delta_{j,3}\,\left(y_{\phi}^{ \psi}\right)_{33}\,,\qquad\psi=e,u,d\,. 
\tag{10}\] Notice that setting \(\left(y_{\phi}^{e}\right)_{33}=\left(y_{\phi}^{d}\right)_{33}=0\) has a smaller impact on the number of loop-generated operators than for the tree-level case. The reason is that, in many cases, the operators are still generated via SM-gauge couplings so the EFT coefficients take the generic form \(c\sim g_{\text{SM}}^{4}/16\pi^{2}\). These contributions are typically flavour-universal and if they break the desired flavour symmetry, the breaking is suppressed by the SM gauge couplings and/or the loop factor. For this reason, in this work we only enforce the SMEFiT flavour symmetry for tree-level matching. An example of the results of one-loop matching corrections to the EFT coefficients is provided for \(c_{Qt}^{(8)}\), for which the tree-level matching relation in Eq. (2) is now extended as follows \[\frac{c_{Qt}^{(8)}}{\Lambda^{2}} = -\frac{\left(y_{\phi}^{u}\right)_{33}^{2}}{m_{\phi}^{2}}-\left[ \frac{25g_{1}^{2}}{1152\pi^{2}}+\frac{3g_{2}^{2}}{128\pi^{2}}-\frac{3\left(y_{ t}^{\text{SM}}\right)^{2}}{16\pi^{2}}+\frac{g_{3}^{2}}{16\pi^{2}}\left(1-\log \left(\frac{m_{\phi}^{2}}{\mu^{2}}\right)\right)\right]\frac{\left(y_{\phi}^{u }\right)_{33}^{2}}{m_{\phi}^{2}} \tag{11}\] \[+\frac{3}{64\pi^{2}}\left[1-2\log\left(\frac{m_{\phi}^{2}}{\mu^{ 2}}\right)\right]\frac{\left(y_{\phi}^{u}\right)_{33}^{4}}{m_{\phi}^{2}}\,,\] with \(\mu\) being the matching scale, \(g_{i}\) the SM gauge coupling constants, and \(y_{t}^{\rm SM}\) the top Yukawa coupling in the SM. To estimate of the numerical impact of loop corrections to matching in the specific case of Eq. (11), one can substitute the corresponding values of the SM couplings. One finds that the term proportional to \(\left(y_{\phi}^{u}\right)_{33}^{2}\) receives a correction at the few-percent level from one-loop matching effects. The logarithms in the matching scale \(\mu\) appearing in Eq. (11) are generated by the running of the couplings and Wilson coefficients between the heavy particle mass \(m_{\phi}\) and \(\mu\), as further discussed in App. C. Since here we neglect RGE effects [32], we simplify the matching expressions by choosing the scale \(\mu\) to be equal to the mass of the integrated-out UV heavy field such that the logarithms vanish. As compared to the tree-level matching relations, common features of the one-loop contributions are the appearance of terms proportional to the UV couplings at the fourth order and the presence of the SM gauge couplings. Here we assume that the latter have fixed numerical values determined by other measurements i.e. the PDG averages [33]. This implies that at the fit level some of the EFT coefficients are specified entirely by the mass of the UV heavy particle, such as for the dimension-six triple SU(2) field strength tensor operator for the considered model \[\frac{c_{WWW}}{\Lambda^{2}}=\frac{g_{2}^{3}}{5760\pi^{2}\,m_{\phi}^{2}}\,. \tag{12}\] While one may expect that one-loop matching relations such as Eq. (12), which depend only on the gauge couplings of the heavy particle and its mass, provide useful sensitivity to the heavy mass \(m_{\phi}\), we have verified that this is in practice very weak and hence not competitive. 
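As a rough numerical cross-check of the size of these effects, the sketch below evaluates Eq. (11) at \(\mu=m_{\phi}\) and Eq. (12) using representative values for the SM couplings around the TeV scale (our assumed inputs, \(g_{1}\simeq 0.36\), \(g_{2}\simeq 0.65\), \(g_{3}\simeq 1.22\), \(y_{t}^{\rm SM}\simeq 0.95\), rather than the precise PDG averages used in the actual fit); it is an illustration, not the implementation used in the analysis.

```python
import numpy as np

# Assumed, representative SM couplings (the fit itself uses PDG averages)
g1, g2, g3, yt = 0.36, 0.65, 1.22, 0.95
loop = 1.0 / (16 * np.pi**2)

def cQt8_over_L2(y_u33, m_phi, mu=None):
    """Eq. (11): c_Qt^(8)/Lambda^2 (in TeV^-2 if m_phi is in TeV);
    by default mu = m_phi, so the matching logarithms vanish."""
    mu = m_phi if mu is None else mu
    L = np.log(m_phi**2 / mu**2)
    c_tree = -y_u33**2
    c_y2 = -(25*g1**2/(1152*np.pi**2) + 3*g2**2/(128*np.pi**2)
             - 3*yt**2*loop + g3**2*loop*(1 - L)) * y_u33**2
    c_y4 = 3/(64*np.pi**2) * (1 - 2*L) * y_u33**4
    return (c_tree + c_y2 + c_y4) / m_phi**2

def cWWW_over_L2(m_phi):
    """Eq. (12): fixed entirely by g2 and the heavy mass."""
    return g2**3 / (5760 * np.pi**2 * m_phi**2)

# Assumed benchmark: (y_phi^u)_33 = 1 and m_phi = 2 TeV
print(cQt8_over_L2(1.0, 2.0))   # tree-level value of -0.25 shifted at the percent level
print(cWWW_over_L2(2.0))        # ~1e-6 TeV^-2, illustrating the weak sensitivity
```

For these assumed inputs the loop pieces shift the tree-level value of \(c_{Qt}^{(8)}\) by roughly one percent, consistent with the estimate quoted above, while \(c_{WWW}\) comes out orders of magnitude below typical fit sensitivities, in line with the statement that such purely gauge-induced relations are not competitive.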
It is also possible to find EFT coefficients that are matched to the sum of a piece depending only on SM couplings and the UV mass and a piece proportional to the UV couplings, e.g., \[\frac{c_{\varphi Q}^{(3)}}{\Lambda^{2}}=-\frac{g_{2}^{4}}{3840\pi^{2}m_{\phi} ^{2}}-\frac{\left(y_{t}^{\rm SM}\right)^{2}\left(y_{\phi}^{u}\right)_{33}^{2} }{192\pi^{2}m_{\phi}^{2}}+\frac{g_{2}^{2}\left(y_{\phi}^{u}\right)_{33}^{2}}{1 152\pi^{2}m_{\phi}^{3}}. \tag{13}\] This kind of relations could in principle favour non-vanishing UV couplings even when the EFT coefficients are very tightly constrained, provided that the gauge-coupling term is of similar size to the other terms in the matching relation. As for tree-level matching, exemplified by Eqns. (3) and (4), at the one-loop level one can also automatically evaluate linear relations among the Wilson coefficients, though the analogous result for non-linear relations is more challenging. Nevertheless, as discussed in Sect. 3, in this work we carry out the fit directly at the level of UV couplings and hence the constraints between different coefficients are provided implicitly by the matching relations, rather than directly as explicit restrictions in a fit of EFT coefficients. ### UV invariants and SMEFT matching The inverse problem to matching lies at the heart of the interpretation of the SMEFT global fit results in terms of UV couplings and masses. This inverse problem is known to be plagued by degeneracies since many UV completions could yield the same EFT coefficients up to a fixed mass dimension [34]. Matching relations define a mapping \(f\) from \(U\), the parameter space spanned by the UV couplings \(\mathbf{g}\), to \(W\), the space spanned by the Wilson coefficients \(\mathbf{c}\), \[f:U\to W\,. \tag{14}\] The previous discussion indicates that the matching relation \(f\) associated to UV-models such as the one defined by Eq. (1) is in general non-injective and hence non-invertible. Therefore, even choosing a particular UV model does not lift completely these degeneracies. They might be partially or fully removed by either matching at higher loop orders or considering higher-dimensional operators. In particular, dimension-8 operators offer advantages to disentangle UV models [34], but their study is beyond the scope of this work. Since the fit is, at best, only sensitive to \(f(\mathbf{g})\), one can only meaningfully discriminate UV parameters \(\mathbf{g}\) that map to different points in the EFT parameter space \(W\) under the matching relation \(f\). Thus, we define "UV invariants" as those combinations of UV parameters such that \(f\) remains invariant under a mapping \(h\), defined as \[h:U\to I, \tag{15}\] such that \(f(h(\mathbf{g}))=f(h(\mathbf{g}^{\prime}))\). We denote by \(I\) the space of UV invariants that is now bijective with \(W\) under \(f\) and that contains all the information that we can extract about the UV couplings by measuring \(f\) from experimental data in the global EFT fit. To illustrate the role of UV invariants, we consider again the tree-level matching relations for our benchmark model Eq. (1) given by Eq. (2). 
Expressing the UV couplings in terms of the EFT coefficients leads to two different solutions, \[\left(y_{\phi}^{d}\right)_{33}=-\sqrt{-\left(c_{qd}^{(8)}\right)_{3333}\frac{m _{\phi}}{\Lambda}},\quad\left(y_{\phi}^{u}\right)_{33}=-\frac{c_{t\varphi}}{c _{b\varphi}}\sqrt{-\left(c_{qd}^{(8)}\right)_{3333}\frac{m_{\phi}}{\Lambda}}, \quad\lambda_{\phi}=\frac{c_{b\varphi}}{\sqrt{-\left(c_{qd}^{(8)}\right)_{333 3}}}\frac{m_{\phi}}{\Lambda}, \tag{16}\] and \[\left(y_{\phi}^{d}\right)_{33}=\sqrt{-\left(c_{qd}^{(8)}\right)_{3333}\frac{m _{\phi}}{\Lambda}},\quad\left(y_{\phi}^{u}\right)_{33}=\frac{c_{t\varphi}}{c _{b\varphi}}\sqrt{-\left(c_{qd}^{(8)}\right)_{3333}\frac{m_{\phi}}{\Lambda}}, \quad\lambda_{\phi}=-\frac{c_{b\varphi}}{\sqrt{\left(c_{qd}^{(8)}\right)_{333 3}}}\frac{m_{\phi}}{\Lambda}, \tag{17}\] where we have omitted the flavour indices of the coefficients for clarity. The resulting sign ambiguity stems from the sensitivity to the sign of only two products of the three UV couplings. Other non-vanishing EFT coefficients do not enter these solutions since they are related to the present ones via linear or non-linear relations. Eqns. (16) and (17) hence indicate that the EFT fit is not sensitive to the sign of these UV couplings for this specific model. The sought-for mapping \(h\) between the original UV couplings \(\left(\left(y_{\phi}^{d}\right)_{33},\left(y_{\phi}^{u}\right)_{33},\lambda_{ \phi}\right)\) and the UV invariants which can be meaningfully constrained by the global EFT fit is given by \[h:\left(\left(y_{\phi}^{d}\right)_{33},\left(y_{\phi}^{u}\right)_{33},\lambda _{\phi}\right)\mapsto\left(\left|\left(y_{\phi}^{u}\right)_{33}\right|,\, \lambda_{\phi}\,\mathrm{sgn}\left(\left(y_{\phi}^{u}\right)_{33}\right),\, \left(y_{\phi}^{d}\right)_{33}\,\mathrm{sgn}\left(\lambda_{\phi}\right) \right), \tag{18}\] with \(\mathrm{sgn}(x)\) being the sign function. Notice the degree of arbitrariness present in this construction, since for example one could have also chosen \(\left(\left|\lambda_{\phi}\right|,\,\left(y_{\phi}^{u}\right)_{33}\,\mathrm{ sgn}\left(\lambda_{\phi}\right)\right)\) instead of the first two invariants of Eq. (18). This simple example displays only a sign ambiguity, but one could be unable to distinguish two UV couplings altogether since, e.g. they always appear multiplying each other. The match2fit package automates the computation of the transformations \(h\) defining the UV-invariants for the models at the tree-level matching level, see App. B for more details. Furthermore, one can also illustrate the concept of UV-invariants by fitting the heavy scalar doublet model of Eq. (1) to the experimental data by following the procedure which will be outlined in Sect. 3. Fig. 2.1 shows the resulting marginalised posterior distributions in the space \(U\) of UV parameters \(\left(\left(y_{\phi}^{u}\right)_{33},\lambda_{\phi}\right)\) and in the space \(I\) of UV invariants \(\left(\left|\left(y_{\phi}^{u}\right)_{33}\right|,\,\lambda_{\phi}\,\mathrm{ sgn}\left(\left(y_{\phi}^{u}\right)_{33}\right)\right)\). The red points indicate two different sets of UV couplings in \(U\), \(\mathbf{g}\neq\mathbf{g}^{\prime}\), that are mapped to the same point in \(I\) upon the transformation \(h\). Presenting results at the level of UV-invariants has the benefit of making explicit the symmetries and relations between UV-couplings that may be hidden otherwise if one presents results in the UV parameters space \(U\). 
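A compact way to see this degeneracy is sketched below: two UV points related by a simultaneous sign flip of all couplings give identical tree-level Wilson coefficients and identical UV invariants under the mapping \(h\) of Eq. (18). The helper functions, the benchmark values, and the assumed 2 TeV mass are our own illustrative choices, independent of the match2fit implementation.

```python
import numpy as np

def wilson_tree(y_d33, y_u33, lam_phi, m_phi=2.0):
    """Representative tree-level matching relations from Eq. (2),
    in units of 1/m_phi^2 (m_phi = 2 TeV is an assumed benchmark)."""
    return np.array([
        -y_d33**2,          # (c_qd^(8))_3333
        lam_phi * y_d33,    # (c_dphi)_33
        -lam_phi * y_u33,   # (c_uphi)_33
        -y_u33**2 / 6.0,    # (c_qu^(1))_3333
        lam_phi**2,         # c_phi
    ]) / m_phi**2

def uv_invariants(y_d33, y_u33, lam_phi):
    """The mapping h of Eq. (18): the combinations of UV couplings that the
    tree-level matching relations, and hence the fit, can actually distinguish."""
    return (abs(y_u33),
            lam_phi * np.sign(y_u33),
            y_d33 * np.sign(lam_phi))

# Two UV points related by a global sign flip of (y_d, y_u, lambda) ...
g, gp = (0.4, 1.2, 0.8), (-0.4, -1.2, -0.8)

# ... give identical Wilson coefficients and identical UV invariants:
assert np.allclose(wilson_tree(*g), wilson_tree(*gp))
assert np.allclose(uv_invariants(*g), uv_invariants(*gp))
```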
It is worth stressing that UV invariants do not necessarily correspond to combinations of UV parameters that one can constrain in a fit. Rather, they represent what can be said about the UV couplings from given values for the WCs and hence serve to map out the UV parameter space such that no redundant information is shown. ## 3 Implementation in SMEFit Here we describe how the SMEFit global analysis framework [19; 20; 21; 22; 23] has been extended to operate, in addition to at the level of Wilson coefficients, directly at the level of the parameters of UV-complete models. We also present the baseline SMEFT analysis which will be used in Sect. 4 to constrain these UV parameters, reporting on a number of improvements as compared to its most recent version presented in [23], in particular concerning the implementation of precision electroweak observables (EWPOs) from the LEP legacy measurements. App. A provides additional details about the SMEFiT functionalities and this baseline EFT global fit. Assume a UV-complete model defined by the Lagrangian \(\mathcal{L}_{\rm UV}(\mathbf{g})\) which contains \(n_{\rm uv}\) free parameters \(\mathbf{g}\). Provided that this model has the SM as its low-energy limit with linearly realised electroweak symmetry breaking, one can derive matching relations between the SMEFT coefficients and the UV couplings of the form \(\mathbf{c}=\mathbf{f}(\mathbf{g},\mu)\) for a given choice of the matching scale \(\mu\) as discussed in Sect. 2. Once these matching conditions are evaluated, the EFT cross-sections \(\sigma(\mathbf{c})\) entering the fit can be expressed in terms of the UV couplings and masses \(\sigma(\mathbf{f}(\mathbf{g},\mu))\). By doing so, one ends up with the likelihood function \(L\) now expressed in terms of UV couplings, \(L(\mathbf{g})\). Bayesian sampling techniques can now be applied directly to \(L(\mathbf{g})\), assuming a given prior distribution of the UV coupling space, in the same manner as for the fit of EFT coefficients. The current release of SMEFiT enables the user to impose these matching conditions \(\mathbf{c}=\mathbf{f}(\mathbf{g},\mu)\) via run cards thanks to its support for a wide class of different constraints on the fit parameters, see App. A. The code applies the required substitutions on the theoretical predictions for the observables entering the fit automatically. Additionally, the availability of Bayesian sampling means that the functional relationship between the likelihood function \(L\) and the fitted parameters \(\mathbf{g}\) is unrestricted. In order to carry out parameter inference directly at the level of UV couplings within SMEFiT, three main ingredients are required: * First, the matching relations \(\mathbf{f}\) between the parameters of the UV Lagrangian \(\mathcal{L}_{\rm UV}(\mathbf{g})\) and the EFT Wilson coefficients, \(\mathbf{c}=\mathbf{f}(\mathbf{g})\) in the Warsaw basis used in the fit. As discussed in Sect. 2, this step can be achieved automatically both for tree-level and for one-loop matching relations by using MatchMakerEFT[12]. Other matching frameworks may also be used in this step. Figure 2.1: Left: marginalised posterior distributions in the space \(U\) of UV parameters \(\left(\left(y_{\phi}^{u}\right)_{33},\lambda_{\phi}\right)\) in the heavy scalar doublet model given by Eq. (1) fitted to the data according to the procedure of Sect. 3. 
Right: the same results represented in the space \(I\) of UV invariants \(\left(\left|\left(y_{\phi}^{u}\right)_{33}\right|,\,\lambda_{\phi}\,\mathrm{ sgn}(\left(y_{\phi}^{u}\right)_{33}\right)\right)\). The red points indicate two different sets of UV couplings in \(U\) that are mapped to the same point in \(I\) upon the transformation \(h\). Blue (orange) points indicate positive (negative) values of the UV-invariant \(\lambda_{\phi}\,\mathrm{sgn}\!\left(\left(y_{\phi}^{u}\right)_{33}\right)\). * Second, the conversion between the output of the matching code, MatchMakerEFT in our case, and the input required for the SMEFiT run cards specifying the dependence of the Wilson coefficients \(\mathbf{c}\) on the UV couplings \(\mathbf{g}\) such that the replacement \(\sigma(\mathbf{c})\to\sigma(\mathbf{f}(\mathbf{g}))\) on the physical observables entering the fit can be implemented. For this we use the new Mathematica package match2fit summarised in App. B. The automation of this step is currently limited to tree-level matching results. SMEFiT then performs the replacements \(\sigma(\mathbf{c})\to\sigma(\mathbf{f}(\mathbf{g}))\) specified by the runcards. * Third, a choice of prior volume in the space \(U\) spanned by the UV couplings \(\mathbf{g}\) entering the fit. In this work, we assume a flat prior on the UV parameters \(\mathbf{g}\) and verify that results are stable with respect to variations of this prior volume. We note that, for the typical (polynomial) matching relations between UV couplings and EFT coefficients, this choice of prior implies non-trivial forms for the priors on the space of the latter. This observation is an important motivation to support the choice of fitting directly at the level of UV couplings. Once these ingredients are provided, SMEFiT performs the global fit by comparing EFT theory predictions with experimental data and returning the posterior probability distributions on the space of UV couplings \(\mathbf{g}\) or any combination thereof. Fig. 3.1 displays a schematic representation of the pipeline adopted in this work to map in an automated manner the parameter space of UV-complete models using the SMEFT as a bridge to the data and based on the combination of three tools: MatchMakerEFT to derive the matching relations, match2fit to transform its output into the SMEFiT-compliant format, and SMEFiT to infer from the data bounds on the UV coupling space. Concerning the UV-invariants introduced in Sect. 2.3, we should note here that depending on the specific UV-complete model, one might be able to constrain only the absolute value of certain UV parameters or of their product. For this reason, here we will display results mostly at the level of UV-invariants \(\mathbf{I}_{\text{UV}}(\mathbf{g})\) determined from the matching relations. We will find that certain UV-invariants may nevertheless remain unconstrained, due to the lack of sensitivity to specific EFT coefficients in the fitted data. In any case, the user can easily define arbitrary combinations of the UV parameters to be fitted to the data. The baseline global SMEFiT analysis adopted in this work to constrain the parameter space of UV-complete models is based on the analysis presented in [23], which in turn updated the global SMEFT Figure 3.1: Schematic representation of the pipeline adopted in this work to map the parameter space of UV-complete models using the SMEFT as a bridge to the data. 
The starting point is a UV Lagrangian containing a number of free parameters \(\mathbf{g}\) such as its masses and coupling constants, for which a flat prior is assumed. We then determine the matching relations between the UV parameters \(\mathbf{g}\) and the corresponding EFT coefficients \(\mathbf{c}\) at the matching scale \(\mu\) using MatchMakerEFT. Then the match2fit interface enables expressing cross-sections for processes entering the EFT in terms of the UV parameters \(\mathbf{g}\). Finally, these UV parameters are constrained from the data using the sampling methods of SMEFiT applied to the figure of merit \(\chi^{2}(\mathbf{g})\) evaluated on a global dataset. analysis of Higgs, top quark, and diboson data from [22]. In comparison with [23], a significant upgrade is the new implementation in the fit of the information provided by the legacy EWPOs [35] from LEP and SLC. Previously, the constraints provided by these EWPOs in the SMEFT parameter space were accounted for in an approximate manner, assuming that the EWPOs were measured with vanishing uncertainties.1 Footnote 1: For the purposes of benchmarking with the ATLAS EFT analysis of [36], in [23] fits with EWPOs were also considered, but these were based on the exact same theory inputs as in the ATLAS paper. As a consequence of this improvement, the present global SMEFT analysis considers 14 additional Wilson coefficients \(\mathbf{c}\) with respect to the basis used in [22; 23], leading to a total of 50 independent parameters constrained from the data. Since these new degrees of freedom are constrained not only by the EWPOs, but also by LHC observables, we have recomputed all EFT cross-sections for those processes where such operators enter. As in the original analysis [22], we use mg5_aMC@NLO[37] interfaced to SMEFT@NLO[38] to evaluate linear and quadratic EFT corrections, accounting for NLO QCD perturbative corrections whenever available. A dedicated description of this new implementation of the EWPOs in SMEFiT and of their phenomenological implications for present and future colliders will be given in a forthcoming publication [39]. Figure 3.2: Marginalised single-parameter posterior probability distributions in the global SMEFT fit of [23], based on an approximate implementation of the EWPOs, compared with the baseline fit adopted in this work, which implements the EWPOs constraints in an exact manner. Note that the results labelled as “Approx EWPOs” differ slightly from those in [23] due to the use of updated theory predictions for LHC processes, see text. In both cases, the fits include quadratic EFT effects and NLO QCD corrections to the EFT cross-sections. The impact of this new exact EWPOs implementation is illustrated in Fig. 3.2, comparing the marginalised one-parameter posterior distributions obtained in the global SMEFT fit of [23], based on the approximate EWPOs implementation, with the baseline fit adopted in this work, where the EWPOs are taken into account in an exact manner. Note that the results labelled as "Approx EWPOs" differ slightly from those in [23] due to the use of updated theory predictions. In both cases, the fits include quadratic EFT effects and NLO QCD corrections to the EFT cross-sections. In general, there is good agreement at the level of Wilson coefficients between results based on the approximate and the exact implementation of the EWPOs, with some noticeable differences.
While the posterior distributions for most operators are only moderately affected by the EWPOs, their impact is visible for some of the coefficients affecting electroweak interactions such as \(c_{\varphi t}\) (where a new double-peak structure is revealed), \((c_{\ell\ell})_{1111}\) (which was previously unconstrained), \(c_{\varphi d}\), \(c_{\varphi Q}^{(3)}\), and \(c_{\varphi Q}^{(-)}\). Nevertheless, differences between the exact and approximate implementations of the EWPOs, which will be further studied in [39], are well contained within the respective 95% CL intervals.

## 4 Results

We now present the main results of this work, namely the constraints on the parameter space of a broad range of UV-complete models obtained using the SMEFiT global analysis integrated with the pipeline described in Sects. 2 and 3. We discuss the following results in turn: one-particle models matched at tree level, multi-particle models also matched at tree level, and one-particle models matched at the one-loop level. For each category of models, we study the impact both of linear and quadratic corrections in the SMEFT expansion and of the QCD accuracy. Whenever possible, we provide comparisons with related studies to validate our results. The UV models considered in this work are composed of one or more of the heavy BSM particles listed in Table 4.1, classified in terms of their spin as scalar, fermion, and vector particles. For each particle, we indicate the irreducible representation of the SM gauge group under which it transforms. These particles can couple linearly to the SM fields and hence generate dimension-6 SMEFT operators after being integrated out at tree level. The complete tree-level matching results for these particles were computed in [3], from which we adopt the notation. The only exception is the heavy scalar field \(\varphi\), which we rename \(\phi\) to be consistent with the convention used in Sect. 2. Concerning one-particle models, we include each of the particles listed in Table 4.1 one at a time, and then impose restrictions on the couplings between the UV heavy fields and the SM particles such that these models satisfy the SMEFiT flavour assumption after matching at tree level. A discussion of the UV couplings allowed for each of the considered models under this restriction can be found in App. D. We have discarded from our analysis those heavy particles considered in [3] for which we could not find a set of restrictions on their UV couplings such that they obeyed the baseline EFT fit flavour assumptions. With regard to multi-particle models, we consider combinations of the one-particle models mentioned above without additional assumptions on their UV couplings unless specified. For illustration purposes, we will present results for the custodially-symmetric model obtained by combining vector-like fermions with an SU(2)\({}_{L}\) triplet vector boson, presented in [40]. We analyse the cases of both degenerate and different heavy particle masses. An overview of which Wilson coefficients are generated by each UV particle at tree level is provided in Table 4.2 for heavy scalars and vector bosons and in Table 4.3 for heavy vector-like fermions. Black ticks indicate the EFT coefficients probed when applying the SMEFiT flavour restrictions, while red ones indicate those probed only when the flavour-universal UV couplings assumption of the FitMaker analysis [16] is used. Notice that some operators are not generated by any of the models considered in this work, e.g.
four-fermion operators with two light and two heavy quarks or with purely light quarks. This is due, in part, to the correlations among operators imposed by UV models and to the restrictions imposed by the assumed flavour symmetry. We recall that, in general, one-loop matching will introduce more coefficients as compared to the tree-level ones listed in Tables 4.2 and 4.3. Finally, we will consider the model composed of a heavy scalar doublet \(\phi\) matched to the SMEFT at the one-loop level. In this case, we impose the SMEFiT flavour restriction only at tree level. The operators generated by this model at one loop were already discussed in Sect. 2.2. In the following, confidence level (CL) intervals for the UV model parameters are evaluated as highest density intervals (HDI or HPDI) [41]. The HDI is defined as the interval such that all values inside it have a higher probability density than any value outside it. In contrast to equal-tailed intervals (ETI), which are based on quantiles, the HDI does not suffer from the drawback that some values inside the interval may have a lower probability density than values outside it. Whenever the lower bound is two orders of magnitude smaller than the width of the CL interval, we round it to zero. A short numerical illustration of this HDI construction is given below.

### One-particle models matched at tree level

Here we present results obtained from the tree-level matching of one-particle extensions of the SM. Motivated by the discussion in Sect. 2.3, we present results at the level of UV invariants, since these are the combinations of UV couplings that are in one-to-one correspondence with the Wilson coefficients under the matching relations. We study in turn the one-particle models containing heavy scalars, fermions, and vector bosons listed in Tables 4.2 and 4.3. We also present a comparison with the one-particle models considered in the FitMaker analysis [16]. Heavy scalars. The upper part of Table 4.4 shows the 95% CL intervals obtained for the UV invariants associated to the one-particle heavy scalar models considered in this work and listed in Table 4.1.
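As anticipated above, the HDI construction used to quote these intervals can be illustrated with a minimal numerical sketch. This is an illustration with a toy posterior and our own variable names, not the SMEFiT implementation: it simply scans for the narrowest window containing 95% of the sorted posterior samples.

```python
import numpy as np

def hdi(samples, level=0.95):
    """Highest density interval: the narrowest window containing a fraction `level` of the samples."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    m = int(np.ceil(level * n))          # number of samples inside the interval
    widths = x[m - 1:] - x[:n - m + 1]   # widths of all candidate windows of m consecutive samples
    i = np.argmin(widths)                # index of the narrowest window
    return x[i], x[i + m - 1]

# Toy posterior for a positive-definite UV invariant whose density peaks away from zero
rng = np.random.default_rng(0)
samples = np.abs(rng.normal(loc=0.3, scale=0.5, size=100_000))
lo, hi = hdi(samples, level=0.95)
print(f"95% HDI = [{lo:.3f}, {hi:.3f}]")   # a tiny lower edge would be quoted as 0 in the tables
```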
We compare results obtained with different settings of the global SMEFT fit: linear and quadratic level in \begin{table} \begin{tabular}{|c|c||c|c||c|c|} \hline \multicolumn{2}{|c||}{Scalars} & \multicolumn{2}{c||}{Fermions} & \multicolumn{2}{c|}{Vectors} \\ \hline Particle & Irrep & Particle & Irrep & Particle & Irrep \\ \hline \(\mathcal{S}\) & \(\left(1,1\right)_{0}\) & \(N\) & \(\left(1,1\right)_{0}\) & \(\mathcal{B}\) & \(\left(1,1\right)_{0}\) \\ \(\mathcal{S}_{1}\) & \(\left(1,1\right)_{1}\) & \(E\) & \(\left(1,1\right)_{-1}\) & \(\mathcal{B}_{1}\) & \(\left(1,1\right)_{1}\) \\ \(\phi\) & \(\left(1,2\right)_{1/2}\) & \(\Delta_{1}\) & \(\left(1,2\right)_{-1/2}\) & \(\mathcal{W}\) & \(\left(1,3\right)_{0}\) \\ \(\Xi\) & \(\left(1,3\right)_{0}\) & \(\Delta_{3}\) & \(\left(1,2\right)_{-3/2}\) & \(\mathcal{W}_{1}\) & \(\left(1,3\right)_{1}\) \\ \(\Xi_{1}\) & \(\left(1,3\right)_{1}\) & \(\Sigma\) & \(\left(1,3\right)_{0}\) & \(\mathcal{G}\) & \(\left(8,1\right)_{0}\) \\ \(\omega_{1}\) & \(\left(3,1\right)_{-1/3}\) & \(\Sigma_{1}\) & \(\left(1,3\right)_{-1}\) & \(\mathcal{H}\) & \(\left(8,3\right)_{0}\) \\ \(\omega_{4}\) & \(\left(3,1\right)_{-4/3}\) & \(U\) & \(\left(3,1\right)_{2/3}\) & \(\mathcal{Q}_{5}\) & \(\left(8,3\right)_{0}\) \\ \(\zeta\) & \(\left(3,3\right)_{-1/3}\) & \(D\) & \(\left(3,1\right)_{-1/3}\) & \(\mathcal{Y}_{5}\) & \(\left(\bar{6},2\right)_{-5/6}\) \\ \(\Omega_{1}\) & \(\left(6,1\right)_{1/3}\) & \(Q_{1}\) & \(\left(3,2\right)_{1/6}\) & & \\ \(\Omega_{4}\) & \(\left(6,1\right)_{4/3}\) & \(Q_{7}\) & \(\left(3,2\right)_{7/6}\) & & \\ \(\Upsilon\) & \(\left(6,3\right)_{1/3}\) & \(T_{1}\) & \(\left(3,3\right)_{-1/3}\) & & \\ \(\Phi\) & \(\left(8,2\right)_{1/2}\) & \(T_{2}\) & \(\left(3,3\right)_{2/3}\) & & \\ & & \(Q_{5}\) & \(\left(3,2\right)_{-5/6}\) & & \\ \hline \end{tabular} \end{table} Table 4.1: Heavy BSM particles entering the UV-complete models considered in this work. For each particle, we indicate the irreducible representation of the SM gauge group under which they transform, with notation \((\mathrm{SU(3)_{c},SU(2)_{L}})_{\mathrm{U(1)_{Y}}}\). the EFT expansion and either LO or NLO QCD accuracy for the EFT cross-sections. We exclude from Table 4.4 the results of heavy scalar model \(\phi\) that is characterised by two UV invariants. In all cases, we assume that the mass of the heavy particle is \(m_{\text{UV}}=1\) TeV, and hence all the UV invariants shown are dimensionless. The resulting bounds for these heavy scalar models are well below the naive perturbative limit \(g\lesssim 4\pi\) in most cases, and only for a few models in linear EFT fits the bounds are close to saturate the perturbative unitary condition, \(g\lesssim\sqrt{8\pi}\)[42]. Comparing the impact of linear versus quadratic corrections, we notice a significant improvement in sensitivity for the heavy scalar models \(\omega_{1}\), \(\omega_{4}\), \(\zeta\), \(\Omega_{1}\), \(\Omega_{4}\), \(\Upsilon\) and \(\Phi\). This can be explained by the fact that they generate four-fermion operators, as these are characterised by large quadratic corrections. Indeed, four-heavy operators, constrained in the EFT fit by \(t\bar{t}t\bar{t}\) and \(t\bar{t}b\bar{b}\) cross-section data, have limited sensitivity in the linear EFT fit. In the remaining models, the impact of quadratic EFT corrections on the UV-invariant bounds is small. 
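To put such bounds into perspective with respect to the perturbativity references quoted above, the following minimal sketch inverts a typical tree-level relation of the form \(c/\Lambda^{2}=g^{2}/m_{\rm UV}^{2}\) into a bound on the UV coupling. The numerical value assumed for the coefficient bound is illustrative only and does not correspond to any entry of the tables.

```python
import math

def coupling_bound(c_max, m_uv):
    """Upper bound on |g| implied by c/Lambda^2 = g^2/m_UV^2, with c_max in TeV^-2 and m_uv in TeV."""
    return math.sqrt(c_max) * m_uv

c_max = 2.0                    # hypothetical 95% CL bound on a four-fermion coefficient, in TeV^-2
for m_uv in (1.0, 2.0, 4.0):   # heavy-particle mass in TeV
    print(f"m_UV = {m_uv:.0f} TeV -> |g| < {coupling_bound(c_max, m_uv):.2f} "
          f"(sqrt(8*pi) = {math.sqrt(8 * math.pi):.2f}, 4*pi = {4 * math.pi:.2f})")
```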
Considering the impact of the QCD perturbative accuracy used for the EFT cross-sections, one notices a moderate improvement of \(\sim 10\%\) for models that are sensitive to four-fermion \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline & & \multicolumn{11}{c|}{**Heavy Scalars**} & \multicolumn{11}{c|}{**Heavy Vector Bosons**} \\ \hline & \(\mathcal{S}\) & \(\mathcal{S}_{1}\) & \(\phi\) & \(\Xi\) & \(\Xi_{1}\) & \(\omega_{1}\) & \(\omega_{4}\) & \(\zeta\) & \(\Omega_{1}\) & \(\Omega_{4}\) & \(\Upsilon\) & \(\Phi\) & \(\mathcal{B}\) & \(\mathcal{B}_{1}\) & \(\mathcal{W}\) & \(\mathcal{W}_{1}\) & \(\mathcal{G}\) & \(\mathcal{H}\) & \(\mathcal{Q}_{5}\) & \(\mathcal{Y}_{5}\) & \(\mathcal{B}_{1}\mathcal{B}\) \\ \hline \(c_{\varphi\Box}\) & ✓ & & & ✓ & ✓ & & & & & & & & & & & ✓ & ✓ & ✓ & ✓ & & & & & & & ✓ \\ \(c_{\varphi D}\) & & & & ✓ & ✓ & & & & & & & & & ✓ & ✓ & & & & & & & & & & & & & & & & & & & \\ \(c_{\tau\varphi}\) & & & & ✓ & ✓ & & & & & & & & & & ✓ & ✓ & operators with the exception of \(\omega_{1}\). The posterior distributions associated to the heavy scalar models listed in Table 4.4 are shown in Figs. 4.1 and 4.2, comparing results with different EFT expansion order and QCD accuracy respectively. To facilitate visualisation of the bulk region, the distributions are cut after the distribution has dropped to 5% of its maximum, though this choice is independent from the calculation of the CL bounds. One can see how using quadratic EFT corrections (NLO QCD cross-sections) improve significantly (moderately) the bounds on models that are sensitive to four-fermion operators for the reasons mentioned above. For all models considered, the posterior distributions indicate agreement with the SM, namely vanishing UV model couplings. Along the same lines, from Figs. 4.1 and 4.2 one also observes that the posterior distributions of the absolute values of UV couplings tend to exhibit a most likely value (mode) away from zero. This feature is not incompatible with a posterior distribution on the EFT coefficient space that favours the SM case. The reason is that when transforming the probability density function from one space to the other, one must consider the Jacobian factor that depends on the functional relation between EFT coefficients and UV couplings. For most matching relations, this Jacobian factor can generate these peaks away from zero in the UV coupling space even when the EFT fit favours the SM solution. \begin{table} \begin{tabular}{ To illustrate this somewhat counter-intuitive result, consider a toy model for the probability density of a (positive-definite) Wilson coefficient \(c\), \[P(c)=\frac{2}{\sqrt{\pi}}e^{-c^{2}}\,,\qquad\int_{0}^{\infty}dc\,P(c)=1\,, \tag{4.1}\] where the underlying law is the SM. For a typical matching condition of the form \(c=g^{2}\), (see i.e. the heavy scalar model of Eq. (2.2)), the transformed probability distribution for the "UV-invariant" \(|g|\) is \[P(|g|)=\frac{4}{\sqrt{\pi}}|g|e^{-|g|^{4}}\,,\qquad\int_{0}^{\infty}d|g|\,P(|g| )=1\,, \tag{4.2}\] which is maximised by \(g=1/\sqrt{2}\neq 0\). Hence posterior distributions in the UV coupling space favouring non-zero values do not (necessarily) indicate preference for BSM solutions in the fit. 
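The Jacobian effect described by the toy model of Eqs. (4.1) and (4.2) can be verified numerically with the short sketch below. This is a standalone illustration with our own variable names, not part of the fitting code: it samples \(c\) from the half-Gaussian of Eq. (4.1), transforms to \(|g|=\sqrt{c}\), and recovers a mode close to \(1/\sqrt{2}\simeq 0.71\) even though the density of \(c\) itself peaks at zero.

```python
import numpy as np

rng = np.random.default_rng(42)

# Eq. (4.1): P(c) = (2/sqrt(pi)) exp(-c^2) for c >= 0, i.e. the modulus of a N(0, 1/sqrt(2)) variable
c = np.abs(rng.normal(loc=0.0, scale=1.0 / np.sqrt(2.0), size=1_000_000))

# Matching relation c = g^2  ->  UV invariant |g| = sqrt(c), whose density is given by Eq. (4.2)
g = np.sqrt(c)

# Locate the mode of the |g| distribution from a histogram
counts, edges = np.histogram(g, bins=200, range=(0.0, 2.5), density=True)
i = np.argmax(counts)
mode = 0.5 * (edges[i] + edges[i + 1])
print(f"empirical mode of |g| ~ {mode:.2f}   (analytic value 1/sqrt(2) ~ {1.0 / np.sqrt(2.0):.3f})")
```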
On the other hand, \begin{table} \begin{tabular}{c|c|c|c|c|c} Model & UV invariants & LO \(\mathcal{O}\left(\Lambda^{-2}\right)\) & LO \(\mathcal{O}\left(\Lambda^{-4}\right)\) & NLO \(\mathcal{O}\left(\Lambda^{-2}\right)\) & NLO \(\mathcal{O}\left(\Lambda^{-4}\right)\) \\ \hline \(\mathcal{S}\) & \(\left|\kappa_{S}\right|\) & [0, 1.4] & [0, 1.4] & [0, 1.5] & [0, 1.4] \\ \(\mathcal{S}_{1}\) & \(\left(y_{5_{1}}\right)_{12}\left(y_{5_{1}}\right)_{21}\) & [-0.041, 0.0018] & [-0.040, 0.0042] & [-0.042, 0.0027] & [-0.042, 0.0030] \\ \(\Xi\) & \(\left|\kappa_{\Xi}\right|\) & [0, 0.067] & [0, 0.069] & [0, 0.069] & [0, 0.069] \\ \(\Xi_{1}\) & \(\left|\kappa_{\Xi_{1}}\right|\) & [0, 0.049] & [0, 0.049] & [0, 0.049] & [0, 0.048] \\ \(\omega_{1}\) & \(\left|\left(y_{5_{1}}^{\text{eq}}\right)_{33}\right|\) & [0, 5.0] & [0, 1.6] & [0, 5.2] & [0, 1.7] \\ \(\omega_{4}\) & \(\left|\left(y_{5_{4}}^{\text{eq}}\right)_{33}\right|\) & [0.027, 3.6] & [0.021, 1.1] & [0, 3.1] & [0.043, 1.0] \\ \(\zeta\) & \(\left|\left(y_{5_{3}}^{\text{eq}}\right)_{33}\right|\) & [0.11, 3.7] & [0.011, 1.0] & [0.14, 3.3] & [0.034, 0.99] \\ \(\Omega_{1}\) & \(\left|\left(y_{5_{1}}^{\text{eq}}\right)_{33}\right|\) & [0.021, 4.4] & [0, 1.5] & [0, 4.0] & [0.031, 1.4] \\ \(\Omega_{4}\) & \(\left|\left(y_{5_{1}}\right)_{33}\right|\) & [0.099, 5.165] & [0.059, 1.553] & [0, 4.4] & [0.037, 1.4] \\ \(\Upsilon\) & \(\left|\left(\eta\tau\right)_{33}\right|\) & [0, 3.4] & [0, 1.1] & [0, 3.0] & [0.027, 1.0] \\ \(\Phi\) & \(\left|\left(y_{6_{4}}^{\text{eq}}\right)_{33}\right|\) & [0.14, 11] & [0, 2.9] & [0.018, 9.8] & [0.014, 2.6] \\ \hline \hline \(N\) & \(\left|\left(\lambda_{N}^{\ast}\right)_{3}\right|\) & [0, 0.47] & [0, 0.47] & [0, 0.47] & [0, 0.48] \\ \(E\) & \(\left|\left(\lambda_{E}\right)_{3}\right|\) & [0, 0.24] & [0, 0.25] & [0, 0.25] & [0, 0.25] \\ \(\Delta_{1}\) & \(\left|\left(\lambda_{\Delta_{1}}\right)_{3}\right|\) & [0, 0.21] & [0, 0.20] & [0, 0.21] & [0, 0.20] \\ \(\Delta_{3}\) & \(\left|\left(\lambda_{\Delta_{3}}\right)_{3}\right|\) & [0, 0.26] & [0, 0.27] & [0.0015, 0.26] & [0, 0.27] \\ \(\Sigma\) & \(\left|\left(\lambda_{\Sigma}\right)_{3}\right|\) & [0, 0.29] & [0, 0.28] & [0, 0.28] & [0, 0.29] \\ \(\Sigma_{1}\) & \(\left|\left(\lambda_{\Sigma_{1}}\right)_{3}\right|\) & [0, 0.42] & [0, 0.42] & [0, 0.43] & [0, 0.42] \\ \(U\) & \(\left|\left(\lambda\nu\right)_{3}\right|\) & [0, 0.84] & [0, 0.85] & [0, 0.82] & [0, 0.84] \\ \(D\) & \(\left|\left(\lambda_{D}\right)_{3}\right|\) & [0, 0.23] & [0, 0.24] & [0, 0.24] & [0, 0.23] \\ \(Q_{1}\) & \(\left|\left(\lambda_{\Delta_{1}}^{\ast}\right)_{3}\right|\) & [0, 0.94] & [0, 0.95] & [0, 0.93] & [0, 0.92] \\ \(Q_{7}\) & \(\left|\left(\lambda_{\varnothing}\right)_{3}\right|\) & [0, 0.95] & [0, 0.93] & [0, 0.91] & [0 0.91] \\ \(T_{1}\) & \(\left|\left(\lambda_{T_{1}}\right)_{3}\right|\) & [0, 0.46] & [0, 0.46] & [0, 0.45] & [0, 0.47] \\ \(T_{2}\) & \(\left|\left(\lambda_{T_{2}}\right)_{3}\right|\) & [0, 0.39] & [0, 0.38] & [0, 0.38] & [0, 0.38] \\ \hline \end{tabular} \end{table} Table 4.4: The 95% CL intervals for the UV invariants relevant for the heavy scalar (upper part) and heavy fermion (lower part of the table) one-particle models matched at tree-level. We quote the 95% CL upper limit and the lower limit is rounded to 0 whenever it is two orders of magnitudes smaller than the total CL width. For each model we compare results obtained at the linear and quadratic level in the EFT expansion and using either LO or NLO perturbative QCD corrections to the EFT cross-sections. 
In all cases, we set the mass of the heavy particle to \(m_{\text{UV}}=1\) TeV. Note that the model with heavy scalar \(\phi\) is considered separately in Table 4.5, given that it is parameterised in terms of multiple UV invariants. Posteriors on the UV couplings that are approximately constant near zero correspond to posteriors in the WC space that diverge towards zero. One also observes from Table 4.4 that two of the considered heavy scalar models, specifically those containing the \(\Xi\) and \(\Xi_{1}\) particles, lead to bounds which are at least two orders of magnitude more stringent than for the rest. These two models generate the same operators with slightly different matching coefficients, and as indicated in Table 4.2 they are the only scalar particles that generate the Wilson coefficient \(c_{\varphi D}\), which is strongly constrained by the EWPOs. Therefore, one concludes that the heavy scalar models with the best constraints are those whose generated operators are sensitive to the high-precision electron-positron collider data. The heavy scalar models that we have discussed so far and listed in Table 4.4 depend on a single UV invariant. On the other hand, as discussed in Sect. 2.3, the heavy scalar \(\phi\) model depends on two different UV couplings, \(\lambda_{\phi}\) and \((y_{\phi}^{u})_{33}\), resulting in two independent invariants. We present the corresponding 95% CL intervals from tree-level matching in the upper part of Table 4.5 and their distributions in Fig. 4.3. One finds that this model exhibits a degeneracy along the \(\left(y_{\phi}^{u}\right)_{33}=0\) direction, meaning that \(\lambda_{\phi}\) can only be constrained whenever \(\left(y_{\phi}^{u}\right)_{33}\neq 0\). This feature can be traced back to the tree-level matching relations in Eq. (2.2) with \(\left(y_{\phi}^{d,e}\right)_{33}=0\), to the fact that there is no observable sensitive to the \(c_{\varphi}\) operator in the SMEFiT dataset, and to the fact that the data does not prefer a non-zero \(\left(y_{\phi}^{u}\right)_{33}\). As we discuss below, this flat direction is lifted once one-loop corrections to the matching relations are accounted for. As opposed to the UV-invariants in Table 4.4, for this model the constrained UV-invariant is not positive-definite. Heavy fermions. Following the discussion of the results for the single-particle BSM extensions with heavy scalars, we move to the corresponding analysis involving the heavy vector-like fermions listed in Table 4.1. We provide their 95% CL bounds in the lower part of Table 4.4, with the corresponding posterior distributions shown in Fig. 4.4. All the (positive-definite) UV-invariants can be constrained from the fit, leaving no flat directions, and the results are consistent with the SM hypothesis. One observes that the constraints achieved in all the heavy fermion models are in general similar, with differences no larger than a factor 4 depending on the specific gauge representation. Additionally, all the bounds are \(\mathcal{O}(0.1)\), indicating that current data probes weakly-coupled fermions with masses around 1 TeV. For these heavy fermion models, differences between fits carried out at the linear and quadratic levels in the EFT expansion are minimal. The reason is that these UV scenarios are largely constrained by the precisely measured EWPOs from LEP and SLC, composed of processes for which quadratic EFT corrections are very small [39].
The same considerations apply to the stability with respect to higher-order QCD corrections, which are negligible for the EWPOs. Heavy vector bosons. The last category of single-particle models listed in Table 4.1 is the one composed of heavy vector bosons. We provide their 95% marginalised CL intervals in Table 4.6, with the corresponding posterior distributions in Fig. 4.5, in which we compare linear and quadratic EFT corrections at NLO QCD. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Model & UV invariants & LO \(\mathcal{O}\left(\Lambda^{-2}\right)\) & LO \(\mathcal{O}\left(\Lambda^{-4}\right)\) & NLO \(\mathcal{O}\left(\Lambda^{-2}\right)\) & NLO \(\mathcal{O}\left(\Lambda^{-4}\right)\) \\ \hline \multirow{2}{*}{\(\phi\) (tree-level)} & \(|\lambda_{\phi}|\) & [0, \(8.2\cdot 10^{2}\)] & [0, \(7.4\cdot 10^{2}\)] & [0, \(8.0\cdot 10^{2}\)] & [0, \(7.9\cdot 10^{2}\)] \\ & \(\text{sgn}(\lambda_{\phi})\left(y_{\phi}^{u}\right)_{33}\) & [-0.11, 1.0] & [-0.20, 2.1] & [-0.19, 0.62] & [-0.18, 1.7] \\ \hline \multirow{2}{*}{\(\phi\) (one-loop)} & \(|\lambda_{\phi}|\) & [0, 7.6] & [0, 7.6] & [0, 7.6] & [0, 7.1] \\ & \(\text{sgn}(\lambda_{\phi})\left(y_{\phi}^{u}\right)_{33}\) & [-0.81, 2.8] & [-1.2, 2.3] & [-0.80, 2.2] & [-0.87, 2.1] \\ \hline \hline \end{tabular} \end{table} Table 4.5: Same as Table 4.4 for the heavy scalar \(\phi\) model, which has associated two independent UV-invariants. See Fig. 4.3 for the associated posterior distributions. A first observation is that most of the heavy vector models depend on multiple UV-invariants. To be specific, the \(\mathcal{B}\), \(\mathcal{W}\) and \(\mathcal{G}\) vector models have associated 9, 5 and 2 UV-invariants respectively, while the other vector models are characterised by a single UV-invariant. For all models considered, results are consistent with the SM scenario, corresponding to vanishing UV parameters. Regarding model \(\mathcal{B}\), its results can be understood as follows. First of all, we observe a strong increase in sensitivity at the quadratic level in the last two invariants, \(\mathrm{sgn}\left(g_{\mathcal{B}}^{\varphi}\right)\left(g_{\mathcal{B}}^{u}\right)_{33}\) and \(\mathrm{sgn}\left(g_{\mathcal{B}}^{\varphi}\right)\left(g_{\mathcal{B}}^{q}\right)_{33}\). Indeed, they generate the four-heavy operators \(c_{tt}^{(1)}\) and \(c_{QQ}^{(1)}\), respectively, which are characterised by large quadratic corrections. The other UV-invariants in the \(\mathcal{B}\) model generate operators sensitive to the EWPOs, which are strongly constrained by LEP data, leading to tight bounds in all cases. For example, the operators \(c_{\varphi\square}\) and \(c_{\varphi D}\) are generated by \(g_{\mathcal{B}}^{\varphi}\), \[\frac{c_{\varphi\square}}{\Lambda^{2}}=\frac{1}{2}\frac{\left(g_{\mathcal{B}}^{\varphi}\right)^{2}}{m_{\mathcal{B}}^{2}}\,,\qquad\frac{c_{\varphi D}}{\Lambda^{2}}=-2\frac{\left(g_{\mathcal{B}}^{\varphi}\right)^{2}}{m_{\mathcal{B}}^{2}}\,, \tag{4.3}\] and thus provide strong bounds on the invariant \(\left|g_{\mathcal{B}}^{\varphi}\right|\). Furthermore, the invariant \(\left(g_{\mathcal{B}}^{e}\right)_{ii}g_{\mathcal{B}}^{\varphi}\) is sensitive to the leptonic Yukawa operators \(c_{\varphi(e,\mu,\tau)}\) and therefore gets well constrained by LEP data as well.
Yet another example related to the EWPOs is the invariant \(g_{\mathcal{B}}^{\varphi}\left(g_{\mathcal{B}}^{t}\right)_{ii}\) that generates the operators \(c_{\varphi\ell_{2}}\) and Figure 4.1: Posterior distributions associated to the UV-invariants in the one-particle heavy scalar models listed in the upper part of Table 4.4 and obtained from tree-level matching. Note that all UV-invariants for the considered models are positive-definite. We compare the results based on linear and quadratic corrections in the EFT expansion, in both cases with NLO QCD accuracy. To facilitate visualisation, the posterior distributions are cut after they have dropped to 5% of its maximum, though this does not affect the calculation of the bounds listed in Table 4.4. \(c_{\varphi\ell_{3}}\) for \(i=1,2\), respectively. Finally, \(\left(g_{\mathcal{B}}^{\ell}\right)_{11}\) generates \(\left(c_{\ell\ell}\right)_{1111}\) and thus gets constrained by Bhabha scattering. For this model, we have chosen the UV-invariants such that they agree with the constrained directions. Had we built them in the same way that for other models, it would be explicit that there are 5 poorly constrained directions along \(\left|\left(g_{\mathcal{B}}^{e}\right)_{11}\right|\), \(\left|\left(g_{\mathcal{B}}^{e}\right)_{22}\right|\), \(\left|\left(g_{\mathcal{B}}^{e}\right)_{33}\right|\), \(\left|\left(g_{\mathcal{B}}^{\ell}\right)_{22}\right|\), and \(\left|\left(g_{\mathcal{B}}^{\ell}\right)_{33}\right|\). The model \(\mathcal{B}_{1}\) is only sensitive to operators that can be constrained via EWPOs, hence no improved sensitivity is observed after adding quadratic EFT effects. Moving on to model \(\mathcal{W}\), we observe two flat directions along \(\left(g_{\mathcal{W}}^{\ell}\right)_{33}\) and \(\mathrm{sgn}\left(\left(g_{\mathcal{W}}^{\ell}\right)_{11}\right)\left(g_{ \mathcal{W}}^{\ell}\right)_{22}\). In the matching relations, the UV coupling \(\left(g_{\mathcal{W}}^{\ell}\right)_{33}\) enters exclusively in a product together with \(g_{\mathcal{W}}^{\varphi}\). Now, \(g_{\mathcal{W}}^{\varphi}\) already gets strongly constrained via other independent relations to \(c_{\varphi\Box},c_{\theta\varphi},c_{t\varphi}\) and \(c_{\tau,\varphi}\). As a result, the UV invariant \(\left|\left(g_{\mathcal{W}}^{\ell}\right)_{33}\right|\) is left essentially unconstrained as no other matching relation exists to disentangle \(\left(g_{\mathcal{W}}^{\ell}\right)_{33}\) from \(g_{\mathcal{W}}^{\varphi}\). A similar argument holds for the second flat direction we observe along \(\mathrm{sgn}\left(\left(g_{\mathcal{W}}^{\ell}\right)_{11}\right)\left(g_{ \mathcal{W}}^{\ell}\right)_{22}\). The UV parameter \(\left(g_{\mathcal{W}}\right)_{22}\) only enters as a product with either \(\left(g_{W}^{\ell}\right)_{11}\) or \(g_{W}^{\varphi}\), both of which already get constrained via other independent matching relations, e.g. \(\left(c_{\ell\ell}\right)_{1111}\sim\left(g_{\mathcal{W}}^{\ell}\right)^{2}\) and the aforementioned reason in case of \(g_{\mathcal{W}}^{\varphi}\). In fact, these bounds can be considered meaningless since they are of the order or above the perturbative limit \(4\pi\) and well in excess of more refined perturbative unitarity bounds [43]. The four-heavy operators \(c_{QQ}^{(1,8)}\) are responsible for the increased sensitivity we observe in \(\left(g_{\mathcal{W}}^{q}\right)_{33}\) after including quadratic EFT corrections. 
Finally, concerning the models \(\mathcal{G}\), \(\mathcal{H}\), \(\mathcal{Q}_{5}\) and \(\mathcal{Y}_{5}\), we observe a significant tightening of the bounds after quadratic EFT corrections are accounted for. This is entirely due to their sensitivity to the four-heavy operators, as can be seen in Table 4.2. Figure 4.2: Same as Fig. 4.1, now comparing the baseline results, based on NLO QCD cross-sections for the EFT cross-sections, with the fit variant restricted to LO accuracy. Comparison to the FitMaker results. The global SMEFT analysis carried out in [16] by the FitMaker collaboration also provided bounds for selected single-particle extensions of the SM obtained from tree-level matching. Specifically, they provide results for 21 different models including heavy scalars, fermions, and vector bosons. These UV models were highlighted in red in Tables 4.2 and 4.3. As further discussed in App. D, the FitMaker models are assumed to couple in a flavour-universal way to the SM fields. This flavour assumption is inconsistent with the corresponding one adopted by SMEFiT and leads in most models to the generation of non-vanishing EFT coefficients which are not part of our baseline fitting basis. Furthermore, in some scenarios, such as the heavy scalar \(\phi\) model, it leads to coefficients with flavour indices breaking the SMEFiT flavour symmetries. For this reason, in order to benchmark our results for the single-particle BSM extensions against those of [16], we adopt the same matching relations as in that analysis and ignore the effects of EFT coefficients not included in our fit. We provide in Fig. 4.6 a comparison between the upper 95% CL bounds on the absolute value of the UV couplings \(g_{\text{UV}}\) for the single-particle extensions obtained in [16] and the results of this work, where a heavy particle mass of \(m_{\text{UV}}=1\) TeV is assumed for all models. We find satisfactory agreement for most of the models considered, with residual differences between the FitMaker bounds (based on a linear EFT fit at LO) and our own results with the same theory settings explained by different choices of the input dataset and of its statistical treatment. Non-negligible differences are instead observed for the heavy scalar model \(\phi\), the heavy fermion model \(Q_{17}\), and the heavy vector models \(\mathcal{W}_{1}\) and \(\mathcal{B}_{1}\mathcal{B}\). Differences for the \(\phi\) model are explained by the fact that the SMEFiT basis includes four-heavy operators generated by this model, such as \(c^{1}_{Qt}\) and \(c^{8}_{Qt}\) (Table 4.2), which are constrained by \(t\bar{t}b\bar{b}\) and \(t\bar{t}t\bar{t}\) cross-sections and are absent from the FitMaker analysis. Differences for the fermionic model \(Q_{17}\) as well as for the heavy vector models \(\mathcal{W}_{1}\) and \(\mathcal{B}_{1}\mathcal{B}\) originate from the inclusion of different Higgs production datasets in the fit. Figure 4.3: Marginalized posterior distributions of the two UV invariants associated to the heavy scalar model \(\phi\), comparing the impact of linear and quadratic EFT corrections after matching at tree level (upper panel) and at one-loop level (lower panel). The rightmost panels illustrate how the individual UV couplings \(\lambda_{\phi}\) and \((y^{u}_{\phi})_{33}\) are correlated, as expected given that only their product can be constrained from the data. See Table 4.5 for the corresponding 95% CL intervals. From the comparison in Fig.
4.6, one further observes that the change in sensitivity after including either NLO QCD or quadratic EFT corrections is mild, indicating that for these models the dominant sensitivity to the UV couplings is already provided by the linear EFT predictions with LO accuracy. In this context, one should note that a 20% improvement at the level of the Wilson coefficients \(c_{i}\) corresponds to only a 10% enhancement at the level of the UV parameters \(g_{\rm UV}\), given that the typical matching relations have the form \(c_{i}\sim g_{\rm UV}^{2}\). All in all, we conclude that this comparison with FitMaker is successful and provides a non-trivial validation of our new pipeline enabling direct fits of UV couplings within the SMEFiT framework.

### Multi-particle models matched at tree level

Moving beyond one-particle models, we now study UV completions of the SM which include multiple heavy particles. Specifically, we consider a UV model which includes two heavy vector-like fermions, \(Q_{1}\) and \(Q_{7}\), and a heavy vector boson, \(\mathcal{W}\); see Table 4.1 for their quantum numbers and gauge group representations. In the case of equal masses and couplings for the two heavy fermions, this model satisfies custodial symmetry [40]. The two heavy fermions \(Q_{1}\) and \(Q_{7}\) generate the same two operators, namely \(c_{t\varphi}\) and \(c_{\varphi t}\). A contribution to the top Yukawa operator \(c_{t\varphi}\) is also generated by the heavy vector boson \(\mathcal{W}\), introducing an interesting interplay between the quark bidoublets on the one hand and the neutral vector triplet on the other hand. As indicated in Table 4.2, several other operators in addition to \(c_{t\varphi}\) are generated when integrating out the heavy vector boson \(\mathcal{W}\). It should be emphasized that we make this choice of multi-particle model for illustrative purposes as well as to compare with the benchmark studies of [40], and that results for any other combination of the heavy BSM particles listed in Table 4.1 can easily be obtained within our approach. Figure 4.4: Same as Fig. 4.1 for the one-particle models composed of heavy vector-like fermions. The only limitations are that the number of UV couplings must remain smaller than the number of EFT coefficients entering the analysis, and that the input data used in the global fit exhibits sensitivity to the matched UV model. We provide in Table 4.7 the 95% CL intervals for the UV invariants associated to this three-particle model. A common value of the heavy mass, \(m_{Q_{1}}=m_{Q_{7}}=m_{\mathcal{W}}=1\) TeV, is assumed. As in the case of the one-particle models, we compare results at the linear and quadratic EFT level and at LO and NLO in the QCD expansion. The associated posterior distributions for the UV invariants, comparing the impact of linear versus quadratic EFT corrections, are shown in Fig. 4.7. On the one hand, one notices an increase in sensitivity at the quadratic level in the case of \((g_{\mathcal{W}}^{q})_{33}\), which is consistent with the results of the one-particle model analysis shown in Table 4.6. On the other hand, for some of the UV-invariants in the model, such as \(|(g_{\mathcal{W}}^{l})_{11}|\), the bounds become looser once quadratic EFT corrections are accounted for, presumably due to the appearance of a second minimum in the marginalised \(\chi^{2}\) profiles. For this specific model, it turns out that one has a quasi-flat direction in the UV-invariant \(|\big{(}g_{\mathcal{W}}^{l}\big{)}_{33}|\).
While the results of Table 4.6 assume a common value of the heavy particle masses, it is trivial to Figure 4.5: Same as Fig. 4.1 for the one-particle models composed of heavy vector-like bosons \begin{table} \begin{tabular}{c|c|c|c|c|c} Model & UV invariants & LO \(\mathcal{O}\left(\Lambda^{-2}\right)\) & LO \(\mathcal{O}\left(\Lambda^{-4}\right)\) & NLO \(\mathcal{O}\left(\Lambda^{-2}\right)\) & NLO \(\mathcal{O}\left(\Lambda^{-4}\right)\) \\ \hline \multirow{8}{*}{\(\left(Q_{1},Q_{7},\mathcal{W}\right)\)} & \(\left|\left(g_{\mathcal{W}}^{\ast}\right)\right\rangle_{33}\) & \(\left[0,\,6.5\right]\) & \(\left[0.011,\,3.1\right]\) & \(\left[0,\,5.4\right]\) & \(\left[0.014,\,2.9\right]\) \\ & \(\left|\left(g_{\mathcal{W}}^{\ast}\right)\right\rangle_{33}\) & \(\left[0.013,\,0.012\right]\) & \(\left[-0.014,\,0.019\right]\) & \(\left[-0.013,\,0.009\right]\) & \(\left[-0.014,\,0.011\right]\) \\ & \(\left|\left(\lambda_{1}^{\ast}\right)\right\rangle_{33}\) & \(\left[0,\,0.87\right]\) & \(\left[0,\,0.88\right]\) & \(\left[0,\,0.86\right]\) & \(\left[0,\,0.86\right]\) \\ & \(\left|\left(\lambda_{07}^{\ast}\right)\right\rangle_{33}\) & \(\left[0,\,0.88\right]\) & \(\left[0,\,0.87\right]\) & \(\left[0,\,0.84\right]\) & \(\left[0,\,0.87\right]\) \\ \hline \end{tabular} \end{table} Table 4.7: Same as Table 4.4 for the UV invariants associated to the three-particle model consisting of two heavy fermions \(Q_{1},Q_{7}\) and a heavy vector boson \(\mathcal{W}\) obtained from tree-level matching. A common value of the heavy mass is assumed for the three particles, \(m_{Q_{1}}=m_{Q_{7}}=m_{\mathcal{W}}=1\) TeV. See Fig. 4.8 for the corresponding results in a scenario with \(m_{Q_{1}}\neq m_{Q_{7}}\neq m_{\mathcal{W}}\). For this model we have a flat direction in the UV-invariant \(\left|\left(g_{\mathcal{W}}^{\ast}\right)_{33}\right|\). \begin{table} \begin{tabular}{c|c|c|c|c|c} Model & UV invariants & LO \(\mathcal{O}\left(\Lambda^{-2}\right)\) & LO \(\mathcal{O}\left(\Lambda^{-4}\right)\) & NLO \(\mathcal{O}\left(\Lambda^{-2}\right)\) & NLO \(\mathcal{O}\left(\Lambda^{-4}\right)\) \\ \hline \multirow{8}{*}{\(\left(Q_{1},Q_{7},\mathcal{W}\right)\)} & \(\left|\left(g_{\mathcal{W}}^{\ast}\right)\right\rangle_{33}\) & \(\left[0,\,6.5\right]\) & \(\left[0.011,\,3.1\right]\) & \(\left[0,\,5.4\right]\) & \(\left[0.014,\,2.9\right]\) \\ & \(\left|\left(g_{\mathcal{W}}^{\ast}\right)\right\rangle_{33}\) & \(\left[0.013,\,0.012\right]\) & \(\left[-0.014,\,0.019\right]\) & \(\left[-0.013,\,0.009\right]\) & \(\left[-0.014,\,0.011\right]\) \\ & \(\left|\left(\lambda_{1}^{\ast}\right)\right\rangle_{33}\) & \(\left[0,\,0.87\right]\) & \(\left[0,\,0.88\right]\) & \(\left[0,\,0.86\right]\) & \(\left[0,\,0.86\right]\) \\ & \(\left|\left(\lambda_{07}^{\ast}\right)\right\rangle_{33}\) & \(\left[0,\,0.88\right]\) & \(\left[0,\,0.87\right]\) & \(\left[0,\,0.84\right]\) & \(\left[0,\,0.87\right]\) \\ \hline \end{tabular} \end{table} Table 4.6: Same as Table 4.4 for the UV invariants associated to the one-particle heavy vector boson extensions extend them to different masses for each of the different particles in the model. This way, one can assess the dependence of the UV-coupling fit results on the assumptions of the heavy particle masses. With this motivation, Fig. 4.8 and Fig. 4.9 display pair-wise marginalised 95% contours for the relevant UV invariants and the original Lagrangian parameters in the model respectively, see Fig. 4.7 for the corresponding single-invariant posterior distributions. 
The baseline results with a common mass of 1 TeV are compared to a scenario with three different heavy masses, \(m_{Q_{1}}=3\) TeV, \(m_{Q_{7}}=4.5\) TeV, and \(m_{\mathcal{W}}=2.5\) TeV. We exclude \(\big{|}\big{(}g_{\mathcal{W}}^{\ell}\big{)}_{33}\big{|}\) from this comparison, given that it is essentially unconstrained from the fitted data. From the comparison in Fig. 4.8 one observes, as expected, that assuming heavier BSM particles leads to looser constraints on the UV couplings. Taking into account that different terms in the matching Figure 4.7: The posterior distributions of the UV invariants in the three-particle model consisting of two heavy fermions \(Q_{1},Q_{7}\) and a heavy vector boson \(\mathcal{W}\), see also see Table 4.7, comparing the impact of linear (blue) versus quadratic (orange) corrections in the EFT expansion. Figure 4.6: Upper 95% CL bounds on the absolute value of the UV couplings \(g_{\rm UV}\) obtained for the single-particle extensions considered in [16] compared to the results of this work. A heavy particle mass of \(m_{\rm UV}=1\) TeV is assumed. We display sequentially the results for the scalar, fermion, and vector particles separated by vertically dashed lines. relations scale with the heavy particle masses in a different manner, it is not possible in general to rescale the bounds obtained from the equal mass scenario to another with different heavy masses. Nevertheless, given that in a single-particle extension we know that bounds worsen by a factor \((m_{\text{UV}}^{*}/m_{\text{UV}})^{2}\) if the heavy particle mass is increased from \(m_{\text{UV}}\) up to \(m_{\text{UV}}^{*}\), one can expect that in this three-particle scenario the bounds are degraded by an amount between \(\sim 5\) and \(\sim 20\) depending on the specific UV coupling, a estimate which is consistent with the results in Fig. 4.8. This comparison also highlights how for very heavy particles we lose all sensitivity and the global EFT fit cannot provide competitive constraints on the UV model parameters. Figure 4.8: Pair-wise marginalised 95% contours for the UV invariants associated to the three-particle model consisting of two heavy fermions, \(Q_{1}\) and \(Q_{7}\), and the heavy vector boson \(\mathcal{W}\), see also Fig. 4.7 and Table 4.7 for the corresponding single-invariant posterior distributions and 95% CL bounds. We compare results for a common value of the heavy mass, \(m_{Q_{1}}=m_{Q_{7}}=m_{\mathcal{W}}=1\) TeV, with another scenario with different heavy masses, \(m_{Q_{1}}=3\) TeV, \(m_{Q_{7}}=4.5\) TeV, and \(m_{\mathcal{W}}=2.5\) TeV. The EFT fit is carried out at NLO QCD with \(\mathcal{O}\left(\Lambda^{-4}\right)\) corrections included. As compared to the list of invariants in Table 4.7, we exclude \(\left|\left(g_{\mathcal{W}}^{l}\right)_{33}\right|\) given that it is essentially unconstrained from the fitted data. ### Single-particle models matched at one-loop All results presented so far relied on tree-level matching relations. We now present results for UV-coupling fits in the case of matching relations obtained at the one-loop level, and study their effect on the UV parameter space as compared to results based on tree-level matching. As discussed in Sect. 2.2, for this purpose we can also deploy MatchMakerEFT interfaced to SMEFiT via match2fit in order to obtain one-loop matching relations suitable for their use in the fit in an automated way2. 
In its current incarnation, this pipeline enables using any of the single-particle heavy scalar and heavy fermion models (and combinations thereof) listed in Table 4.1 when matched at the one-loop level. We note here that the automation of the one-loop matching for heavy vector bosons has not been achieved yet. For concreteness, here we present results based on the one-loop matching of the heavy scalar \(\phi\) defined by Eq. (2.1), also discussed in Sect. 4.1, in order to compare its bounds to those previously obtained from the tree-level matching analysis. Footnote 2: A development version of match2fit suitable for one-loop matching relations can be obtained upon request. Table 4.5 compares the 95% CL bounds obtained for the two UV invariants in this model following either tree-level or one-loop matching relations, with the associated marginalised posterior distributions shown in Fig. 4.3. Figure 4.9: Same as Fig. 4.8 in the case of the original UV couplings. In contrast to the bounds obtained at tree level, one-loop corrections enable lifting the flat direction along the \(|\lambda_{\phi}|\) invariant. This effect is a consequence of the operators \(\mathcal{O}_{\varphi\Box},\mathcal{O}_{b\varphi}\) and \(\mathcal{O}_{t\varphi}\), which receive additional one-loop matching contributions resulting in independent constraints on \(\lambda_{\phi}\). Specifically, the one-loop matching relations for the Wilson coefficients associated to \(\mathcal{O}_{\varphi\Box}\) and \(\mathcal{O}_{t\varphi}\) read \[\begin{split}\frac{c_{\varphi\Box}}{\Lambda^{2}}=&-\frac{g_{1}^{4}}{7680\pi^{2}}\frac{1}{m_{\phi}^{2}}-\frac{g_{2}^{4}}{2560\pi^{2}}\frac{1}{m_{\phi}^{2}}-\frac{3}{32\pi^{2}}\frac{\lambda_{\phi}^{2}}{m_{\phi}^{2}}\,,\\ \frac{c_{t\varphi}}{\Lambda^{2}}=&-\frac{\lambda_{\phi}\left(y_{\phi}^{u}\right)_{33}}{m_{\phi}^{2}}-\frac{g_{2}^{4}y_{t}^{\rm SM}}{3840\pi^{2}}\frac{1}{m_{\phi}^{2}}+\frac{y_{t}^{\rm SM}}{16\pi^{2}}\frac{\lambda_{\phi}^{2}}{m_{\phi}^{2}}+\frac{\left(4\left(y_{b}^{\rm SM}\right)^{2}-13\left(y_{t}^{\rm SM}\right)^{2}\right)}{64\pi^{2}}\frac{\lambda_{\phi}\left(y_{\phi}^{u}\right)_{33}}{m_{\phi}^{2}}\\ &-\left(12\lambda_{\varphi}^{\rm SM}+\left(y_{b}^{\rm SM}\right)^{2}-11\left(y_{t}^{\rm SM}\right)^{2}\right)\frac{y_{t}^{\rm SM}}{64\pi^{2}}\frac{\left(y_{\phi}^{u}\right)_{33}^{2}}{m_{\phi}^{2}}+\frac{3}{128\pi^{2}}\frac{\lambda_{\phi}\left(y_{\phi}^{u}\right)_{33}^{3}}{m_{\phi}^{2}}\,,\end{split} \tag{4.4}\] where \(\lambda_{\varphi}^{\rm SM}\) is the quartic Higgs self-coupling in the SM. In Eq. (4.4), all terms with a factor \(1/\pi^{2}\) arise from one-loop corrections, indicating that \(c_{\varphi\Box}\) is entirely loop-generated, while for \(c_{t\varphi}\) the tree-level matching relation is simply \(c_{t\varphi}=-\lambda_{\phi}\left(y_{\phi}^{u}\right)_{33}/m_{\phi}^{2}\). This additional dependence on \(\lambda_{\phi}\) arising from one-loop corrections is hence responsible for closing the tree-level flat direction. We have also checked that one-parameter SMEFT fits of the Wilson coefficients \(c_{\varphi\Box}\) and \(c_{t\varphi}\) to the same datasets lead to similar bounds on \(|\lambda_{\phi}|\) as those reported in Table 4.5 after using the relations in Eq. (4.4).
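As an illustration of how matching relations such as Eq. (4.4) enter the numerical pipeline, the short sketch below evaluates \(c_{\varphi\Box}/\Lambda^{2}\) and \(c_{t\varphi}/\Lambda^{2}\) as functions of \(\lambda_{\phi}\), \((y_{\phi}^{u})_{33}\) and \(m_{\phi}\). It is a standalone transcription of Eq. (4.4), not an excerpt of SMEFiT or match2fit, and the numerical SM inputs are indicative values chosen by us for illustration.

```python
import math

# Indicative SM inputs (approximate values, for illustration only)
G1, G2 = 0.36, 0.65      # U(1)_Y and SU(2)_L gauge couplings
YT, YB = 0.99, 0.024     # top and bottom Yukawa couplings
LAM_SM = 0.13            # SM Higgs quartic self-coupling

def c_phi_box(lam_phi, m_phi):
    """c_phiBox/Lambda^2 (in TeV^-2) from the one-loop relation of Eq. (4.4); m_phi in TeV."""
    pi2 = math.pi ** 2
    return (-G1**4 / (7680 * pi2) - G2**4 / (2560 * pi2) - 3 * lam_phi**2 / (32 * pi2)) / m_phi**2

def c_t_phi(lam_phi, y33, m_phi):
    """c_tphi/Lambda^2 (in TeV^-2): tree-level term plus the one-loop terms of Eq. (4.4)."""
    pi2 = math.pi ** 2
    tree = -lam_phi * y33
    loop = (-G2**4 * YT / (3840 * pi2)
            + YT * lam_phi**2 / (16 * pi2)
            + (4 * YB**2 - 13 * YT**2) * lam_phi * y33 / (64 * pi2)
            - (12 * LAM_SM + YB**2 - 11 * YT**2) * YT * y33**2 / (64 * pi2)
            + 3 * lam_phi * y33**3 / (128 * pi2))
    return (tree + loop) / m_phi**2

# Along the tree-level flat direction y33 = 0, the loop-induced terms still depend on lambda_phi
print(c_phi_box(lam_phi=2.0, m_phi=1.0), c_t_phi(lam_phi=2.0, y33=0.0, m_phi=1.0))
```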
On the other hand, as a consequence of one-loop corrections to the matching relations, one also observes a degradation in the sensitivity to the UV-invariant \(\text{sgn}(\lambda_{\phi})\big{(}y_{\phi}^{u}\big{)}_{33}\). The reason is that the parameter region around arbitrarily small values of the coupling \(\big{(}y_{\phi}^{u}\big{)}_{33}\) is now disfavoured, translating into a flattening of the distribution in \(\big{(}y_{\phi}^{u}\big{)}_{33}\) as observed in the rightmost panels of Fig. 4.3. These results showcase that one-loop corrections bring in not only precision but also accuracy, in that the stronger bound on this UV-invariant at tree level was a consequence of a flat direction which is lifted as soon as perturbative corrections are accounted for. Interestingly, the middle panels of Fig. 4.3 also indicate the appearance of a double-peak structure in the distribution of \(\text{sgn}(\lambda_{\phi})\big{(}y_{\phi}^{u}\big{)}_{33}\) at quadratic order in the EFT expansion which is absent in the case of tree-level matching. Such a structure is associated to a second minimum in the \(\chi^{2}\) favouring non-zero UV couplings, and may be related to cancellations between different terms in the matching relations.3 For one-loop matching relations such as those displayed in Eq. (4.4), the minimised figure of merit will in general be a complicated higher-order polynomial in the UV couplings, and in particular, in the presence of quadratic EFT corrections, the \(\chi^{2}(\mathbf{c})\) will be a quartic form of the Wilson coefficients. Therefore, for the specific case of the heavy scalar model, the \(\chi^{2}(\mathbf{g})\) expressed in terms of the UV couplings will include terms of up to \(\mathcal{O}\left(\lambda_{\phi}^{8}\right)\) and \(\mathcal{O}\left(\big{(}y_{\phi}^{u}\big{)}_{33}^{16}\right)\), see Eqs. (4.4) and (2.11), as well as the various cross-terms. The structure of the minima of \(\chi^{2}(\mathbf{g})\) can then only be resolved by means of a numerical analysis such as the one enabled by our strategy. Footnote 3: Such cancellations may arise in EFT fits whenever linear and quadratic corrections become comparable. The above discussion demonstrates that one-loop corrections to the SMEFT matching relations provide non-trivial information on the parameter space of the considered UV models. Loop corrections not only modify the sensitivity to UV couplings already constrained by tree-level matching, but can also lift flat or quasi-flat directions. This result further motivates ongoing efforts to increase the perturbative accuracy of EFT matching relations.

## 5 Summary and outlook

In this work we have presented and validated a general strategy to scrutinize the parameters of BSM models by using the SMEFT as an intermediate bridge to the experimental data. By combining MatchMakerEFT (for the matching between UV models and the EFT) with match2fit (for the conversion of the matching results into fit run cards) and SMEFiT (for the comparison with data and the derivation of posterior distributions), this pipeline makes it possible to constrain the masses and couplings of any UV model that can be matched to the SMEFT either at tree level or at one loop. While in this work we adopt MatchMakerEFT, our approach is fully general and any other matching framework could be adopted. This flexible pipeline results in the automation of the bound-setting procedure on UV-complete models that can be matched to the SMEFT. To illustrate its reach, we apply it to derive constraints on a wide range of single-particle models extending the SM with heavy scalars, fermions, or vector bosons with different quantum numbers. We also consider a three-particle model which combines two heavy vector-like fermions with a vector boson.
While most of the results presented arise from tree-level matching relations, we also demonstrate how our approach can be used in the case of one-loop matching, using the heavy scalar model as a proof of concept. All the tools used in this analysis are made publicly available, ensuring the reproducibility of our results and enabling their extension to analyse other user-defined BSM scenarios. In addition to a new version of SMEFiT with the option of performing fits in the parameter space of UV couplings, we release the Mathematica package match2fit, whose goal is to streamline the interface between EFT matching and fitting codes. In its current incarnation, match2fit connects MatchMakerEFT with SMEFiT, but could be extended to other combinations in the future. The availability of this pipeline therefore opens up the possibility of easily including SMEFT-derived constraints in upcoming model-building efforts. There are several directions that could be considered for future work generalising the results presented here. The most obvious one would be achieving the full automation of one-loop matching relations entering the global EFT fit, including the evaluation of the associated UV invariants, for multi-particle models. One restriction here is that one-loop matching is not yet automated for models including heavy vector bosons. As we have shown in the case of the heavy scalar model, one-loop matching relations make it possible to derive new constraints on parameter directions which are flat after tree-level matching, resolving degeneracies both among models and in the space of UV couplings. Furthermore, including dimension-8 operators in the fit, which are also generated by the matching relations, could help to disentangle models via the evaluation of their positivity bounds [34]. While a bottom-up determination of dimension-8 operators from data is hampered by their large number in the absence of extra assumptions, specific UV models only generate a relatively small number of dimension-8 operators, facilitating their integration in the SMEFT fit. One could also consider accounting for RGE effects between the cutoff scale given by the UV model and the scale at which the global fit is performed. The latter could be combined with the inclusion of RGE effects in the cross-sections entering the fit [32]. Finally, it would be beneficial to consider more flexible flavour symmetries which are automatically consistent with the fitting code, something which would anyway be required for a full extension of multi-particle models to one-loop matching. All in all, our results provide valuable insights for ongoing model-building efforts by using the SMEFT to connect them with the experimental data, hence providing complementary information to that derived from direct searches for new heavy particles at the LHC and elsewhere. The pipeline presented in this work brings closer one of the original goals of the EFT program at the LHC, namely bypassing the need to directly compare the predictions of UV-complete models with the experimental measurements and instead using the SMEFT as the bridge connecting UV physics with data. We acknowledge useful discussions with J. Santiago, M. Chala, K. Mimasu, C. Severi, F. Maltoni, L. Mantani, T. Giani, J. Pages, A. Thomsen, and Y. Oda. A. R. thanks the High Energy Theory Group of the Universidad de Granada and the Theory Group of Nikhef for their hospitality during the early stages of this work. The work of A. R. and E. V.
is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 949451) and by a Royal Society University Research Fellowship through grant URF/R1/201553. The work of J. t. H. and G. M. is supported by the Dutch Research Council (NWO). The work of J. R. is supported by the Dutch Research Council (NWO) and by the Netherlands eScience Center.

## Appendix A Baseline SMEFiT global analysis

The SMEFiT framework provides a flexible and robust open-source tool to constrain the SMEFT parameter space using the information provided by experimental data from energy-frontier particle colliders such as the LHC as well as from lower-energy experiments. Given a statistical model defined by a likelihood \(L(\mathbf{c})\), depending on parameters \(\mathbf{c}\) specifying the theoretical predictions for the observables entering the fit, and the corresponding dataset \(\mathcal{D}\), SMEFiT infers the posterior distribution \(P(\mathbf{c}|\mathcal{D})\) by means of sampling techniques [44; 45]. SMEFiT supports the inclusion of quadratic EFT corrections, can work up to NLO precision in the QCD expansion, admits general user-defined flavour assumptions and theoretical restrictions, and does not have limitations on the number or type of dimension-six operators that can be considered. Theoretical restrictions on the EFT parameter space can be imposed as follows. Given a set of \(N\) fit coefficients \(\{c_{1},\ldots,c_{N}\}\), SMEFiT allows the user to impose relations of the general form \[\sum_{i}a_{i}\left(c_{1}\right)^{n_{1,i}}\ldots\left(c_{N}\right)^{n_{N,i}}=0\,, \tag{10}\] where the sum runs over all the possible combinations of the coefficients and the \(n_{j,i}\) are real numbers such that \(\sum_{j}n_{j,i}\) is equal to a constant, in order to preserve the correct dimensionality for all \(i\). For instance, the relationship between the Wilson coefficients in Eq. (4) arising from the heavy scalar model matched at tree level can be rewritten in the same format as Eq. (10), namely \[\left(c_{qd}^{(1)}\right)_{3333}\left(\left(c_{u\varphi}\right)_{33}\right)^{2}-\left(c_{qu}^{(1)}\right)_{3333}\left(\left(c_{d\varphi}\right)_{33}\right)^{2}=0\,. \tag{11}\] These relations have to be imposed on a case-by-case basis via a run card, and the user must declare which independent coefficients are to be fitted. This functionality can be used either to relate the fit parameters among themselves, as in Eq. (11), or to map them onto a different set of degrees of freedom. The latter feature allows one to perform with SMEFiT the fits of the UV model parameters presented in this work; a minimal numerical sketch of how a constraint of this form can be represented and checked is given below. As mentioned in Sect. 3, in comparison to [23], which in turn updates the global SMEFT analysis of Higgs, top quark, and diboson data presented in [22], here we implement in SMEFiT the information provided by the legacy electroweak precision observables (EWPOs) from LEP and SLC [35] in an exact manner. Previously, the constraints provided by these EWPOs in the SMEFT parameter space were accounted for in an approximate manner, assuming that the EWPOs were measured with vanishing uncertainties. This assumption introduced additional relations between a subset of the Wilson coefficients. While such an approximation served the purpose of interpreting LHC measurements, it is not appropriate in the context of matching to UV-complete models, given that in many BSM scenarios the EWPOs provide the dominant constraints [16].
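As anticipated above, the following minimal sketch shows how a polynomial constraint of the form of Eq. (10) can be represented and evaluated at a given point in coefficient space, using Eq. (11) as the example. This is our own illustration, not the SMEFiT run-card syntax, and the coefficient names and numerical values are placeholders chosen by us.

```python
import math

# A constraint sum_i a_i * prod_j c_j^{n_{j,i}} = 0 (cf. Eq. (10)) stored as a list of
# (prefactor, {coefficient name: exponent}) terms; this instance encodes Eq. (11).
constraint_eq11 = [
    (+1.0, {"cqd1_3333": 1, "cuphi_33": 2}),
    (-1.0, {"cqu1_3333": 1, "cdphi_33": 2}),
]

def evaluate(constraint, point):
    """Evaluate the left-hand side of a polynomial constraint at a point in Wilson-coefficient space."""
    total = 0.0
    for prefactor, monomial in constraint:
        total += prefactor * math.prod(point[name] ** power for name, power in monomial.items())
    return total

# Placeholder coefficient values chosen such that cqd1 * cuphi^2 equals cqu1 * cdphi^2
point = {"cqd1_3333": 0.04, "cuphi_33": 1.0, "cqu1_3333": 1.0, "cdphi_33": 0.2}
print(abs(evaluate(constraint_eq11, point)) < 1e-12)   # True: Eq. (11) is satisfied at this point
```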
The new implementation of EWPOs in SMEFiT will be discussed in a separate publication [39] and here we provide for completeness a short summary. Experimental data and SM predictions for the legacy EWPOs from LEP and SLAC are taken from [35]. Linear and quadratic SMEFT corrections to these observables arising from dimension-six operators have been computed with mg5_aMC@NLO interfaced to SMEFT@NLO. Our implementation of the EWPOs has been benchmarked by comparing with the results of [46], assuming the same theoretical settings and choice of input dataset. Specifically, for this benchmarking exercise we use the \(m_{W}\) electroweak input scheme, linear EFT corrections, LO cross-sections, and assume flavour universality. Furthermore, we adopt the same convention on the triple gauge operator \(O_{WWW}\), which differs from SMEFT@NLO by a negative sign. Fig. A.1 displays the bounds at the 68% CL for one-parameter fits in the two studies. Good agreement between our implementation and that of [46] is obtained for all operators. Residual differences can be explained by the inclusion of flavour-sensitive observables from LEP which are absent in [46]. Taking into account this updated EWPO implementation, and when compared to the results of [22; 23], the present global SMEFT analysis used to constrain UV models contains 14 additional Wilson coefficients \(\mathbf{c}\) to be constrained from the fit, for a total of 50 independent parameters. The new degrees of freedom are constrained not only by the EWPOs, but also by top quark and diboson measurements. For this reason, we have recomputed EFT cross-sections for all the processes where such operators are relevant. As in the original analysis [22], we use mg5_aMC@NLO interfaced to SMEFT@NLO to evaluate linear and quadratic EFT corrections, which are augmented with NLO QCD perturbative corrections whenever available. To avoid overlap between datasets entering the PDF and EFT fits [47; 48], we use NNPDF4.0 NNLO no-top [49] as input PDF set. The values of the input electroweak parameters are taken to be \(m_{W}=80.387\) GeV, \(m_{Z}=91.1876\) GeV and \(G_{F}=1.1663787\cdot 10^{-5}\) GeV\({}^{-2}\). Finally, the same flavour assumptions as in [22] are used, i.e. a U(2)\({}_{q}\times\) U(2)\({}_{u}\times\)U(3)\({}_{d}\times\)[U(1)\({}_{\ell}\times\)U(1)\({}_{e}\)]\({}^{3}\) symmetry in the quark and lepton sectors. In addition to Fig. 3.2, the impact of the exact implementation of EWPOs in the SMEFT fit is also illustrated in Fig. A.2, which compares the 68% and 95% CL marginalised bounds on the Wilson coefficients obtained with the approximate and exact implementations. Figure A.1: Comparison of a SMEFiT-based analysis of the electroweak precision observables from LEP against the corresponding analysis from [46] based on the same dataset. We display bounds at the 68% CL for one-parameter fits using linear EFT and LO calculations. In this comparison, we adopt the convention of [46] for the triple gauge operator \(O_{WWW}\), which differs from the SMEFT@NLO one by a negative sign. For consistency with the settings adopted in [46], the triple gauge operator \(c_{WWW}\) has been fitted to a subset of the available LEP data composed of only 4 bins. Figure A.2: The results of the global SMEFiT analysis, comparing the approximated [22, 23] and exact [39] implementation of the EWPOs. For each of the Wilson coefficients entering the fit, we indicate the best-fit results and the marginalised 68% CL (95% CL) intervals with thicker (thinner) lines.
The fit labelled as “Approx EWPOs” differs slightly from that of [23] due to the use of improved theory predictions, see text for more details. Operators are grouped by type: from top to bottom, we show two-fermion-two-boson (2F2B), two-heavy-two-light (2Q2q), four-heavy quarks (4Q), and purely bosonic (B) operators. As mentioned before, the results labelled as "Approx EWPOs" differ slightly from those in [23] due to the use of improved EFT theory predictions. In general one observes good agreement between the results of the global EFT fit with approximate and exact implementation of the EWPOs, with small differences whose origin is discussed in [39]. ## Appendix B The match2fit package As mentioned in the introduction, several public tools to carry out global analyses of experimental data in the SMEFT framework have been presented. Some of these tools also include the option to carry out parameter scans in the space of pre-defined UV models matched to SMEFT. While the matching procedure between UV models and the SMEFT coefficients has been automated at tree level and partially at one-loop level, currently there is no streamlined method enabling the interface of the output of these matching codes with SMEFT fitting frameworks. Such an interface would allow using the SMEFT to impose bounds on the parameter space of general, user-defined UV-complete models. In this appendix we describe the match2fit package used in this work to interface an automated EFT matching code, specifically MatchMakerEFT, with a global fitting analysis framework, in this case SMEFiT. The current public version of this package is available on Github. We emphasize that match2fit could be extended to connect different matching and fitting SMEFT codes. We provide here a succinct introduction to this code, and point the interested reader to the user's manual for more details. The package has two different working modes. The first one reads the matching output in the format used by MatchMakerEFT and parses it into the format of run cards that can be fed into SMEFiT. In this mode, the mandatory inputs are the location of the file containing the matching results and a numerical value (in TeV) for the mass of the heavy particle. Optionally, one can also specify a name for the UV model and define a "collection", a set of UV models with common characteristics. Depending on the executed function, the code will print the run card for a scan on the UV parameters with or without the accompanying file that defines the UV invariants (see Sect. 2.3). The second mode runs MatchMakerEFT to perform the tree-level matching of a certain UV model to SMEFT and generates the same final output as the previous mode. It is also possible to just perform the matching without producing the run cards for SMEFiT. The input required in this mode is the one that MatchMakerEFT needs to describe the heavy particles that will be integrated out. More precisely, it needs the .fr, .red and .gauge files, and optionally the .herm file. The .fr file is just a FeynRules file that defines the heavy particle(s), its (their) free Lagrangian and its (their) interactions with the SM. The SM and SMEFT models are included in MatchMakerEFT and can be used in this context without further modification. See [12] for more details on how to write the .fr file and what the other files must contain. The match2fit code consists of a Wolfram Mathematica package designed and tested in version 12.1 or later. To unlock its full functionality, this package requires a working installation of MatchMakerEFT.
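Before describing the individual commands, it may help to see how they are typically combined. The following is a minimal, illustrative sketch of a match2fit session; the package context name, the directory "models/", the model name "Phi", the coupling name lambdaPhi and the 2 TeV mass value are placeholders rather than definitive choices, and the commands themselves are documented in detail below.
```
(* Illustrative match2fit session; directory, model name, coupling name and
   mass value are placeholders *)
<< match2fit`  (* load the package; this context name is assumed here *)

(* Inspect the heavy masses and couplings declared in models/Phi.fr *)
parametersList["models/", "Phi"]

(* Run MatchMakerEFT and print the SMEFiT run cards for a UV scan in one step,
   fixing the heavy mass to 2 TeV and keeping only third-generation couplings *)
modelToUVscanCard["models/", "Phi", 2.0,
  "UVFlavourAssumption" -> {lambdaPhi[a_] :> lambdaPhi[3]},
  "Collection" -> "UserCollection"]

(* Alternatively, start from an existing MatchMakerEFT output file *)
matchResToUVscanCard["models/Phi_MM/MatchingResult.dat", 2.0,
  "Model" -> "Phi", "Collection" -> "UserCollection"]
```
The run cards produced in this way are then used as input for the SMEFiT fits in the space of UV couplings.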
Main commands. In order to highlight its usage, we describe now the key commands made available by the match2fit package. * parametersList[directory,model]: Both arguments should be strings. This function reads the file model.fr in directory, recognizes the masses and couplings of the heavy particles to be integrated out, and gives back an array with two elements. The first element is a list with the symbolic expressions of the masses of the heavy particles. The second element is a list of the couplings defined for these heavy particles, excluding gauge couplings but not self-interactions. * parametersListFromMatchingResult[matchResFile]: It takes as its only argument a string with the address of the file containing the matching results. It recognizes the masses and couplings of the heavy particles from those results and gives an output in the same format as parametersList. This code assumes that any parameter with a name starting with \(m\) or \(M\) corresponds to a mass and identifies any other parameter as a coupling. If the input of parametersListFromMatchingResult is the matching result obtained with the model fed into parametersList, any difference in their outputs should be due to couplings (or whole particles) that do not affect the tree-level matching result. * flavourSymChecker[matchResFile,Options]: It takes the file with matching results specified as input and checks if those results are compatible with the SMEFiT flavour symmetry, U(2)\({}_{q}\times\)U(2)\({}_{u}\times\)U(3)\({}_{d}\times\)(U(1)\({}_{\ell}\times\)U(1)\({}_{e}\))\({}^{3}\). If the constraints are satisfied, it returns YES. If not, it returns NO and it prints the first WC for which it found a symmetry violation. The option "UVFlavourAssumption" allows the user to specify a replacement list that can be used to apply flavour assumptions on the UV parameters. The left-hand side of the replacement rule should contain some of the UV parameters listed by parametersListFromMatchingResult or parametersList, with or without numerical indices, according to how they appear in the matching results file. The code supports up to 4 numerical indices in a single group, i.e. couplings such as gUV, gUV[1], gUV[1, 3] and gUV[1, 3, 2, 4] are supported, but gUV[1][3] or gUV[1][2, 3] are not. The default value of "UVFlavourAssumption" is an empty list. An example of how to set this option is "UVFlavourAssumption" -> {gWtiQ[i_, j_] :> KroneckerDelta[i, j] * KroneckerDelta[i, 3] gWq[3, 3]}. * flavourSolver[matchResFile, Options]: It takes the file with matching results specified as input and tries to solve the constraints imposed by the SMEFiT flavour symmetry, U(2)\({}_{q}\times\)U(2)\({}_{u}\times\)U(3)\({}_{d}\times\)(U(1)\({}_{\ell}\times\)U(1)\({}_{e}\))\({}^{3}\), for the UV couplings. The running time, the number of solutions and their complexity depend on the model. It returns all found solutions. The only option of this function is "UVFlavourAssumption", which follows the same description given in the function flavourSymChecker. This function considers the SM Yukawa couplings as symbolic variables and they can be set to zero with the option "UVFlavourAssumption". * matcher[directory, model]: It runs MatchMakerEFT and performs the tree-level matching without printing any run card for SMEFiT. It takes two strings as arguments, _directory_ and _model_.
The first one is the directory where the package will look for the files model.fr, model.red and model.gauge. If the code does not find one of those files, it will print a warning. It does not check for the existence of model.herm. The expected content of each of those files is specified in the documentation of MatchMakerEFT [12]. MatchMakerEFT will create the folder directory/model_MM, inside which the matching results will be stored as MatchingResult.dat. The code will check if MatchMakerEFT reported any problem during the matching and will print a warning if so. After performing the matching, the package will remove most of the files and directories created by itself or MatchMakerEFT for the sake of tidiness. It will only leave the directory model_MM and 2 files inside: MatchingResult.dat and MatchingProblems.dat. * matchResToUVscanCard[matchResFile, mass, Options]: Function that reads the file with the tree-level matching results and prints the cards required for a UV scan. _matchResFile_ must be a string with the exact address of the file that contains the matching results to be used. The format of that file should be exactly like the file MatchingResult.dat produced by MatchMakerEFT. The argument _mass_ should be the value in TeV that the mass(es) of the UV particle(s) will be set to. _mass_ can be one numerical value or a list of them \(\{m_{1},..,m_{N}\}\), the latter being useful in the case of a multiparticle model. The order of the masses is the one returned by parametersListFromMatchingResult. If the user specifies only one numerical mass value for a multiparticle model, all the particles will be assigned the same mass. If parametersListFromMatchingResult identifies \(K\) masses and \(N<K\), the code will assume \(m_{i}=m_{N}\) for \(N\leqslant i\leqslant K\). If \(N>K\), the values \(m_{i}\) with \(K<i\leqslant N\) will be ignored. The mass values are also printed on the card names and inside the cards. For multiple masses, the convention is to take the integer part of each value and stick them together in sequence. This function has 3 options. The first one is "UVFlavourAssumption", which is identical to that of the function flavourSymChecker, see its description for details on this option. The second option is "Collection", a string indicating the collection to which the model belongs. Its default value is "UserCollection". Finally, the option "Model" is a string that specifies the model name to be printed on the run cards, with default value "UserModel". An example of how to set these options is replacing the argument Options by:
```
"UVFlavourAssumption" -> {lambdaT1[a_] :> lambdaT1[3]},
"Collection" -> "TestMatchingCollection", "Model" -> "TestModel"
```
The package also includes a function that integrates the steps of matching and printing all the run cards according to the result of said matching: * modelToUVscanCard[directory, model, mass, Options]: Function that takes the files that define the UV model, performs the tree-level matching by running MatchMakerEFT, and prints the run cards needed for a UV scan. The first two arguments are exactly like in matcher, i.e. the program will look for the files model.fr, model.red, and model.gauge in directory and will do the tree-level matching based on them. The argument model also defines the name of the model. mass should be the value(s) in TeV that the mass(es) of the UV particle(s) will be set to. The handling of several mass values is the same as for matchResToUVscanCard. This function has 2 options.
The first one is "UVFlavourAssumption", which is identical to the one of the function flavourSynChecker, see its description for details on this option. The second option is "Collection", a string indicating the Collection to which the model belongs. Its default value is "UserCollection". An example of how to set these options is replacing the argument Options by: ``` 1["UVFlavourAssumption"->{lambdaT1[a_]:>lambdaT1[3]}, 2"Collection"->"TestMatchingCollection"] ``` Limitations and outlook.In its current incarnation, all SM couplings are numerically evaluated when printing the run cards used as input in the global SMEFT fit. Their values are hard-coded in the package and summarised in App. A of the match2fit user manual. Future versions of match2fit should allow the user to vary easily these values. Another restriction of match2fit is that it does not check for the fulfilment of the SMEFIT flavour assumptions automatically when printing the run cards for SMEFIT. This generates a degree of arbitrariness, e.g. the code uses the matching result for \((c^{(3)}_{\varphi q})_{22}\) as the result for \((c^{(3)}_{\varphi q})_{ii}\) without verifying first that \((c^{(3)}_{\varphi q})_{22}=(c^{(3)}_{\varphi q})_{11}\). We summarise these choices in App. B of the user manual. The user should be aware of this limitation and compensate for it with the required assumptions on the UV couplings. Support for alternative flavour structures on the Wilson coefficients will be added in the future, in order to ensure full compatibility with updated versions of the SMEFIT analysis and/or becoming compatible with other fitting codes. We also note that match2fit assumes that all UV couplings are real and applies this assumption when interpreting the matching results. The support for complex UV couplings (and hence also of the corresponding WCs) will be added in future releases. In the long term, match2fit could be extended to process matching results provided in the WCxf format [50]. This would facilitate the interface with other SMEFT matching codes such as CoDEx and Matchete as well as with codes implementing the Renormalisation Group Equations (RGEs) running of SMEFT operators. ## Appendix C Origin of the logarithms in 1-loop matching formulas. The logarithms in one-loop matching expressions such as that of Eq. (11) arise as a consequence of the RG running of both the UV couplings and the EFT coefficients. Their appearance ensures that the matching result is correct irrespective of the choice of matching scale [6; 12]. In this appendix we revisit this point in the explicit case of the heavy scalar model defined by the Lagrangian of Eq. (1). We start with the general expression, valid for a any matching scale \(Q\), for the Wilson coefficient \(c_{qu,pqrs}^{(8)}\), which is the most general form in flavour space of the coefficient \(c_{Qt}^{(8)}\) generated by this model. Subsequently, we evaluate this matching relation at two different scales, \(Q=m_{\phi}\) and \(Q=\mu<m_{\phi}\), to obtain the results for \(c_{qu,pqrs}^{(8)}|_{Q=m_{\phi}}\) and \(c_{qu,pqrs}^{(8)}|_{Q=\mu}\). We expect only the former to be free of RG logarithms. Then we use the RGEs for the Wilson coefficients in the EFT and for the UV couplings of the heavy scalar model to evolve \(c_{qu,pqrs}^{(8)}|_{Q=m_{\phi}}\) down to the scale \(\mu\) to obtain \(c_{qu,pqrs}^{(8)}|_{Q=m_{\phi}\to\mu}\). 
The comparison between the two calculations, \(c_{qu,prst}^{(8)}|_{Q=m_{\phi}\to\mu}\) and \(c_{qu,prst}^{(8)}|_{Q=\mu}\), will tell us whether the logarithms arising in the general matching formula correspond to the ones generated by the RG running or not. This procedure, which was also adopted in App. C of [12], is sketched in Fig. C.1. For illustration purposes, we perform this check separately at \(\mathcal{O}\left(g_{3}^{2}\right)\) and then at \(\mathcal{O}\left(g_{1}^{2}\right)\), i.e. the contributions that depend on the matching scale \(Q\) will only be kept at that order. On the other hand, we perform it without applying any flavour assumption on either the EFT coefficient or the UV model. Figure C.1: Sketch of the procedure adopted to verify the origin of the logarithms arising in the one-loop matching formulas. The general matching expression for the EFT coefficient \(c/\Lambda^{2}\) in terms of UV couplings is evaluated at two different scales, \(Q=m_{\rm UV}\) and \(Q=\mu<m_{\rm UV}\), and one verifies whether or not the two results are entirely related by RGE running. Order \(g_{3}^{2}\) validation. The starting point of this calculation is the general matching result at one loop for the Wilson coefficient \(c_{qu,prst}^{(8)}\) in the heavy scalar model, given by \[\begin{split}\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}=&-\frac{\left(y_{\phi}^{u}\right)_{pt}\left(y_{\phi}^{u}\right)_{rs}^{*}}{m_{\phi}^{2}}-\frac{g_{3}^{2}\,\delta_{s,t}\left(y_{\phi}^{d}\right)_{ir}\left(y_{\phi}^{d}\right)_{ip}^{*}}{144\pi^{2}\,m_{\phi}^{2}}\left(4-3\log\left(\frac{m_{\phi}^{2}}{Q^{2}}\right)\right)\\ &-\frac{g_{3}^{2}\,\delta_{p,r}\left(y_{\phi}^{u}\right)_{it}\left(y_{\phi}^{u}\right)_{is}^{*}}{72\pi^{2}\,m_{\phi}^{2}}\left(4-3\log\left(\frac{m_{\phi}^{2}}{Q^{2}}\right)\right)-\frac{g_{3}^{2}\,\delta_{s,t}\left(y_{\phi}^{u}\right)_{pi}\left(y_{\phi}^{u}\right)_{ri}^{*}}{144\pi^{2}\,m_{\phi}^{2}}\left(4-3\log\left(\frac{m_{\phi}^{2}}{Q^{2}}\right)\right)\\ &+\frac{g_{3}^{2}\left(y_{\phi}^{u}\right)_{pt}\left(y_{\phi}^{u}\right)_{rs}^{*}}{48\pi^{2}m_{\phi}^{2}}+\mathcal{O}\left(\frac{g_{3}^{0}}{16\pi^{2}}\right),\end{split} \tag{103}\] where \(prst\) indicate the flavour indices and we omitted the one-loop contributions that do not depend on the \(g_{3}\) coupling. As expected, all the logarithms appearing in this expression are removed by setting the matching scale at the heavy scalar mass, \(Q=m_{\phi}\). The renormalisation group equations relevant to describe the scale dependence of this specific Wilson coefficient are given by [51], \[\begin{split}\mu\frac{dc^{(8)}_{qu,prst}}{d\mu}=\frac{\beta(c^{(8)}_{qu,prst})}{16\pi^{2}}=&\frac{g_{3}^{2}}{16\pi^{2}}\left(\frac{4}{3}\delta_{s,t}\left(c^{(1)}_{qq,pjjr}+c^{(1)}_{qq,jrpj}\right)+4\delta_{s,t}\left(c^{(3)}_{qq,pjjr}+c^{(3)}_{qq,jrpj}\right)\right.\\ &+\frac{2}{3}\delta_{s,t}\left(c^{(8)}_{qu,prjj}+c^{(8)}_{qd,prjj}\right)+\frac{4}{3}\delta_{p,r}c^{(8)}_{qu,jjst}+\frac{2}{3}\delta_{p,r}c^{(8)}_{ud,stjj}\\ &\left.+\frac{4}{3}\delta_{p,r}c_{uu,sjjt}+\frac{4}{3}\delta_{p,r}c_{uu,jtsj}-14c^{(8)}_{qu,prst}-12c^{(1)}_{qu,prst}\right),\end{split} \tag{122}\] \[\mu\frac{d\big{(}y^{u}_{\phi}\big{)}_{pt}}{d\mu}=\frac{\beta\left(\big{(}y^{u}_{\phi}\big{)}_{pt}\right)}{16\pi^{2}}=-\frac{g_{3}^{2}}{2\pi^{2}}\big{(}y^{u}_{\phi}\big{)}_{pt}, \tag{123}\] \[\mu\frac{dm_{\phi}}{d\mu}=0, \tag{124}\] where again we have kept only those terms proportional to \(g_{3}^{2}\).
The RGE equations for the Yukawa coupling \(\big{(}y^{u}_{\phi}\big{)}_{pt}\) and the heavy mass \(m_{\phi}\) were computed with RGBeta[52] and cross-checked against the results of [53]. We can solve these RGEs at leading log accuracy to obtain, \[c^{(8)}_{qu,prst}\left(\mu\right)=c^{(8)}_{qu,prst}\left(m_{\phi}\right)+\frac {\beta(c^{(8)}_{qu,prst})}{32\pi^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}} \right), \tag{125}\] and similarly for \(\big{(}y^{u}_{\phi}\big{)}_{pt}\), while \(m_{\phi}\) is constant at this order. The required tree-level matching results are, \[c^{(1)}_{qq,ijkl}=c^{(3)}_{qq,ijkl}=c^{(8)}_{ud,ijkl}=c_{uu,ijkl}=0, \tag{126}\] \[\frac{c^{(8)}_{qd,ijkl}}{\Lambda^{2}}=-\frac{\big{(}y^{d}_{\phi}\big{)}_{kj} \big{(}y^{d}_{\phi}\big{)}_{li}^{*}}{m_{\phi}^{2}}, \tag{127}\] \[\frac{c^{(1)}_{qu,ijkl}}{\Lambda^{2}}=-\frac{\big{(}y^{u}_{\phi}\big{)}_{il} \big{(}y^{u}_{\phi}\big{)}_{jk}^{*}}{6\,m_{\phi}^{2}}, \tag{128}\] which substituting in the corresponding beta function allows one to compute \(\frac{c^{(8)}_{qu,prst}}{\Lambda^{2}}\Big{|}_{Q=\mu}\) as, \[\frac{c^{(8)}_{qu,prst}}{\Lambda^{2}}\Big{|}_{Q=\mu}= \frac{c^{(8)}_{qu,prst}}{\Lambda^{2}}\Big{|}_{Q=m_{\phi}}+\frac{ \beta(c^{(8)}_{qu,prst})}{32\pi^{2}\Lambda^{2}}\log\left(\frac{\mu^{2}}{m_{ \phi}^{2}}\right),\] \[= \frac{c^{(8)}_{qu,prst}}{\Lambda^{2}}\Big{|}_{\text{tree},Q=m_{ \phi}}+\frac{c^{(8)}_{qu,prst}}{\Lambda^{2}}\Big{|}_{\text{loop},Q=m_{\phi}} +\frac{\beta(c^{(8)}_{qu,prst})}{32\pi^{2}\Lambda^{2}}\log\left(\frac{\mu^{2}} {m_{\phi}^{2}}\right), \tag{129}\] where we have defined \[\frac{c^{(8)}_{qu,prst}}{\Lambda^{2}}\Big{|}_{\text{loop},Q=m_{\phi}}=-\frac{ g_{3}^{2}\,\delta_{s,t}\big{(}y^{d}_{\phi}\big{)}_{ir}\big{(}y^{d}_{\phi}\big{)}_{ ip}^{*}}{36\pi^{2}\,m_{\phi}^{2}}-\frac{g_{3}^{2}\,\delta_{p,r}\big{(}y^{u}_{ \phi}\big{)}_{it}\big{(}y^{u}_{\phi}\big{)}_{is}^{*}}{18\pi^{2}\,m_{\phi}^{2}}- \frac{g_{3}^{2}\,\delta_{s,t}\big{(}y^{u}_{\phi}\big{)}_{pi}\big{(}y^{u}_{\phi }\big{)}_{ri}^{*}}{36\pi^{2}\,m_{\phi}^{2}}+\frac{g_{3}^{2}\big{(}y^{u}_{\phi }\big{)}_{pt}\big{(}y^{u}_{\phi}\big{)}_{rs}^{*}}{48\pi^{2}m_{\phi}^{2}}. \tag{130}\] The last term in Eq. (129) is the contribution related to the running between \(Q=m_{\phi}\) and \(Q=\mu<m_{\phi}\). To keep everything consistently at leading-log and one-loop order, all the couplings in the previous expression should be considered as constant with energy, and hence \(\left(c^{(8)}_{qu,prst}/\Lambda^{2}\right)\Big{|}_{\text{loop},Q=m_{\phi}}\) is equal to \(\left(c_{qu,prst}^{(8)}/\Lambda^{2}\right)\Big{|}_{\text{loop},Q=\mu}\) except for the logarithmic pieces. The same is not true for the tree-level contribution, which has to be transformed as follows. 
\[\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{tree,Q=m_{\phi}}=-\frac{\left(y_{\phi}^{u}\right)_{pt}\left(y_{\phi}^{u}\right)_{rs}^{*}}{m_{\phi}^{2}}\Big{|}_{Q=m_{\phi}}=-\frac{\left(y_{\phi}^{u}(m_{\phi})\right)_{pt}\left(y_{\phi}^{u}(m_{\phi})\right)_{rs}^{*}}{m_{\phi}^{2}}\] \[=-\frac{\left(\left(y_{\phi}^{u}(\mu)\right)_{pt}-\frac{\beta\left(\left(y_{\phi}^{u}\right)_{pt}\right)}{32\pi^{2}}\log(\frac{\mu^{2}}{m_{\phi}^{2}})\right)\left(\left(y_{\phi}^{u}(\mu)\right)_{rs}-\frac{\beta\left(\left(y_{\phi}^{u}\right)_{rs}\right)}{32\pi^{2}}\log(\frac{\mu^{2}}{m_{\phi}^{2}})\right)^{*}}{m_{\phi}^{2}}\] \[=-\frac{\left(y_{\phi}^{u}\right)_{pt}\left(y_{\phi}^{u}\right)_{rs}^{*}}{m_{\phi}^{2}}\Big{|}_{Q=\mu}+\frac{\beta\left(\left(y_{\phi}^{u}\right)_{pt}\right)\left(y_{\phi}^{u}\right)_{rs}^{*}}{32\pi^{2}\,m_{\phi}^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right)+\frac{\left(y_{\phi}^{u}\right)_{pt}\beta\left(\left(y_{\phi}^{u}\right)_{rs}\right)^{*}}{32\pi^{2}\,m_{\phi}^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right), \tag{111}\] where in the first two lines we indicate explicitly the scale at which the UV couplings are evaluated and in the last line we have kept only the leading log terms. The first term in the last line can be identified as \(\left(c_{qu,prst}^{(8)}/\Lambda^{2}\right)\Big{|}_{\text{tree},Q=\mu}\). Hence, now we have, \[\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{Q=\mu}=\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{tree,Q=\mu}+\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{\text{loop},Q=m_{\phi}}+\frac{\beta\Big{(}\left(y_{\phi}^{u}\right)_{pt}\Big{)}\left(y_{\phi}^{u}\right)_{rs}^{*}+\left(y_{\phi}^{u}\right)_{pt}\beta\Big{(}\left(y_{\phi}^{u}\right)_{rs}\Big{)}^{*}+\beta(c_{qu,prst}^{(8)})}{32\pi^{2}\,m_{\phi}^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right).\] Finally, one has to replace the expressions for the \(\beta\) functions computed before, use the tree-level matching results inside the \(\beta\) function for the Wilson coefficient, and reorder the terms. This leads to Eq. (103) evaluated at \(Q=\mu\), concluding the check. Order \(g_{1}^{2}\) validation. Next we perform the same check at order \(g_{1}^{2}\). The general matching result at one loop for \(c_{qu,prst}^{(8)}\) is \[\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}=-\frac{\left(y_{\phi}^{u}\right)_{pt}\left(y_{\phi}^{u}\right)_{rs}^{*}}{m_{\phi}^{2}}-\frac{25g_{1}^{2}\left(y_{\phi}^{u}\right)_{pt}\left(y_{\phi}^{u}\right)_{rs}^{*}}{1152\pi^{2}m_{\phi}^{2}}+\mathcal{O}\left(\frac{g_{1}^{0}}{16\pi^{2}}\right), \tag{112}\] where the omitted terms are one-loop contributions that do not depend on \(g_{1}\). In this case, there are no logarithms in the expression. Hence, the only terms that have a dependence on the energy scale are the ones generated at tree level. The RGEs we need in this case are [51; 53], \[\mu\frac{dc_{qu,prst}^{(8)}}{d\mu}=\frac{\beta(c_{qu,prst}^{(8)})}{16\pi^{2}}=-\frac{g_{1}^{2}}{12\pi^{2}}c_{qu,prst}^{(8)}, \tag{113}\] \[\mu\frac{d\left(y_{\phi}^{u}\right)_{pt}}{d\mu}=\frac{\beta\Big{(}\left(y_{\phi}^{u}\right)_{pt}\Big{)}}{16\pi^{2}}=-\frac{17g_{1}^{2}}{192\pi^{2}}\left(y_{\phi}^{u}\right)_{pt}, \tag{114}\] \[\mu\frac{dm_{\phi}}{d\mu}=\frac{\beta(m_{\phi})}{16\pi^{2}}=-\frac{3g_{1}^{2}}{64\pi^{2}}m_{\phi}, \tag{115}\] where we have kept only those terms proportional to \(g_{1}^{2}\).
The leading-log solution reads, \[c_{qu,prst}^{(8)}\left(\mu\right)=c_{qu,prst}^{(8)}\left(m_{\phi}\right)+\frac{\beta(c_{qu,prst}^{(8)})}{32\pi^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right), \tag{116}\] \[m_{\phi}^{2}\left(m_{\phi}\right)=m_{\phi}^{2}\left(\mu\right)-m_{\phi}\left(\mu\right)\frac{\beta(m_{\phi})}{16\pi^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right), \tag{117}\] and analogously for the UV coupling. All the required tree-level matching results were shown in the previous subsection. We can compute \(\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{Q=\mu}\) as, \[\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{Q=\mu}=\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{Q=m_{\phi}}+\frac{\beta(c_{qu,prst}^{(8)})}{32\pi^{2}\Lambda^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right),\] \[=\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{\text{tree},Q=m_{\phi}}+\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{\text{loop},Q=m_{\phi}}+\frac{\beta(c_{qu,prst}^{(8)})}{32\pi^{2}\Lambda^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right), \tag{118}\] where \[\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{loop,Q=m_{\phi}}=-\frac{25g_{1}^{2}\big{(}y_{\phi}^{u}\big{)}_{pt}\big{(}y_{\phi}^{u}\big{)}_{rs}^{*}}{1152\pi^{2}m_{\phi}^{2}}. \tag{119}\] Let us remark again that all the UV couplings in \(\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{\text{loop},Q=m_{\phi}}\) should be considered as constant with energy. The tree-level contributions at the two different scales are related as, \[\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{tree,Q=m_{\phi}}=-\frac{\left(\big{(}y_{\phi}^{u}(\mu)\big{)}_{pt}-\frac{\beta\left(\big{(}y_{\phi}^{u}\big{)}_{pt}\right)}{32\pi^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right)\right)\left(\big{(}y_{\phi}^{u}(\mu)\big{)}_{rs}-\frac{\beta\left(\big{(}y_{\phi}^{u}\big{)}_{rs}\right)}{32\pi^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right)\right)^{*}}{m_{\phi}^{2}\left(\mu\right)-m_{\phi}\left(\mu\right)\frac{\beta\left(m_{\phi}\right)}{16\pi^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right)}\] \[=-\frac{\big{(}y_{\phi}^{u}\big{)}_{pt}\big{(}y_{\phi}^{u}\big{)}_{rs}^{*}}{m_{\phi}^{2}}\Big{|}_{Q=\mu}+\frac{\beta\left(\big{(}y_{\phi}^{u}\big{)}_{pt}\right)\big{(}y_{\phi}^{u}\big{)}_{rs}^{*}+\big{(}y_{\phi}^{u}\big{)}_{pt}\beta\left(\big{(}y_{\phi}^{u}\big{)}_{rs}\right)^{*}}{32\pi^{2}\,m_{\phi}^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right)\] \[\quad-\frac{\big{(}y_{\phi}^{u}\big{)}_{pt}\big{(}y_{\phi}^{u}\big{)}_{rs}^{*}}{m_{\phi}^{2}}\frac{\beta(m_{\phi})}{16\pi^{2}m_{\phi}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right), \tag{120}\] where we have kept only the leading-log terms. The first term in the last line is \(\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{\text{tree},Q=\mu}\). Hence, in total we have, \[\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{Q=\mu}=\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{\text{tree},Q=\mu}+\frac{c_{qu,prst}^{(8)}}{\Lambda^{2}}\Big{|}_{\text{loop},Q=m_{\phi}}\] \[+\frac{\beta\left(\big{(}y_{\phi}^{u}\big{)}_{pt}\right)\big{(}y_{\phi}^{u}\big{)}_{rs}^{*}+\big{(}y_{\phi}^{u}\big{)}_{pt}\beta\left(\big{(}y_{\phi}^{u}\big{)}_{rs}\right)^{*}-2\frac{\beta(m_{\phi})}{m_{\phi}}\big{(}y_{\phi}^{u}\big{)}_{pt}\big{(}y_{\phi}^{u}\big{)}_{rs}^{*}+\frac{m_{\phi}^{2}}{\Lambda^{2}}\beta(c_{qu,prst}^{(8)})}{32\pi^{2}\,m_{\phi}^{2}}\log\left(\frac{\mu^{2}}{m_{\phi}^{2}}\right)\,.\] Replacing the \(\beta\) functions computed before and reordering, one finds that the logarithmic term vanishes exactly, in perfect agreement with Eq. (112).
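The leading-log cancellations verified above can also be reproduced symbolically in a few lines. The following is a minimal sketch, not part of the released tools, of the \(\mathcal{O}(g_{3}^{2})\) check restricted to a single flavour, with \(y_{\phi}^{d}=0\) and all couplings taken real; variable names are purely illustrative and all Wilson coefficients are expressed in units of \(1/\Lambda^{2}\) with \(\Lambda=m_{\phi}\).
```
(* Single-flavour sketch of the O(g3^2) log check; y = y_phi^u (real), y_phi^d = 0 *)
LL = Log[mu^2/mphi^2];

(* General matching formula evaluated directly at Q = mu, i.e. Log[mphi^2/Q^2] -> -LL *)
c8Direct = -y^2/mphi^2 - (g3^2 y^2/(72 Pi^2 mphi^2)) (4 + 3 LL) -
   (g3^2 y^2/(144 Pi^2 mphi^2)) (4 + 3 LL) + g3^2 y^2/(48 Pi^2 mphi^2);

(* RGE route: match at Q = mphi (log-free), then run down to mu at leading log *)
c8Tree = -y^2/mphi^2;  c1Tree = -y^2/(6 mphi^2);       (* tree-level matching *)
c8Loop = -(g3^2 y^2/(72 Pi^2 mphi^2)) 4 -
   (g3^2 y^2/(144 Pi^2 mphi^2)) 4 + g3^2 y^2/(48 Pi^2 mphi^2);
betaY  = -8 g3^2 y;                      (* Yukawa beta function, beta(y) = -8 g3^2 y *)
betaC8 = g3^2 (-12 c8Tree - 12 c1Tree);  (* single-flavour limit of the c_qu^(8) RGE *)
c8RGE  = c8Tree + 2 y betaY LL/(32 Pi^2 mphi^2) + c8Loop + betaC8 LL/(32 Pi^2);

Simplify[c8Direct - c8RGE]   (* returns 0: the running reproduces the matching logarithms *)
```
An analogous few-line check reproduces the exact cancellation of the logarithmic term found above at \(\mathcal{O}(g_{1}^{2})\).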
Finally, let us remark on the key role played by the RGEs of the UV variables, \(y_{\phi}^{u}\) and \(m_{\phi}\), in both checks at order \(g_{3}^{2}\) and \(g_{1}^{2}\). Had we not considered them, there would have been residual logarithmic effects not accounted for by the RGEs, as found in [17]. The same checks can be performed with other WCs and at all orders in the couplings. ## Appendix D Additional details on UV models. In this appendix we provide complementary information on the UV-complete scenarios studied in this work, in particular concerning the tree-level matched one-particle models. For each of the one-particle UV models considered here and listed in Table 1, we indicate the relevant couplings entering their Lagrangian in Table D.1, where as elsewhere in the paper we follow the notation from [3]. These are the UV parameters which are constrained from the data via the matching procedure. These couplings have been selected such that the resulting Wilson coefficients at the matching scale fulfil the SMEFiT flavour assumptions in the case of tree-level matching. It is worth noting that in most models considered, the interaction with the top quark is singled out and treated differently from the lighter quarks, consistently with the SMEFiT assumptions. This is a well-motivated assumption and is realised in many common UV scenarios, such as in partial compositeness. Concerning the leptonic sector, the SMEFiT flavour assumption allows for independent couplings to the different generations. This situation differs from the models considered by the FitMaker collaboration [16], in which the same heavy particles are used but the couplings are assumed to be flavour-universal. For the case of heavy spin-one bosons, they assume couplings only to the Higgs boson where appropriate. The FitMaker models \(BB_{1}\) and \(Q_{17}\) arise by considering the pairs of heavy particles \(\{\mathcal{B},\mathcal{B}_{1}\}\) and \(\{Q_{1},Q_{7}\}\) respectively, with degenerate masses and couplings. The models \(T\) and \(TB\) contain the heavy fermions \(U\) and \(Q_{1}\) respectively with specially rescaled couplings [54]. Table D.2 indicates, for the UV models considered in the FitMaker study [16] and which are compared with our results in Fig. 4, whether they comply with the flavour symmetry assumed in SMEFiT and, if this is not the case, what the differences are. Here "more restrictive than SMEFiT" means that applying the SMEFiT flavour symmetry induces more non-vanishing UV couplings than in the FitMaker case. For the purposes of the benchmark comparison of Fig.
4, the effect of these additional symmetry-breaking \begin{table} \begin{tabular}{c|c||c|c||c|c} \multicolumn{2}{c||}{**Scalars**} & \multicolumn{2}{c||}{**Fermions**} & \multicolumn{2}{c}{**Vectors**} \\ \hline Model & UV couplings & Model & UV couplings & Model & UV couplings \\ \hline \(\mathcal{S}\) & \(\kappa_{\mathcal{S}}\) & \(N\) & \(\left(\lambda_{N}^{e}\right)_{3}\) & \(\mathcal{B}\) & \(\left(g_{B}^{u}\right)_{33}\), \(\left(g_{B}^{q}\right)_{33}\), \(g_{B}^{\varphi}\), \\ \(\phi\) & \(\lambda_{\phi}\), \(\left(y_{\phi}^{u}\right)_{33}\) & \(E\) & \(\left(\lambda_{E}\right)_{3}\) & & \(\left(g_{B}^{e}\right)_{11}\), \(\left(g_{B}^{e}\right)_{22}\), \(\left(g_{B}^{e}\right)_{33}\), \\ \(\Xi\) & \(\kappa_{\Xi}\) & \(\Delta_{1}\) & \(\left(\lambda_{\Delta_{1}}\right)_{3}\) & & \(\left(g_{B}^{\ell}\right)_{22}\), \(\left(g_{B}^{\ell}\right)_{33}\) \\ \(\Xi_{1}\) & \(\kappa_{\Xi_{1}}\) & \(\Delta_{3}\) & \(\left(\lambda_{\Delta_{3}}\right)_{3}\) & \(\mathcal{B}_{1}\) & \(g_{B_{1}}^{\varphi}\) \\ \(\omega_{1}\) & \((y_{\omega_{1}}^{qq})_{33}\) & \(\Sigma\) & \(\left(\lambda_{\Sigma}\right)_{3}\) & \(\mathcal{W}\) & \(\left(g_{\mathcal{W}}^{l}\right)_{11}=2\)\(\left(g_{\mathcal{W}}^{l}\right)_{22}\), \(\left(g_{\mathcal{W}}^{l}\right)_{33}\) \\ \(\omega_{4}\) & \(\left(y_{\omega_{4}}^{uu}\right)_{33}\) & \(\Sigma_{1}\) & \(\left(\lambda_{\Sigma_{1}}\right)_{3}\) & & \(g_{\mathcal{W}}^{\varphi}\), \(\left(g_{\mathcal{W}}^{q}\right)_{33}\) \\ \(\zeta\) & \(\left(y_{\zeta}^{qq}\right)_{33}\) & \(U\) & \(\left(\lambda_{U}\right)_{3}\) & \(\mathcal{W}_{1}\) & \(g_{\mathcal{W}_{1}}^{\varphi}\) \\ \(\Omega_{1}\) & \(\left(y_{\Omega_{1}}^{qq}\right)_{33}\) & \(D\) & \(\left(\lambda_{D}\right)_{3}\) & \(\mathcal{G}\) & \(\left(g_{\mathcal{G}}^{q}\right)_{33}\), \(\left(g_{\mathcal{G}}^{u}\right)_{33}\) \\ \(\Omega_{4}\) & \(\left(y_{\Omega_{4}}\right)_{33}\) & \(Q_{1}\) & \(\left(\lambda_{\mathcal{O}_{1}}^{u}\right)_{3}\) & & \\ \(\Upsilon\) & \(\left(y_{\Upsilon}\right)_{33}\) & \(Q_{7}\) & \(\left(\lambda_{\mathcal{Q}_{7}}\right)_{3}\) & \(\mathcal{H}\) & \(\left(g_{\mathcal{H}}\right)_{33}\) \\ \(\Phi\) & \(\left(y_{\Phi}^{qu}\right)_{33}\) & \(T_{1}\) & \(\left(\lambda_{T_{1}}\right)_{3}\) & \(\mathcal{Q}_{5}\) & \(\left(g_{\mathcal{O}}^{uq}\right)_{33}\) \\ & & \(T_{2}\) & \(\left(\lambda_{T_{2}}\right)_{3}\) & \(\mathcal{Y}_{5}\) & \(\left(g_{\mathcal{Y}_{5}}\right)_{33}\) \\ \hline \end{tabular} \end{table} Table D.1: For each of the one-particle UV models considered in this work and described in Table 1, we list the couplings entering their Lagrangian, restricting ourselves to those which are consistent with the SMEFiT flavour assumption after tree-level matching. Wilson coefficients has been ignored. 
\begin{table} \begin{tabular}{c|c|c} Model & Compliant & Details \\ \hline \hline \(\mathcal{S}\left(S\right)\) & Yes & - \\ \hline \(\phi\) & No & \((c_{ledq})_{3333}\neq 0\), \((c_{quqd})_{3333}\neq 0\), \((c_{lequ})_{3333}\neq 0\) \\ \hline \(\Xi\) & Yes & - \\ \hline \(\Xi_{1}\) & Yes & - \\ \hline \(\mathcal{B}\left(B\right)\) & Yes & More restrictive than SMEFiT \\ \hline \(\mathcal{B}_{1}\left(B_{1}\right)\) & Yes & - \\ \hline \(\mathcal{W}\left(W\right)\) & Yes & More restrictive than SMEFiT \\ \hline \(\mathcal{W}_{1}\left(W_{1}\right)\) & Yes & More restrictive than SMEFiT \\ \hline \(N\) & No & \((c_{\varphi l}^{(1),(3)})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(E\) & No & \((c_{\varphi l}^{(1),(3)})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(T\) & Yes & - \\ \hline \(\Delta_{1}\) & No & \((c_{\varphi e})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(\Delta_{3}\) & No & \((c_{\varphi e})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(\Sigma\) & No & \((c_{\varphi l}^{(1),(3)})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(\Sigma_{1}\) & No & \((c_{\varphi l}^{(1),(3)})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(U\) & No & \((c_{\varphi l}^{(1),(3)})_{ij}\neq 0\) and \((c_{u\varphi})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(D\) & No & \((c_{\varphi q}^{(1),(3)})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(Q_{5}\) & No & \((c_{\varphi d})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(Q_{7}\) & No & \((c_{\varphi u,\,u\varphi})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(T_{1}\) & No & \((c_{\varphi q}^{(1),(3)})_{ij}\neq 0\) and \((c_{u\varphi})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(T_{2}\) & No & \((c_{\varphi q}^{(1),(3)})_{ij}\neq 0\) and \((c_{u\varphi})_{ij}\neq 0\) for \(i\neq j\) \\ \hline \(U\left(T\right)\) & Yes & - \\ \hline \(Q_{1}\left(TB\right)\) & No & \((c_{\varphi d})_{33}\neq(c_{\varphi d})_{11,22}\) and \((c_{\varphi ud})_{33}\neq 0\) \\ \hline \(Q_{17}\left(\left\{Q_{1},\,Q_{7}\right\}\right)\) & Yes & - \\ \hline \(BB_{1}\left(\left\{B,\,B_{1}\right\}\right)\) & Yes & - \\ \hline \end{tabular} \end{table} Table 2: For the UV models considered in the FitMaker study and which are compared with the results of our analysis in Fig. 4.6, we indicate whether they comply with the flavour symmetry assumed in SMEFiT, and if this is not the case what are the differences. Here “more restrictive than SMEFiT” means that applying the SMEFiT flavour symmetry induces more non-vanishing UV couplings. We denote UV models following the notation of [3] and we indicate the choice in FitMaker between parenthesis whenever different. The only model for which we use a different notation from [3] is the scalar doublet model \(\phi\).
2306.17457
The $^{103}$Rh NMR Spectroscopy and Relaxometry of the Rhodium Formate Paddlewheel Complex
The NMR spectroscopy of spin-1/2 nuclei with low gyromagnetic ratio is challenging, due to the low NMR signal strength. Methodology for the rapid acquisition of $^{103}$Rh NMR parameters is demonstrated for the case of the rhodium formate "paddlewheel" complex $\mathrm{Rh_2(HCO_2)_4}$. A scheme is described for enhancing the $^{103}$Rh signal strength by polarization transfer from $^{1}$H nuclei and which also greatly reduces the interference from ringing artifacts, a common hurdle for the direct observation of low-$\gamma$ nuclei. The $^{103}$Rh relaxation time constants $T_1$ and $T_2$ are measured within 20 minutes using $^{1}$H-detected experiments. The field-dependence of the $^{103}$Rh $T_1$ is measured. The high-field relaxation is dominated by the chemical shift anisotropy (CSA) mechanism. The $^{103}$Rh shielding anisotropy is found to be very large: $|\Delta\sigma|=9900\pm540\mathrm{\,ppm}$. This estimate is compared with density functional theory calculations.
Harry Harbor Collins, Mohamed Sabba, Gamal Moustafa, Bonifac Legrady, Murari Soundararajan, Markus Leutzsch, Malcolm H. Levitt
2023-06-30T08:06:47Z
http://arxiv.org/abs/2306.17457v2
# The \({}^{103}\)Rh NMR Spectroscopy and Relaxometry of the Rhodium Formate Paddlewheel Complex ###### Abstract The NMR spectroscopy of spin-1/2 nuclei with low gyromagnetic ratio is challenging, due to the low NMR signal strength. Methodology for the rapid acquisition of \({}^{103}\)Rh NMR parameters is demonstrated for the case of the rhodium formate "paddlewheel" complex Rh\({}_{2}\)(HCO\({}_{2}\))\({}_{4}\). A scheme is described for enhancing the \({}^{103}\)Rh signal strength by polarization transfer from \({}^{1}\)H nuclei and which also greatly reduces the interference from ringing artifacts, a common hurdle for the direct observation of low-\(\gamma\) nuclei. The \({}^{103}\)Rh relaxation time constants \(T_{1}\) and \(T_{2}\) are measured within 20 minutes by using \({}^{1}\)H-detected experiments. The field-dependence of the \({}^{103}\)Rh \(T_{1}\) is measured. The high-field relaxation is dominated by the chemical shift anisotropy (CSA) mechanism. The \({}^{103}\)Rh shielding anisotropy is found to be very large: \(|\Delta\sigma|=9900\pm 540\) ppm. This estimate is compared with density functional theory calculations. ## I Introduction Rhodium paddlewheel complexes have attracted significant attention due to their unique properties and diverse applications where they have played roles as catalysts and potential anticancer agents.[1; 2; 3; 4; 5] These complexes consist of two rhodium atoms bridged by four carboxylate ligands, forming a lantern-like structure, with some resemblance to the paddle-wheels of a river boat. A typical example is rhodium formate, Rh\({}_{2}\)(HCO\({}_{2}\))\({}_{4}\), see figure 1. Nuclear magnetic resonance (NMR) is a powerful probe of the properties of rhodium complexes. \({}^{103}\)Rh carries the distinction of being one of only 4 (with \({}^{19}\)F, \({}^{31}\)P, and \({}^{89}\)Y) spin-1/2 nuclei with a natural abundance of 100%. Nevertheless, it has been relatively neglected by spectroscopies: \({}^{103}\)Rh is a member of what Mann dubbed "the Cinderella nuclei"[6] - transition metals with spin-1/2 but very low magnetogyr ratio \(\gamma\). The NMR of \({}^{103}\)Rh is associated with multiple experimental challenges leading to a relative scarcity of experimental data. However, many of these challenges have been successfully overcome by the creative application of modern NMR methodology, such as heteronuclear multiple-quantum (HMQC) NMR[7]. However, although HMQC experiments allow the rapid acquisition of \({}^{103}\)Rh NMR spectra in suitable cases, it is not possible to estimate \({}^{103}\)Rh spin-lattice and spin-spin relaxation time constants through HMQC experiments. For this purpose, experiments exploiting \({}^{103}\)Rh magnetization are needed. In this work, we utilise a variant of the PulsePol polarisation transfer technique[8; 9; 10] to enhance the \({}^{103}\)Rh NMR spectroscopy of the rhodium formate paddlewheel complex in solution. We report (i) NMR methodology for the acquisition of directly detected \({}^{103}\)Rh spectra with effective ringing filtration; (ii) NMR methodology for the rapid measurement of \({}^{103}\)Rh \(T_{1}\) and \(T_{2}\) relaxation time constants over a range of magnetic field strengths. We observe a strong field dependence of the \({}^{103}\)Rh \(T_{1}\), which is qualitatively consistent with a dominant chemical shift anisotropy relaxation mechanism. 
We estimate the \({}^{103}\)Rh shielding anisotropy by using information from \({}^{13}\)C and \({}^{103}\)Rh relaxation experiments in solution, and from \({}^{13}\)C solid-state NMR. ## II Experimental ### Sample Experiments were performed on a saturated (\(\sim\)10 mM) solution of rhodium formate (Rh\({}_{2}\)(HCO\({}_{2}\))\({}_{4}\)) dissolved in 500 \(\mu\)L deuterated tetrahydrofuran (THF-d\({}_{8}\)) contained in a Wilmad LPV 5 mL sample tube. The rhodium formate was synthesised from rhodium chloride using a reported procedure[11] and dried extensively under heated vacuum. Figure 1: Molecular structure of the rhodium formate paddlewheel complex ligated by solvent tetrahydrofuran (THF) molecules at the axial sites. This work exploits the \({}^{3}J_{\text{RhH}}\) scalar couplings for polarisation transfer between the \({}^{103}\)Rh and \({}^{1}\)H nuclei. The resulting rhodium formate solid was green in colour and dissolved in THF to produce a green solution. ### Solution NMR \({}^{1}\)H and \({}^{103}\)Rh spectra were acquired at a magnetic field strength of 9.4 T using a standard commercial Bruker 5 mm NMR BBO probe (\({}^{1}\)H/\({}^{2}\)H/\({}^{109}\)Ag-\({}^{31}\)P) equipped with a z-gradient with a maximum strength of 50 G cm\({}^{-1}\). Proton resonances are referenced to the absolute frequency 400.14300 MHz, whereas \({}^{103}\)Rh resonances are referenced to an absolute frequency that is proportional to that of the protons (\(\Xi\left({}^{103}\text{Rh}\right)=3.16\%\)), per the most common convention [12]. Although the probe could be tuned to \({}^{103}\)Rh beyond the manufacturer specifications, it was set to mismatched (overcoupled) conditions to reduce ringdown times [13, 14, 15, 16]. The radiofrequency amplitudes on the \({}^{1}\)H and \({}^{103}\)Rh channels were both adjusted to give an intentionally matched nutation frequency of \(\omega_{\text{nut}}/(2\pi)\simeq 4\) kHz, corresponding to a 90\({}^{\circ}\) pulse duration of 62.5 \(\mu\)s. Additional isolation of the rf channels by electronic filters was found to be necessary: without the filters, noise on the \({}^{103}\)Rh channel was significant enough to preclude observation of other nuclei. At the preamplifier output we installed: a 30 MHz lowpass filter (Chemagnetics) on the \({}^{103}\)Rh channel, a 400 MHz bandpass filter (K&L Microwave) on the \({}^{1}\)H channel, and a 61 MHz bandpass filter (FSY Microwave) on the \({}^{2}\)H lock channel. To measure relaxation times as a function of magnetic field, the experiments used rapid sample shuttling from inside the 9.4 T magnet bore to regions of lower field outside the magnet bore. The shuttling was performed using a motorised fast shuttling system based on the design by Kiryutin [17]. The shuttling time was kept constant at 1 second. The pulse sequences described below use the following elements: #### ii.2.1 Composite pulses Composite pulses were used to minimize the effects of rf field inhomogeneity and are denoted by shaded black rectangles in the pulse sequence diagrams. All composite pulses are implemented using the symmetrized BB1 composite pulse scheme [18, 19] in which a simple pulse \(\beta_{\phi}\) (where \(\beta\) is the flip angle and \(\phi\) is the phase) is replaced by: \[(\beta/2)_{\phi}180_{\phi+\theta_{W}}360_{\phi+3\theta_{W}}180_{\phi+\theta_{W}}(\beta/2)_{\phi} \tag{1}\] where \(\theta_{W}=\arccos\left(-\beta/(4\pi)\right)\).
For the \(\pi/2\) and \(\pi\) flip angles used in this paper, this corresponds to the following sequences: \[90_{\phi}\to 45_{\phi}180_{\phi+97.18}360_{\phi+291.54}180_{\phi+97.18}45_{\phi} \tag{2}\] \[180_{\phi}\to 90_{\phi}180_{\phi+104.48}360_{\phi+313.43}180_{\phi+104.48}90_{\phi} \tag{3}\] #### ii.2.2 DualPol Polarization Transfer Sequence The transfer of polarisation between \({}^{103}\)Rh and \({}^{1}\)H was achieved using the pulse sequence shown in figure 2. This consists of repeating PulsePol sequences [8, 9], applied simultaneously to the \({}^{1}\)H and \({}^{103}\)Rh radiofrequency channels. The PulsePol sequence consists of six phase-shifted radiofrequency pulses and four intervals \(\tau\), and was originally developed for polarization transfer between electron and nuclear spins in the context of nitrogen-vacancy diamond magnetometry [8]. It has also been shown to be effective for singlet-to-magnetization conversion [9, 10], and has been interpreted in terms of symmetry-based recoupling theory [10]. For convenience, we refer to the "dual PulsePol" sequence in figure 2 as "DualPol". DualPol is an unusual example of a solution-state polarization transfer sequence combining (i) multiple-pulse averaging [21, 22] and (ii) hard pulses separated by delays. The sequence provides robust polarization transfer even in the strong-coupling regime, where the standard INEPT sequence breaks down [23, 24, 25, 26, 27]. That particular feature is not essential for the results described here. However, it is advantageous in other circumstances, as will be discussed in a future publication. The repeating sequences of PulsePol and DualPol are composed of three-pulse elements of the form 90\({}_{y}\)180\({}_{x}\)90\({}_{x}\), with the pulses separated by intervals \(\tau\), and variants thereof. Each three-pulse sequence is therefore a "windowed" version of a composite 180\({}^{\circ}\) pulse [20]. We therefore call this three-pulse sequence an "R-element", using notation originally introduced in the context of broadband heteronuclear decoupling [28], and later adapted for symmetry-based recoupling sequences in solid-state NMR [29], and symmetry-based singlet-triplet conversion sequences in solution NMR [10]. Figure 2: DualPol pulse sequence used for \({}^{1}\)H-\({}^{103}\)Rh cross polarisation, and consisting of simultaneous PulsePol sequences [8] on the two channels. Each PulsePol sequence is a repeating sequence of two R-elements. Each R-element has duration \(\tau_{\text{R}}\), and is given by a composite 180\({}^{\circ}\) pulse [20] with delays of duration \(\tau\) between the pulses. The R-element duration should be short compared to the inverse of the relevant J-couplings. The black rectangles indicate BB1 composite \(\pi\)-pulses (equation 3). In the case of DualPol, there is no special constraint or matching condition on the duration \(\tau_{R}\) of the R-element, except that it should be much shorter than the period of the relevant J-coupling, \(\tau_{R}\ll|{}^{3}J_{\mathrm{RhH}}|^{-1}\). Under these conditions, the average Hamiltonian [21] generated by the DualPol sequence, for a heteronuclear 2-spin system, has the form \[\overline{H}^{(1)}\simeq\kappa_{\mathrm{DP}}\times 2\pi J_{IS}\left(I_{x}S_{x}+I_{y}S_{y}\right) \tag{4}\] where the nuclides \({}^{1}\)H and \({}^{103}\)Rh are referred to as \(I\) and \(S\), respectively.
The numbering convention for the average Hamiltonian terms starts with 1 for the lowest-order approximation, in common with the symmetry-based recoupling literature [29]. The DualPol scaling factor is given, under suitable approximations, by \(\kappa_{\mathrm{DP}}\simeq\frac{1}{2}\) in the limit of strong radiofrequency pulses. Equation 4 corresponds to an anisotropic Hartmann-Hahn Hamiltonian [30], indicating that the DualPol sequence exchanges \(z\)-magnetization components between the \(I\)-spins and \(S\)-spins. The theory and performance of the DualPol sequence will be discussed in more depth in a future paper. In the experiments described here, all DualPol sequences used an R-element duration of \(\tau_{R}=5\) ms and a repetition number of \(n=10\). The total duration of each DualPol sequence was \(T=2n\tau_{R}=100\) ms. #### ii.2.3 \({}^{1}\)H Destruction Filter The \({}^{1}\)H destruction filter is shown in figure 3. The filter has the net effect of dephasing residual proton transverse and longitudinal magnetisation (which may be generated by accidental excitation, and recovery during the decay interval respectively). #### ii.2.4 \({}^{1}\)H z-filter The z-filter for the selection of longitudinal \({}^{1}\)H magnetisation is shown in figure 4. This employs a bipolar gradient scheme in order to reduce spectral distortions by eddy currents or residual gradient fields [31]. ### Solid-state NMR Solid state CPMAS \({}^{13}\)C NMR was performed using a 4 mm Bruker probe at 14.1 T and \(\sim\)303 K. ### Computational Chemistry Quantum chemical geometry optimisation and shielding tensor calculations for the rhodium formate complex axially ligated by solvent THF molecules were performed using the ORCA program package version 5.0.3 [32]. \({}^{103}\)Rh shielding tensors were computed at the TPSSh/SARC-ZORA-TZVPP level of theory. ## III Results ### NMR Spectra #### iii.1.1 Solution-state \({}^{1}\)H Spectrum The rhodium formate \({}^{1}\)H spectrum features a single formate \({}^{1}\)H resonance split into a 1:2:1 triplet by coupling to the pair of magnetically equivalent \({}^{103}\)Rh nuclei (figure 5). The three-bond \({}^{1}\)H-\({}^{103}\)Rh J-coupling is estimated to be \(|{}^{3}J_{\mathrm{RhH}}|=4.7\pm 0.1\) Hz. Figure 3: Proton destruction filter for the removal of residual proton magnetisation. The gradient strengths are given by G\({}_{1}\)=100% and G\({}_{2}\)=-61.8% with respect to the maximum gradient strength 50 G cm\({}^{-1}\). Each gradient has a duration of 2 ms. The black rectangle indicates a BB1 composite \(\pi/2\) pulse (equation 2). Figure 4: Proton z-filter for the selection of proton z-magnetisation, using bipolar gradients. The gradient strengths are given by G\({}_{1}\)=40% and G\({}_{2}\)=-40% with respect to the maximum gradient strength of 50 G cm\({}^{-1}\). Each gradient pulse has a duration of 2 ms. The black rectangle indicates a BB1 composite \(\pi\)-pulse (equation 3). #### iii.1.2 Solution-state \({}^{103}\)Rh Spectra The sequence shown in figure 6 was used for the acquisition of directly-detected \({}^{103}\)Rh spectra, enhanced by polarization transfer from \({}^{1}\)H nuclei. After an initial pair of 90\({}^{\circ}\) pulses, used for the suppression of ringing artefacts (see below), the DualPol sequence transfers z-magnetization from the \({}^{1}\)H to the \({}^{103}\)Rh nuclei, exploiting the form of the DualPol average Hamiltonian (equation 4). 
The resultant \({}^{103}\)Rh z-magnetization is converted into observable transverse magnetization by a final 90\({}^{\circ}\) pulse. The \({}^{103}\)Rh NMR signal is enhanced by a factor of up to \(\left|\eta/\gamma_{5}\right|\sim 31\), relative to that induced by a single 90\({}^{\circ}\) pulse applied to \({}^{103}\)Rh nuclei in thermal equilibrium. Ringing artifacts are strongly suppressed by a phase-cycled pair of 90\({}^{\circ}\) pulses on the proton channel, before the polarization transfer takes place. The signs of the \({}^{1}\)H magnetization and the \({}^{103}\)Rh receiver are simultaneously inverted in successive scans. Since the phases of the ringing are correlated with the phases of the pulses on the \({}^{103}\)Rh channel, the ringing is strongly suppressed in the \({}^{103}\)Rh spectrum. Further suppression of ringing is achieved by additional phase cycling of the PulsePol blocks. The sign of the \({}^{103}\)Rh magnetization is invariant under global phase shifts of the DualPol sequence, while the ringing contribution is phase-correlated and largely cancels out. Similar logic has been used to design excitation schemes for ringing suppression in homonuclear NMR experiments [33; 34]. The rhodium formate \({}^{103}\)Rh spectrum features a single \({}^{103}\)Rh resonance split into a 1:4:6:4:1 pentet by couplings to the four equivalent \({}^{1}\)H nuclei on the formate ligands (figure 7(a). The three-bond \({}^{1}\)H-\({}^{103}\)Rh J-coupling is estimated to be \(\left|{}^{3}J_{\text{RhH}}\right|=4.7\pm 0.1\) Hz, in agreement with the \({}^{1}\)H spectrum. The \({}^{103}\)Rh resonances collapse into a single peak centred at 7516 ppm upon \({}^{1}\)H decoupling (figure 7(b)). The \({}^{103}\)Rh resonances are broadened by the short \({}^{103}\)Rh Figure 5: \({}^{1}\)H spectrum of a \(\sim\)10 mM solution of rhodium formate in THF-d\({}_{8}\), acquired at 9.4 T and at 298K in a single scan. Exponential line broadening (0.75 Hz) was applied. Figure 7: (a) \({}^{103}\)Rh spectrum of a \(\sim\)10 mM solution of rhodium formate in THF-d\({}_{8}\) scaled 2.5 times, acquired using 128 scans at 9.4 T and at 298 K using the pulse sequence in figure 6. (b) \({}^{1}\)H-decoupled \({}^{103}\)Rh spectrum acquired using 128 scans at 9.4 T and at 298 K using the pulse sequence in figure 6 with continuous-wave \({}^{1}\)H decoupling during signal acquisition. Acquisition time for each spectrum was 1 hour. Exponential line broadening (1 Hz) was applied to each spectrum. (see figure-13). The \({}^{103}\)Rh chemical shift is temperature-dependent (see Figure 8). The temperature-dependence of the \({}^{103}\)Rh chemical shift is approximately linear over the relevant temperature range, with a gradient of \(\sim 1.48\) ppm K\({}^{-1}\). This is in general agreement with observations on similar Rh complexes [7; 12]. #### iii.1.3 Solid-state \({}^{13}\)C NMR The chemical shift anisotropy (CSA) of the formate \({}^{13}\)C nuclei was estimated by magic-angle-spinning NMR experiments on rhodium formate solid (figure 9). The estimated eigenvalues of the traceless, symmetric (rank-2) part of the shielding tensor are as follows: \(\sigma_{xx}^{(2)}=65.1\) ppm, \(\sigma_{yy}^{(2)}=5.5\) ppm, and \(\sigma_{zz}^{(2)}=-70.7\) ppm. 
This corresponds to the following Frobenius norm of the rank-2 \({}^{13}\)C shielding tensor: \[||\mathbf{\sigma}^{(2)}||(^{13}\text{C})=\{(\sigma_{xx}^{(2)})^{2}+(\sigma_{yy}^{(2)})^{2}+(\sigma_{zz}^{(2)})^{2}\}^{1/2}\] \[=96.3\pm 1.0\text{ ppm} \tag{5}\] ### Relaxation Times #### iii.2.1 \({}^{1}\)H-Detected \({}^{103}\)Rh \(T_{1}\) \({}^{103}\)Rh \(T_{1}\) relaxation time constants were measured indirectly through \({}^{1}\)H NMR signals using the sequence shown in figure 10. DualPol is used to transfer z-magnetization from the \({}^{1}\)H nuclei to the \({}^{103}\)Rh nuclei, where it is allowed to relax towards equilibrium during the relaxation interval \(\tau_{\text{relax}}\). For field-dependent relaxation measurements, the sample is shuttled to a region of lower magnetic field during this interval, and back again. A proton destruction filter is applied to eliminate any residual proton magnetisation, such as that generated during \(\tau_{\text{relax}}\) through longitudinal relaxation towards equilibrium. Remaining \({}^{103}\)Rh z-magnetisation, selected for by the two 90\({}^{\circ}\) pulses, is now transferred back to \({}^{1}\)H z-magnetisation by a second DualPol block and is selected for by a proton z-filter. A final \({}^{1}\)H 90\({}^{\circ}\) pulse generates observable \({}^{1}\)H transverse magnetization. The sequence is repeated with variation of \(\tau_{\text{relax}}\) in order to follow the equilibration of longitudinal \({}^{103}\)Rh magnetization. Figure 8: \({}^{103}\)Rh chemical shift of rhodium formate dissolved in THF-d\({}_{8}\) at 9.4 T, as a function of temperature. The chemical shifts are referenced to \(\Xi\left({}^{103}\text{Rh}\right)=3.16\%\). Figure 10: Sequence used for the indirect measurement of rhodium T\({}_{1}\) through \({}^{1}\)H NMR signals. Phase cycles are given by \(\phi_{1}=[x,x,-x,-x]\), \(\phi_{2}=[-x,x,-x,x]\), \(\phi_{3}=[x,x,x,x,y,y,y,-x,-x,-x,-y,-y,-y,-y]\) and the receiver \(\phi_{\text{rec}}=[x,-x,-x,x,y,-y,-y,-x,x,x,-x,-y,y,y,-y]\). The optional shuttling of the sample to low field, and back again, during the interval \(\tau_{\text{relax}}\), is indicated. Figure 9: Rhodium formate \({}^{13}\)C\(\{^{1}\)H\(\}\) solid-state CPMAS [35] NMR spectrum obtained at a spinning frequency of 4 kHz acquired using 2048 scans at 14.1 T and at 303 K. The chemical shift was referenced to adamantane. The contact time was 160 \(\mu\)s. The recycle delay was 3 s. \(\sim\)150 mg of sample was used. Further details of the pulse sequence are provided in the Supporting Information. The trajectory of indirectly-detected \({}^{103}\)Rh z-magnetization in a field of 9.4 T is shown in figure 11(a). The trajectory fits well to a single-exponential decay with time constant
#### iii.2.2 \({}^{1}\)H-Detected \({}^{103}\)Rh \(T_{2}\)

The sequence shown in figure 12 was used to measure the \({}^{103}\)Rh spin-spin relaxation time constant \(T_{2}\) in high magnetic field. Conversion of \({}^{1}\)H z-polarization to \({}^{103}\)Rh z-polarization is achieved via DualPol. \({}^{103}\)Rh transverse magnetisation is generated by a 90\({}^{\circ}\) pulse and allowed to decay during the subsequent spin echo of duration \(\tau_{\rm echo}\). The ensuing 90\({}^{\circ}\) \({}^{103}\)Rh pulse returns the remaining transverse \({}^{103}\)Rh magnetisation back to longitudinal \({}^{103}\)Rh polarisation. A \({}^{1}\)H destruction filter destroys any residual \({}^{1}\)H magnetisation before another DualPol cross-polarisation block transfers \({}^{103}\)Rh z-magnetisation back to \({}^{1}\)H z-magnetization. The \({}^{1}\)H z-filter selects for \({}^{1}\)H z-magnetization before the \({}^{1}\)H signal is induced by the final 90\({}^{\circ}\) \({}^{1}\)H pulse. The pulse sequence is repeated varying the echo delay \(\tau_{\rm echo}\) in order to follow the decay of \({}^{103}\)Rh transverse magnetization. The trajectory of indirectly-detected \({}^{103}\)Rh transverse magnetization in a field of 9.4 T is shown in figure 13. The trajectory fits well to a single-exponential decay with time constant \(T_{2}(^{103}{\rm Rh})=0.181\pm 0.001\) s. Note that the measured value of \(T_{2}\) is much smaller than \(T_{1}\) under the same conditions.

#### iii.2.3 \({}^{13}\)C inversion-recovery

As discussed below, the rotational correlation time \(\tau_{c}\) of the rhodium formate complex may be estimated by a study of the \({}^{13}\)C longitudinal relaxation. This data was obtained by an indirect detection method exploiting the scalar-coupled formate protons, as described in the Supporting Information. The inversion-recovery data fits well to a single-exponential recovery with a time constant of \(2.64\pm 0.13\) s for a solution in THF-\(\rm d_{8}\), in a magnetic field of 9.4 T. However, as described below, the inversion-recovery curve for the \({}^{13}\)C magnetization is best analyzed using a bi-exponential relaxation model.

Figure 11: (a) Decay curve for \({}^{103}\)Rh longitudinal magnetization at a field of 9.4 T, obtained using the pulse sequence in figure 10, but without shuttling the sample to low field. The data was acquired in \(\sim\)20 minutes. The integrals are normalised against the \({}^{1}\)H spectrum obtained by a single \({}^{1}\)H 90\({}^{\circ}\) pulse applied to a system in thermal equilibrium at 9.4 T. The data fits well to an exponential decay with time constant \(T_{1}=0.483\pm 0.002\) s. (b) Decay curve for \({}^{103}\)Rh longitudinal magnetization at a field of 1 mT, obtained using the pulse sequence in figure 10, including the shuttling of the sample to low field. The data fits well to an exponential decay with time constant \(T_{1}=28.2\pm 1.2\) s. (c) \({}^{103}\)Rh relaxation rate constant \(T_{1}^{-1}\) as a function of magnetic field strength. The dashed line shows the quadratic function \(T_{1}^{-1}(B)=T_{1}^{-1}(0)+aB^{2}\), where \(T_{1}^{-1}(0)=0.065\pm 0.038\) s\({}^{-1}\) and \(a=0.023\pm 0.001\) s\({}^{-1}\) T\({}^{-2}\).

## IV Discussion

As shown in figure 11(c), the \({}^{103}\)Rh relaxation rate constant \(T_{1}^{-1}\) has a quadratic dependence on magnetic field \(B\), with an additional zero-field contribution of \(T_{1}^{-1}(0)=0.0653\pm 0.0383\) s\({}^{-1}\). The quadratic field dependence is consistent 
with a dominant chemical shift anisotropy (CSA) relaxation mechanism, as is commonly observed for the \({}^{103}\)Rh NMR of rhodium complexes [12; 36].

It is difficult to estimate the \({}^{103}\)Rh chemical shift anisotropy by solid-state NMR: the small magnetogyric ratio of \({}^{103}\)Rh and the very large CSA value make direct solid-state \({}^{103}\)Rh NMR experiments very demanding. Our attempts to use the PROSPR method [37] to observe the \({}^{103}\)Rh spectrum indirectly in the solid state, by saturation transfer to the \({}^{1}\)H nuclei, were also unsuccessful. This is likely due to the very small dipole-dipole couplings between \({}^{1}\)H and \({}^{103}\)Rh nuclei in this complex, which greatly inhibit dipolar-mediated polarization transfer in the solid state. The symmetry of the complex indicates that the \({}^{103}\)Rh CSA tensors should have uniaxial symmetry (\(\eta=0\)) with their unique principal axis along the Rh-Rh bond. This property is assumed in the following discussion.

Although the \({}^{103}\)Rh CSA may not be measured directly, it is possible to estimate it by a combination of field-dependent \({}^{103}\)Rh and \({}^{13}\)C \(T_{1}\) measurements. The compact cage structure of the rhodium formate complex (figure 1) suggests that, to a good approximation, the complex tumbles in solution as a near-rigid body, with a common rotational correlation time \(\tau_{\rm c}\) for all spin interactions. This approximation allows a correlation time estimate from \({}^{13}\)C NMR to be applied in the context of \({}^{103}\)Rh NMR.

A \({}^{13}\)C nucleus of rhodium formate experiences two strong anisotropic interactions: the \({}^{13}\)C-\({}^{1}\)H dipole-dipole coupling with the directly-bonded hydrogen nucleus, and the \({}^{13}\)C chemical shift anisotropy. For point nuclei (i.e. ignoring the spatial spread of the nuclear wavefunctions), the \({}^{13}\)C-\({}^{1}\)H dipole-dipole coupling constant is given by \(b_{CH}=-(\mu_{0}/4\pi)\hbar\gamma_{\rm C}\gamma_{\rm H}r_{CH}^{-3}\), where \(r_{CH}\) is the \({}^{13}\)C-\({}^{1}\)H internuclear distance [38]. Quantum chemical calculations [32] (see SI) predict an internuclear \({}^{13}\)C-\({}^{1}\)H distance of 1.097 Å, corresponding to a dipole-dipole coupling constant of \(b_{CH}=-2\pi\times 22.8\) kHz. However, solid-state NMR studies have shown that the true dipole-dipole coupling is weakened by the angular spread of the \({}^{1}\)H wavefunctions, associated with the zero-point librational motion of the C-H bonds [39]. In the calculations below, we therefore assume a \({}^{13}\)C-\({}^{1}\)H dipole-dipole coupling constant of \(b_{CH}=-2\pi\times(20.4\pm 0.5)\) kHz.

For isolated \({}^{13}\)C-\({}^{1}\)H spin systems in the extreme narrowing approximation (fast tumbling), the theoretical recovery of \({}^{13}\)C longitudinal magnetization \(M_{z}(t)\) after perturbation from equilibrium at time \(t=0\) is expected to follow the biexponential curve
\[M_{z}(t)=M_{z}^{\rm eq}+\left(M_{z}(0)-M_{z}^{\rm eq}\right)\times\frac{1}{2}\left(\exp\{-(\tfrac{1}{2}b_{CH}^{2}+\tfrac{1}{5}\omega_{\rm CSA}^{2})\tau_{\rm c}t\}+\exp\{-(\tfrac{3}{2}b_{CH}^{2}+\tfrac{1}{5}\omega_{\rm CSA}^{2})\tau_{\rm c}t\}\right) \tag{6}\]
where \(M_{z}^{\rm eq}\) is the thermal equilibrium \({}^{13}\)C magnetization, and \(\omega_{\rm CSA}\) is defined as follows:
\[\omega_{\rm CSA}=-\gamma_{\rm C}B_{0}||\mathbf{\sigma}^{(2)}|| \tag{7}\]
where \(||\mathbf{\sigma}^{(2)}||\) is the norm of the \({}^{13}\)C shielding tensor, as defined in equation 5. 
The biexponential form of equation 6 is due to \({}^{1}\)H-\({}^{13}\)C cross-relaxation during the magnetization recovery [40; 41; 42]. In a magnetic field of 9.4 T, the \({}^{13}\)C CSA, as estimated by \({}^{13}\)C solid-state NMR (section III.1.3), corresponds to an interaction strength of \(\omega_{\rm CSA}\simeq 2\pi\times(9.7\pm 0.1)\) kHz. By fitting the experimental \({}^{13}\)C inversion-recovery trajectory to an equation of the form in eq. 6, we obtain the following estimate of the rotational correlation time for the rhodium formate paddlewheel complex in THF-d\({}_{8}\) solution at 298 K: \(\tau_{\rm c}\simeq 24.5\pm 1.5\) ps.

The \({}^{103}\)Rh relaxation may now be analyzed using the estimate of \(\tau_{\rm c}\) from the \({}^{13}\)C data. As shown in figure 11, the \({}^{103}\)Rh \(T_{1}^{-1}\) relaxation rate constant is well-described by the function \(T_{1}^{-1}(B)=T_{1}^{-1}(0)+aB^{2}\), with the field-independent term \(T_{1}^{-1}(0)=0.065\pm 0.038\,\mathrm{s}^{-1}\), and the quadratic coefficient \(a=0.023\pm 0.001\,\mathrm{s}^{-1}\,\mathrm{T}^{-2}\). The quadratic field-dependent term may be ascribed to the CSA mechanism.

Figure 13: Decay curve for \({}^{103}\)Rh transverse magnetization at a field of 9.4 T, obtained using the pulse sequence in figure 12. The data fits well to an exponential decay with time constant \(T_{2}=0.181\pm 0.001\) s. The integrals are normalised against the \({}^{1}\)H spectrum obtained by a single \({}^{1}\)H 90\({}^{\circ}\) pulse applied to a system in thermal equilibrium at 9.4 T.

In the extreme narrowing approximation (fast tumbling), the CSA contribution to the \(T_{1}^{-1}\) relaxation rate constant for \({}^{103}\)Rh is given by [42]
\[\left(T_{1}(^{103}\mathrm{Rh})\right)^{-1}_{\mathrm{CSA}}=\frac{2}{15}B_{0}^{2}\gamma_{\mathrm{Rh}}^{2}\Delta\sigma^{2}\tau_{\mathrm{c}} \tag{8}\]
where the shielding anisotropy \(\Delta\sigma\) is defined as follows [42]:
\[\Delta\sigma=\frac{3}{2}(\sigma_{ZZ}-\sigma_{\mathrm{iso}})=-\frac{3}{2}\delta^{\mathrm{aniso}} \tag{9}\]
Equation 8 implies that the quadratic field-dependent coefficient \(a\) for the \({}^{103}\)Rh \(T_{1}^{-1}\) relaxation rate constant is given by
\[a=\frac{2}{15}\gamma_{\mathrm{Rh}}^{2}\Delta\sigma^{2}\tau_{\mathrm{c}} \tag{10}\]
The experimental estimate of the quadratic coefficient \(a=0.023\pm 0.001\,\mathrm{s}^{-1}\,\mathrm{T}^{-2}\) may be combined with the correlation time estimate \(\tau_{\mathrm{c}}\simeq 24.5\pm 1.5\,\mathrm{ps}\) to obtain the following experimental estimate of the \({}^{103}\)Rh shielding anisotropy: \(|\Delta\sigma|=9900\pm 540\) ppm. This is a remarkably large value. Although prior estimates of the \({}^{103}\)Rh CSA are scarce in the literature, CSA values for heavy spin-1/2 nuclei are sometimes of a similar magnitude [43; 44; 45; 46; 47; 48; 49; 50; 51; 52], with closely related platinum(II) compounds displaying \({}^{195}\)Pt CSA values on the order of 10,000 ppm [50; 52; 43; 53]. To our knowledge, the only other measurements of \({}^{103}\)Rh CSAs, in very different Rh(III) compounds, were on the order of \(\sim\)500-1500 ppm [53; 54]. This dramatic range is also typical [51; 46; 47] for heavy spin-1/2 nuclei. 
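The arithmetic leading to the \(|\Delta\sigma|\) estimate can be reproduced directly from equations 7 and 10. The short sketch below is an illustrative check only: it uses the values quoted in the text together with standard tabulated magnetogyric ratios, which are our assumption rather than values taken from the paper.

```python
import math

# Standard magnetogyric ratios in rad s^-1 T^-1 (assumed tabulated values)
gamma_C  = 6.728e7    # 13C
gamma_Rh = -0.8468e7  # 103Rh

# Equation (7): omega_CSA = -gamma_C * B0 * ||sigma(2)|| for 13C at 9.4 T
B0 = 9.4                 # T
norm_sigma2_C = 96.3e-6  # ||sigma(2)||(13C) from equation (5), ppm -> dimensionless
omega_CSA = abs(gamma_C * B0 * norm_sigma2_C)
print(f"omega_CSA(13C) = 2*pi x {omega_CSA / (2 * math.pi) / 1e3:.1f} kHz")  # ~2*pi x 9.7 kHz

# Equation (10): a = (2/15) * gamma_Rh^2 * dsigma^2 * tau_c, solved for |dsigma|
a = 0.023         # s^-1 T^-2, quadratic coefficient of T1^-1(B)
tau_c = 24.5e-12  # s, rotational correlation time from the 13C analysis
dsigma = math.sqrt(15 * a / (2 * gamma_Rh**2 * tau_c))
print(f"|Delta sigma|(103Rh) = {dsigma * 1e6:.0f} ppm")  # ~9900 ppm
```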
Using ORCA [55; 32; 56], \({}^{103}\)Rh shielding tensors were computed at the TPSSh/SARC-ZORA-TZVPP level of theory using implicit solvation (CPCM [57; 58] for THF), the zeroth-order regular approximation (ZORA) [59; 60] for the inclusion of relativistic effects, GIAOs, the RI approximation [56], and the \(\tau\)-dependent correction as suggested by Dobson [61; 62; 63] (see Supporting Information). The result is summarised in table 1. The calculated CSA is somewhat smaller than the experimental estimate. Underestimation of CSAs calculated using the ZORA method has been reported for other heavy spin-1/2 nuclei [64; 65; 66], where better agreement might be obtained with higher-order four-component relativistic calculations [65] or by accounting for the relativistic breakdown of the relationship between spin-rotation and the paramagnetic contribution to the anisotropy [66].

The origin of the zero-field contribution \(T_{1}^{-1}(0)\) to the \({}^{103}\)Rh relaxation rate constant is currently unknown. As discussed in the Supporting Information, the \({}^{103}\)Rh-\({}^{103}\)Rh and \({}^{103}\)Rh-\({}^{1}\)H dipole-dipole couplings are much too weak to account for this term. In the literature, the low-field relaxation of heavy spin-1/2 nuclei is often attributed to a spin-rotation mechanism. However, to our knowledge, this conclusion has not been supported by any theoretical or computational studies.

The experimental estimate of the \({}^{103}\)Rh \(T_{2}\) is much shorter than the estimate of \(T_{1}\) under the same conditions (\(T_{2}=0.181\pm 0.001\) s as against \(T_{1}=0.483\pm 0.002\) s, in a field of 9.4 T). We tentatively attribute the short \(T_{2}\) value to the modulation of the isotropic chemical shift by ligand exchange at the axial positions. Other decoherence mechanisms, such as diffusion in the presence of inhomogeneous magnetic fields, are expected to be too weak to account for the observed \(T_{2}\) value in this case.

In conclusion, this paper has demonstrated a methodology for the indirect estimation of \({}^{103}\)Rh \(T_{1}\) and \(T_{2}\) values by magnetization transfer to and from \({}^{1}\)H nuclei using the DualPol pulse sequence. Field-dependent \({}^{103}\)Rh \(T_{1}\) measurements indicate a very large chemical shift anisotropy for the \({}^{103}\)Rh sites in the rhodium formate paddlewheel complex. The field-independent contribution to the \({}^{103}\)Rh relaxation rate constant is not fully understood at the current time. A limitation of the methodology described here is the prerequisite of a spin system with direct scalar couplings between \({}^{103}\)Rh nuclei and a proton, which is not present in all rhodium complexes. This limitation may be addressed via the use of a relay nucleus, such as \({}^{13}\)C at natural abundance [7; 67; 68; 69].

###### Acknowledgements.

We acknowledge funding from the European Research Council (grant 786707-FunMagResBeacons), and EPSRC-UK (grants EP/P009980/1, EP/P030491/1, EP/V055593/1). M.L. acknowledges financial support by the Max-Planck-Gesellschaft and the Max-Planck-Institut für Kohlenforschung. We thank Alexander A. Auer for advice on quantum chemical calculations. We thank Professor Brian E. Mann for advice and historical insights on rhodium NMR. We thank Alexey Kiryutin for sharing his designs for the sample shuttle.

## Author Declarations

### Conflict of interest

The authors have no conflicts to disclose. 
\begin{table} \begin{tabular}{|l|l|} \hline Method & \(|\Delta\sigma|\)/ppm \\ \hline Calculated & 7070 \\ \hline Experimental estimate & 9900 \(\pm\) 540 \\ \hline \end{tabular} \end{table} Table 1: Estimates of the \({}^{103}\)Rh shielding tensor anisotropy \(\Delta\sigma\) of Rh formate, defined in equation 9. The computational estimate is given by quantum chemical calculation using ORCA [32]. The experimental estimate is from the analysis of field-dependent \({}^{103}\)Rh relaxation in solution, as described in this paper.

## Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2309.16872
The support of mixed area measures involving a new class of convex bodies
Mixed volumes in $n$-dimensional Euclidean space are functionals of $n$-tuples of convex bodies $K,L,C_1,\ldots,C_{n-2}$. The Alexandrov--Fenchel inequalities are fundamental inequalities between mixed volumes of convex bodies. As very special cases they cover or imply many important inequalities between basic geometric functionals. A complete characterization of the equality cases in the Alexandrov--Fenchel inequality remains a challenging open problem. Major recent progress was made by Yair Shenfeld and Ramon van Handel \cite{SvH22,SvH23+}; in particular, they resolved the problem in the cases where $C_1,\ldots,C_{n-2}$ are polytopes, zonoids or smooth bodies (under some dimensional restriction). In \cite{HugReichert23+} we introduced the class of polyoids, which are defined as limits of finite Minkowski sums of polytopes having a bounded number of vertices. Polyoids encompass polytopes, zonoids and triangle bodies, and they can be characterized by means of generating measures. Based on this characterization and Shenfeld and van Handel's contribution, we extended their result to polyoids (or smooth bodies). Our previous result was stated in terms of the support of the mixed area measure associated with the unit ball $B^n$ and $C_1,\ldots,C_{n-2}$. This characterization result is completed in the present work which more generally provides a geometric description of the support of the mixed area measure of an arbitrary $(n-1)$-tuple of polyoids (or smooth bodies). The result confirms a long-standing conjecture by Rolf Schneider in the case of polyoids and hence, in particular, of zonoids.
Daniel Hug, Paul A. Reichert
2023-09-28T22:01:29Z
http://arxiv.org/abs/2309.16872v1
# The support of mixed area measures involving a new class of convex bodies

###### Abstract

Mixed volumes in \(n\)-dimensional Euclidean space are functionals of \(n\)-tuples of convex bodies \(K,L,C_{1},\ldots,C_{n-2}\). The Alexandrov--Fenchel inequalities are fundamental inequalities between mixed volumes of convex bodies. As very special cases they cover or imply many important inequalities between basic geometric functionals. A complete characterization of the equality cases in the Alexandrov--Fenchel inequality remains a challenging open problem. Major recent progress was made by Yair Shenfeld and Ramon van Handel [9, 10]; in particular, they resolved the problem in the cases where \(C_{1},\ldots,C_{n-2}\) are polytopes, zonoids or smooth bodies (under some dimensional restriction). In [3] we introduced the class of polyoids, which are defined as limits of finite Minkowski sums of polytopes having a bounded number of vertices. Polyoids encompass polytopes, zonoids and triangle bodies, and they can be characterized by means of generating measures. Based on this characterization and Shenfeld and van Handel's contribution, we extended their result to polyoids (or smooth bodies). Our previous result was stated in terms of the support of the mixed area measure associated with the unit ball \(B^n\) and \(C_1,\ldots,C_{n-2}\). This characterization result is completed in the present work which more generally provides a geometric description of the support of the mixed area measure of an arbitrary \((n-1)\)-tuple of polyoids (or smooth bodies). The result confirms a long-standing conjecture by Rolf Schneider in the case of polyoids and hence, in particular, of zonoids.

MSC-classes 2020. 52A39, 52A20, 52A21, 52A40

**Keywords.** Polytope, zonoid, polyoid, Alexandrov-Fenchel inequality, generating measure, mixed area measure

## 1 Introduction

Mixed volumes of convex bodies (nonempty compact convex sets) in Euclidean space \(\mathbb{R}^{n}\), \(n\geq 2\), are symmetric functionals of \(n\)-tuples of convex bodies, which naturally arise as coefficients of polynomial expansions of nonnegative Minkowski combinations of convex bodies. We write \(\mathrm{V}\) for the volume functional (Lebesgue measure) and \(\alpha_{1}K_{1}+\cdots+\alpha_{m}K_{m}\) for the Minkowski combination of the convex bodies \(K_{1},\ldots,K_{m}\subset\mathbb{R}^{n}\) with nonnegative coefficients \(\alpha_{1},\ldots,\alpha_{m}\in\mathbb{R}\). Then \[\mathrm{V}(\alpha_{1}K_{1}+\cdots+\alpha_{m}K_{m})=\sum_{i_{1},\ldots,i_{n}=1 }^{m}\mathrm{V}(K_{i_{1}},\ldots,K_{i_{n}})\alpha_{i_{1}}\cdots\alpha_{i_{n}}, \tag{1}\] where \(\mathrm{V}(K_{i_{1}},\ldots,K_{i_{n}})\) is called the mixed volume of \(K_{i_{1}},\ldots,K_{i_{n}}\). A local counterpart of the mixed volumes is provided by the mixed area measures. For convex bodies \(K_{1},\ldots,K_{n-1}\subset\mathbb{R}^{n}\), the mixed area measure \(\mathrm{S}(K_{1},\ldots,K_{n-1},\cdot)\) is the uniquely determined Borel measure on the Euclidean unit sphere \(\mathbb{S}^{n-1}\) such that \[\mathrm{V}(K_{1},\ldots,K_{n-1},K_{n})=\frac{1}{n}\int_{\mathbb{S}^{n-1}}h_{ K_{n}}(u)\;\;\mathrm{S}(K_{1},\ldots,K_{n-1},\mathrm{d}u) \tag{2}\] holds for all convex bodies \(K_{n}\subset\mathbb{R}^{n}\), where \(h_{K_{n}}\) is the support function of \(K_{n}\) (see [7, Sect. 5.1] or [4, Thm. 4.1]). A deep inequality for mixed volumes of convex bodies, with many consequences and applications to diverse fields, has been found and established by Alexandrov [1] (see Schneider [7, Sect. 
7.3], also for some historical comments). We write \(\mathcal{K}^{n}\) for the set of convex bodies in \(\mathbb{R}^{n}\). **Theorem** (Alexandrov-Fenchel Inequality).: _Let \(K,L\in\mathcal{K}^{n}\) be convex bodies, and let \(\mathcal{C}=(C_{1},\ldots,C_{n-2})\) be an \((n-2)\)-tuple of convex bodies in \(\mathbb{R}^{n}\). Then_ \[\mathrm{V}(K,L,\mathcal{C})^{2}\geq\mathrm{V}(K,K,\mathcal{C})\;\mathrm{V}(L, L,\mathcal{C}),\] (AFI) _where \(\mathrm{V}(K,L,\mathcal{C})\coloneqq\mathrm{V}(K,L,C_{1},\ldots,C_{n-2})\)._ While the inequality was already established by Alexandrov and various proofs of the inequality are known, some of which were found recently (see [2, 8, 11] and the references given there), a complete characterization of the equality cases remains a major open problem in Brunn-Minkowski theory (see [7, Sect. 7.6]). For recent progress, we mention the work by Shenfeld and van Handel [9, 10] and the literature cited there. Based on their findings for the case where \(\mathcal{C}=(C_{1},\ldots,C_{n-2})\) is a tuple of polytopes, zonoids or smooth bodies (satisfying a weak dimensionality assumption, called supercriticality), the following more general result has been shown in [3]. It confirms a conjecture by Rolf Schneider [7, Conjecture 7.6.16] for a new class of convex bodies, which we called polyoids, that contains all polytopes, zonoids and triangle bodies. A _polyoid_ is a convex body \(K\) for which there is some integer \(k\in\mathbb{N}\) and a sequence of Minkowski sums of polytopes each having at most \(k\) vertices that converges to \(K\); see [3, Sect. 2] (and Section 3 below) for further details and a representation theorem characterizing polyoids. A convex body is smooth if each of its boundary points is contained in a unique supporting hyperplane. **Theorem** (Equality cases in (AFI) for polyoids and smooth bodies [3]).: _Let \(K,L\in\mathcal{K}^{n}\), and let \(\mathcal{C}=(C_{1},\ldots,C_{n-2})\) be a supercritical \((n-2)\)-tuple of polyoids or smooth convex bodies in \(\mathbb{R}^{n}\). Assume that \(\operatorname{V}(K,L,\mathcal{C})>0\). Then_ (AFI) _holds with equality if and only if there are \(a>0\) and \(x\in\mathbb{R}^{n}\) such that_ \[h_{K}=h_{aL+x}\quad\text{ on }\operatorname{supp}\operatorname{S}(B^{n}, \mathcal{C},\cdot),\] _where \(\operatorname{supp}\operatorname{S}(B^{n},\mathcal{C},\cdot)\) denotes the support of the mixed area measure \(\operatorname{S}(B^{n},\mathcal{C},\cdot)\) of the unit ball \(B^{n}\) and the \((n-2)\)-tuple \(\mathcal{C}\)._ For a geometric understanding of the equality cases in the Alexandrov-Fenchel inequality (AFI) it thus remains to describe the support of the measure \(\operatorname{S}(B^{n},\mathcal{C},\cdot)\) in geometric terms. According to another (more general) conjecture by Rolf Schneider [7, Conjecture 7.6.14], the support of the mixed area measure \(\operatorname{S}(K_{1},\ldots,K_{n-1},\cdot)\), for given convex bodies \(K_{1},\ldots,K_{n-1}\subset\mathbb{R}^{n}\), is the closure of the set of \((K_{1},\ldots,K_{n-1})\)_-extreme normal vectors_, for which we write \(\operatorname{cl}\operatorname{ext}(K_{1},\ldots,K_{n-1})\); an explicit definition and further information are given in Section 2. If all convex bodies are polytopes or all are smooth and strictly convex, then the conjecture is known to be true. 
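For example, the definition of a polyoid given above immediately covers the zonoids mentioned earlier: a zonotope is a finite Minkowski sum of segments, and a segment is a polytope with at most two vertices, so every zonoid, being a Hausdorff limit of zonotopes, is a polyoid with \(k=2\). Similarly, every polytope is trivially a polyoid (take a constant sequence with a single summand).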
The conjecture was also recently confirmed by Shenfeld and van Handel in the case of \((n-1)\)-tuples of the form \((B^{n},C_{1},\ldots,C_{n-2})\), where \(C_{i}\) is a zonoid or a smooth convex body in \(\mathbb{R}^{n}\). However, even in the case where the unit ball \(B^{n}\) is replaced by a general zonoid, the conjecture was open up to now. Our main result confirms Schneider's conjecture [7, Conjecture 7.6.14] not only for general \((n-1)\)-tuples of zonoids (or smooth bodies), but for the larger class of polyoids (or smooth bodies). **Theorem 1.1**.: _Let \(\mathcal{C}=(C_{1},\ldots,C_{n-1})\) be an \((n-1)\)-tuple of polyoids (or smooth convex bodies provided at least one of the bodies \(C_{i}\) is smooth and strictly convex) in \(\mathbb{R}^{n}\). Then_ \[\operatorname{supp}\operatorname{S}(\mathcal{C},\cdot)=\operatorname{cl} \operatorname{ext}\mathcal{C}. \tag{3}\] In combination with the preceding theorem on the characterization of the equality cases in (AFI), given in terms of the support of the mixed measure \(\operatorname{S}(B^{n},\mathcal{C},\cdot)\), we thus obtain the following result, which establishes Schneider's conjecture [7, Conjecture 7.6.13] for the class of polyoids (or smooth bodies). **Theorem 1.2**.: _Let \(K,L\in\mathcal{K}^{n}\), and let \(\mathcal{C}=(C_{1},\ldots,C_{n-2})\) be a supercritical \((n-2)\)-tuple of polyoids or smooth convex bodies in \(\mathbb{R}^{n}\). Assume that \(\operatorname{V}(K,L,\mathcal{C})>0\). Then (AFI) holds with equality if and only if there are \(a>0\) and \(x\in\mathbb{R}^{n}\) such that_ \[h_{K}=h_{aL+x}\quad\text{ on }\operatorname{ext}(B^{n},\mathcal{C}).\] In the special case where \(C_{1},\ldots,C_{n-2}\) are all smooth, each unit vector is \((B^{n},\mathcal{C})\)-extreme and therefore \(K\) and \(L\) are homothetic (see [7, Thm.7.6.8]). As another consequence of Theorem 1.1, we obtain the following partial confirmation of a conjecture on the monotonicity of mixed volumes (see [6, Conjecture A\({}^{\prime}\)]). **Theorem 1.3**.: _Let \(K,L\in\mathcal{K}^{n}\) satisfy \(K\subseteq L\). Let \(\mathcal{C}=(C_{1},\ldots,C_{n-1})\) be an \((n-1)\)-tuple of polyoids (or smooth convex bodies provided at least one of the bodies \(C_{i}\) is smooth and strictly convex) in \(\mathbb{R}^{n}\). Then equality holds in_ \[\operatorname{V}(K,\mathcal{C})\leq\operatorname{V}(L,\mathcal{C})\] _if and only if_ \[h_{K}=h_{L}\quad\text{ on }\operatorname{ext}\mathcal{C}. \tag{4}\] Condition (4) is expressed by saying that \(K\) and \(L\) have the same \(\mathcal{C}\)-extreme supporting hyperplanes. In order to show relation (3), which is the main result of this paper, we prove two inclusions. Both inclusions require various preparations and involve new ideas. The main task is to prove the result in the case where \(C_{1},\ldots,C_{n-1}\) are polyoids with generating measures \(\mu_{1},\ldots,\mu_{n-1}\). In order to show that \(\operatorname{supp}\operatorname{S}(\mathcal{C})\subseteq\operatorname{cl} \operatorname{ext}\mathcal{C}\), we express in a first step the support of the mixed area measure of \(\mathcal{C}\) as the closure of the union of the extreme normal vectors \(\operatorname{ext}\mathcal{P}\) of all \((n-1)\)-tuples \(\mathcal{P}\) of polytopes in the support of \(\mu_{1}\otimes\cdots\otimes\mu_{n-1}\). A main tool is Theorem 2.23 which applies to more general bodies than polyoids. 
In a second step, we provide in Section 3 information about projections of touching spaces (the linear subspaces orthogonal to the better known touching cones) and projections of polyoids and their generating measures. In Section 4 we develop a method to characterize what it means that the touching space of a convex body, and in particular of a polyoid, is trivial. These ingredients are combined in Section 7 to complete the proof of the inclusion "\(\subseteq\)". In fact, our arguments for the inclusion "\(\subseteq\)" apply to a formally larger class of convex bodies which we called macroids in [3], see Proposition 7.1. For the reverse inclusion, we proceed by induction over the dimension (see Section 7). A natural ingredient in the argument is a reduction formula that relates the mixed area measures of convex bodies, where some of these bodies are contained in a subspace \(E\), to the mixed area measure of the remaining bodies, projected to the orthogonal subspace \(E^{\perp}\) (see Section 2). A crucial new idea to make the induction work is to reduce the complexity of a polyoid \(M\), which has a nontrivial touching space in direction \(u\), by a construction we call pruning. It ultimately allows us to replace \(M\) locally by a lower dimensional witness polytope \(\operatorname{Re}(M,u)\) which can be used in place of \(M\) to explore the support of a mixed area measure involving \(M\). Motivating examples for the construction of such a polytope and the crucial Witness Lemma 5.8 are contained in Section 5. It is this part of the argument for the inclusion "\(\supseteq\)" which inhibits the extension of Theorem 1.1 to macroids. Another ingredient for the induction is provided in Section 6. It finally allows us in the induction step to replace, for a given direction \(u\), some of the polyoids by their associated witness polytopes. The required Switching Lemma 6.1 is based on concepts of criticality that are discussed in Section 2 and have already proved to be essential in recent work by Shenfeld and van Handel [10].

## 2 Preparations

We work in Euclidean space \(\mathbb{R}^{n}\) with scalar product \(\langle\cdot,\cdot\rangle\), norm \(\|\cdot\|\) and Euclidean metric \(d(\cdot\,,\cdot)\). Most of the time we work with nonempty compact convex subsets of \(\mathbb{R}^{n}\) (convex bodies) and denote the space of all convex bodies in \(\mathbb{R}^{n}\) by \(\mathcal{K}^{n}\), together with the Hausdorff metric. We denote by \(\mathcal{P}^{n}\) the subset of \(\mathcal{K}^{n}\) consisting of polytopes. It is useful to consider some basic operations and concepts from convex geometry also for non-convex sets. This is straightforward for the Minkowski (i.e. vector) sum or Minkowski combinations with real coefficients of arbitrary subsets of \(\mathbb{R}^{n}\). For \(n\in\mathbb{N}\), we set \([n]:=\{1,\ldots,n\}\). If \(\varnothing\neq A\subseteq\mathbb{R}^{n}\), we denote by \(\operatorname{span}A\) the (linear) span and by \(\operatorname{\overline{span}}A:=\operatorname{span}(A-A)\) the linear subspace parallel to the affine span of \(A\). Then \(\dim A:=\dim\operatorname{\overline{span}}A\) is the dimension of \(A\). We write \(\operatorname{relint}B\) for the relative interior of a convex set \(B\subseteq\mathbb{R}^{n}\). For \(x,y\in\mathbb{R}^{n}\), the segment connecting \(x\) and \(y\) is denoted by \([x,y]\) (which equals the convex hull of \(\{x,y\}\)). 
The support function of a subset \(A\subseteq\mathbb{R}^{n}\) is \(h_{A}\colon\mathbb{R}^{n}\to[-\infty,\infty]\), \(u\mapsto\sup\{\langle x,u\rangle\mid x\in A\}\) and the support set of \(A\) in direction \(u\in\mathbb{R}^{n}\setminus\{0\}\) is \[F(A,u)\coloneqq\{x\in A\mid\langle x,u\rangle=h_{A}(u)\},\] which can be the empty set. For a convex body \(A\) and \(u\in\mathbb{R}^{n}\setminus\{0\}\), the support set \(F(A,u)\) is again a convex body.

### Faces and touching spaces

We follow Schneider [7, p. 16] in defining a face of a convex set \(A\subseteq\mathbb{R}^{n}\) as a convex subset \(F\subseteq A\) with the following property: If \(x,y\in A\) and \(F\cap\operatorname{relint}[x,y]\neq\varnothing\), then \([x,y]\subseteq F\). Several useful properties of faces of nonempty closed convex sets are provided in [7, Sect.2.i] and will be used in the following. In particular, for a polytope \(P\) a set \(\varnothing\neq F\subset P\) is a face of \(P\) if and only if it is a support set. Note that "\(\subset\)" means strict inclusion. Next we collect and complement some definitions from [7, p. 85]. As usual, for a subset \(A\subset\mathbb{R}^{n}\) we set \(A^{\perp}\coloneqq\{v\in\mathbb{R}^{n}\mid\langle v,a\rangle=0\text{ for }a\in A\}\) (which equals the orthogonal complement of \(\operatorname{span}A\)). For a vector \(u\in\mathbb{R}^{n}\), we set \(u^{\perp}\coloneqq\{u\}^{\perp}\). **Definition 2.1**.: Let \(K\) be a convex body contained in some linear subspace \(V\subseteq\mathbb{R}^{n}\). The set of common outer normal vectors (including \(0\)) of some set \(S\subseteq K\) is \[N_{V}(K,S)\coloneqq\{u\in V\setminus\{0\}\mid S\subseteq F(K,u)\}\cup\{0\}\subseteq V\] and called the _normal cone of \(K\) at \(S\) (in \(V\))_. If \(u\in V\setminus\{0\}\), then \(N_{V}(K,F(K,u))\) is a closed convex cone containing \(u\). As such, it has a unique face \(T_{V}(K,u)\) such that \(u\in\operatorname{relint}T_{V}(K,u)\). This face is called the _touching cone of \(K\) in direction \(u\)_. The space \(\operatorname{TS}_{V}(K,u)\coloneqq V\cap T_{V}(K,u)^{\perp}\) is called the _touching space of \(K\) in direction \(u\)_. In case of \(V=\mathbb{R}^{n}\), we write \(N\coloneqq N_{\mathbb{R}^{n}},T\coloneqq T_{\mathbb{R}^{n}}\) and \(\operatorname{TS}\coloneqq\operatorname{TS}_{\mathbb{R}^{n}}\). The following definition of extreme normal directions for an \((n-1)\)-tuple of convex bodies in \(\mathbb{R}^{n}\) can be easily seen to be equivalent to the definition given in [7, p. 87] by means of [7, Lem. 5.1.9], applied in \(u^{\perp}\) for some \(u\in\mathbb{S}^{n-1}\). **Definition 2.2**.: If \(n\geq 1\) and \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-1})\) is a tuple of convex bodies in \(\mathbb{R}^{n}\), then \(u\in\mathbb{S}^{n-1}\) is said to be a _\(\boldsymbol{\mathcal{C}}\)-extreme (normal) vector_ if there are one-dimensional linear subspaces of \(\operatorname{TS}(C_{i},u)\), for \(i\in[n-1]\), with linearly independent directions. The set of all \(\boldsymbol{\mathcal{C}}\)-extreme normal vectors is denoted by \(\operatorname{ext}\boldsymbol{\mathcal{C}}\). **Remark 2.3**.: In the situation of Definition 2.2, \(u\in\operatorname{ext}\boldsymbol{\mathcal{C}}\) if and only if \[\dim\sum_{i\in I}\operatorname{TS}(C_{i},u)\geq|I|\quad\text{for }I\subseteq[n-1], \tag{5}\] where the empty sum is understood as the trivial vector space. For the equivalence of this condition with Definition 2.2, see [7, Thm. 5.1.8]. 
With the notation introduced later in Definition 2.17, condition (5) will be expressed by writing \[\operatorname{V}(\operatorname{TS}(C_{1},u),\ldots,\operatorname{TS}(C_{n-1},u))>0;\] see also the more general Lemma 2.20. The facial structure, touching cones and touching spaces of polytopes are reasonably well-understood. In Lemmas 2.4 and 2.5 we provide some related information that will be needed in the sequel. **Lemma 2.4** (Facial stability).: _Let \(P=\operatorname{conv}\{v_{1},\ldots,v_{\ell}\}\in\mathcal{P}^{n}\) and \(u\in\mathbb{R}^{n}\setminus\{0\}\). Consider the set \(I_{u}\coloneqq\{i\in[\ell]\mid v_{i}\in F(P,u)\}\)._ _Then there is an \(\varepsilon\in(0,\|u\|)\) such that for all \(v,w_{1},\ldots,w_{\ell}\in\mathbb{R}^{n}\) with \(d(u,v)<\varepsilon\) and \(d(v_{i},w_{i})<\varepsilon\), \(Q\coloneqq\operatorname{conv}\{w_{1},\ldots,w_{\ell}\}\) satisfies \(F(Q,v)\subseteq\operatorname{conv}\{w_{i}\mid i\in I_{u}\}\)._ _Equality holds if and only if additionally \(v\in\overline{\operatorname{span}\{w_{i}\mid i\in I_{u}\}}^{\perp}\)._ Proof.: Note that \(I_{u}\) is not empty. If \(I_{u}=[\ell]\), then the first claim follows from \(F(Q,v)\subseteq Q\). Now assume that \([\ell]\setminus I_{u}\neq\varnothing\). Let \(\|u\|>\varepsilon>0\) and \(v,w_{1},\ldots,w_{\ell}\in\mathbb{R}^{n}\) with \(d(u,v)<\varepsilon\) and \(d(v_{i},w_{i})<\varepsilon\). Then \(v\neq 0\). Define convex bodies \[Q\coloneqq\operatorname{conv}\{w_{1},\ldots,w_{\ell}\}\] and \[P^{\prime}\coloneqq\operatorname{conv}\{v_{i}\mid i\in[\ell]\setminus I_{u}\},\quad Q^{\prime}\coloneqq\operatorname{conv}\{w_{i}\mid i\in[\ell]\setminus I _{u}\}.\] Note that \(Q\) and \(Q^{\prime}\) depend on \(v,w_{1},\ldots,w_{\ell}\). We have \[h(P^{\prime},u)=\max_{i\in[\ell]\setminus I_{u}}\left\langle v_{i},u\right\rangle <\max_{i\in[\ell]}\left\langle v_{i},u\right\rangle=h(P,u)=h(F(P,u),u).\] By continuity in \(u\) and \((v_{i})_{i\in[\ell]}\) of the left and right term of the inequality, we can choose \(\varepsilon>0\) such that for all \(v,w_{1},\ldots,w_{\ell}\in\mathbb{R}^{n}\) with \(d(u,v)<\varepsilon\) and \(d(v_{i},w_{i})<\varepsilon\), \[h(Q^{\prime},v)<h(Q,u).\] So \(F(Q,v)\subseteq\operatorname{conv}\{w_{i}\mid i\in I_{u}\}\), remembering that \(F(Q,v)\) is spanned by vertices of \(Q\)[4, Theorem 1.19]. If equality holds, then \(v\in(\overline{\operatorname{span}}\,F(Q,v))^{\perp}=(\overline{\operatorname{ span}}\{w_{i}\mid i\in I_{u}\})^{\perp}\). Conversely, assume \(v\in(\overline{\operatorname{span}}\{w_{i}\mid i\in I_{u}\})^{\perp}\). Since \(F(Q,v)\) is spanned by vertices \(w_{i}\) with \(i\in I_{u}\) and nonempty, we may assume that \(1\in I_{u}\) and \(\left\langle w_{1},v\right\rangle=h(Q,v)\). For \(v\in(\overline{\operatorname{span}}\{w_{i}\mid i\in I_{u}\})^{\perp}\), we get \[\left\langle w_{i},v\right\rangle=\left\langle w_{1},v\right\rangle+\left\langle w _{i}-w_{1},v\right\rangle=h(Q,v)\quad\text{for all $i\in I_{u}$},\] hence \(w_{i}\in F(Q,v)\), and therefore also \(\operatorname{conv}\{w_{i}\mid i\in I_{u}\}\subseteq F(Q,v)\) holds. The next lemma should be compared to [7, (2.26)]. **Lemma 2.5**.: _Let \(P\in\mathcal{P}^{n}\) be a polytope and \(u\in\mathbb{R}^{n}\setminus\{0\}\). Then_ \[T(P,u)=N(P,F(P,u))\quad\text{and}\quad\operatorname{TS}(P,u)=\overline{\operatorname {span}}\,F(P,u).\] Proof.: If \(v\in N(P,F(P,u))\), then \(F(P,u)\subseteq F(P,v)\), and thus \(v\in(\overline{\operatorname{span}}\,F(P,u))^{\perp}\). 
Hence \(N(P,F(P,u))\subseteq(\overline{\operatorname{span}}\,F(P,u))^{\perp}\) and therefore \[\overline{\operatorname{span}}\,F(P,u)\subseteq N(P,F(P,u))^{\perp}\subseteq T (P,u)^{\perp}=\operatorname{TS}(P,u).\] By Lemma 2.4, there is an open neighborhood \(U\subseteq\mathbb{R}^{n}\setminus\{0\}\) of \(u\) such that for all \(v\in U\), \[F(P,v)\subseteq F(P,u)\] and such that for all \(v\in U^{\prime}\coloneqq U\cap(\overline{\operatorname{span}}\,F(P,u))^{\perp}\), even equality holds. So \(U^{\prime}\subseteq N(P,F(P,u))\subseteq(\overline{\operatorname{span}}\,F( P,u))^{\perp}\). But \(U^{\prime}\) is open in \((\overline{\operatorname{span}}\,F(P,u))^{\perp}\), so that \(u\in\operatorname{relint}N(P,F(P,u))\). By definition of \(T(P,u)\), \[T(P,u)=N(P,F(P,u))\] and hence (by the preceding argument) \[(\overline{\operatorname{span}}\,F(P,u))^{\perp}=\operatorname{span}\,N(P,F( P,u))=\operatorname{span}\,T(P,u).\] Thus we get \(\overline{\operatorname{span}}\,F(P,u)=T(P,u)^{\perp}=\operatorname{TS}(P,u)\).

### Mixed volumes and mixed area measures

See [7, 4] for an introduction to mixed volumes and mixed area measures of convex bodies or differences of support functions of convex bodies. We start with some simple comments and conventions.

**Conventions concerning tuples of sets**

Most of the time, the ordering of a tuple will not be relevant for our purposes. This is why a _subtuple_ of a tuple \(\boldsymbol{\mathcal{A}}=(A_{1},\ldots,A_{\ell})\), \(\ell\in\mathbb{N}_{0}\), will denote any tuple \(\boldsymbol{\mathcal{B}}\) that is a prefix of a permutation of \(\boldsymbol{\mathcal{A}}\). The notation for this situation is \(\boldsymbol{\mathcal{B}}\leq\boldsymbol{\mathcal{A}}\). Every set \(I\subseteq[\ell]\) can be uniquely written as \(I=\{i_{1},\ldots,i_{m}\}\) such that \(m\in\mathbb{N}_{0}\) and \((i_{j})_{j\in[m]}\) is strictly increasing in \(j\). Then we assign to \(I\) a subtuple of \(\boldsymbol{\mathcal{A}}\), \[\boldsymbol{\mathcal{A}}_{I}\coloneqq(A_{i_{1}},\ldots,A_{i_{m}})\leq \boldsymbol{\mathcal{A}}.\] The _span_ of a tuple of _nonempty_ sets \(\boldsymbol{\mathcal{A}}=(A_{1},\ldots,A_{\ell})\) with \(A_{i}\subseteq\mathbb{R}^{n}\) is \[\overline{\operatorname{span}}\,\boldsymbol{\mathcal{A}}\coloneqq\overline{ \operatorname{span}}\sum_{i=1}^{\ell}A_{i}=\sum_{i=1}^{\ell}\overline{ \operatorname{span}}\,A_{i},\] where \(\sum_{i=1}^{\ell}A_{i}\coloneqq\{0\}\) if \(\ell=0\). The _dimension_ of a tuple means the dimension of its affine span, that is, \(\dim\boldsymbol{\mathcal{A}}\coloneqq\dim\overline{\operatorname{span}} \boldsymbol{\mathcal{A}}\). The _size_ of a tuple \(\boldsymbol{\mathcal{A}}\) is the number of its components and is written as \(|\boldsymbol{\mathcal{A}}|\coloneqq\ell\). Whenever tuples of sets are nested into other tuples, we will omit brackets as convenient. For example, if \(C,D\) are sets and \(\boldsymbol{\mathcal{A}}=(A_{1},\ldots,A_{\ell})\) is a tuple of sets, then \[(C,\boldsymbol{\mathcal{A}},D)\coloneqq(C,A_{1},\ldots,A_{\ell},D)\] and therefore, for example, if the right term is well-defined, \[\operatorname{V}(C,\boldsymbol{\mathcal{A}},D)=\operatorname{V}(C,A_{1}, \ldots,A_{\ell},D).\] If \(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}}\) are tuples, we also write \[\boldsymbol{\mathcal{A}}+\boldsymbol{\mathcal{B}}\coloneqq(\boldsymbol{ \mathcal{A}},\boldsymbol{\mathcal{B}}),\] using the nested-tuple convention as just described. 
If \(k\in\mathbb{N}\), then for arbitrary \(X\) (being a set, a measure,...), \(X[k]\) denotes the tuple consisting of \(k\) copies of \(X\), that is, \(X[k]\coloneqq(X,\ldots,X)\). As usual we set \[S_{n-1}(K,\cdot)\coloneqq S(K[n-1],\cdot)\quad\text{ for }K\in\mathcal{K}^{n}.\] If \(f\colon A\to B\) is a function and \(\boldsymbol{\mathcal{A}}=(A_{1},\ldots,A_{\ell})\) is a tuple of elements or subsets of \(A\), then we write \[f(\boldsymbol{\mathcal{A}})=f(A_{1},\ldots,A_{\ell})\coloneqq(f(A_{1}),\ldots, f(A_{\ell})).\] If \(\boldsymbol{\mathcal{A}}=(A_{1},\ldots,A_{\ell})\) is a tuple and \(r\in[\ell]\), the tuple obtained from \(\boldsymbol{\mathcal{A}}\) by removing the \(r\)-th entry (i.e. \(A_{r}\)) is denoted by \(\boldsymbol{\mathcal{A}}_{\setminus r}\). **Remark 2.6**.: For the discussion of mixed volumes and area measures it is usually assumed that \(n\geq 1\) (or even \(n\geq 2\)). In view of induction arguments in the following, we set \[\operatorname{V}()\coloneqq\operatorname{V}_{0}(\{0\})\coloneqq\mathcal{H}^{ 0}(\{0\})=1,\] where \(\mathcal{H}^{0}\) is the zero-dimensional Hausdorff measure (counting measure). Moreover, for \(n=1\) we define \(\operatorname{S}()\) as the counting measure on \(S^{0}=\{-1,1\}\). Then e.g. relation (2) remains true. These definitions are consistent with the inductive definitions of volume and surface in [4, Definition 3.2]. **Remark 2.7**.: In order to simplify notation, we use the following conventions. 1. Let \(\mu(\boldsymbol{\mathcal{C}})\) be a measure which depends on a parameter \(\boldsymbol{\mathcal{C}}\). Then we write \(\mu(\boldsymbol{\mathcal{C}},\cdot)\) or \(\mu_{\boldsymbol{\mathcal{C}}}(\cdot)\) as shorthands for \(\mu(\boldsymbol{\mathcal{C}})(\cdot)\). 2. Sometimes it is useful to pass the support function \(h_{K}\) instead of the convex body \(K\in\mathcal{K}^{n}\) to \(\mathrm{S}\) or \(\mathrm{V}\), i.e., write \(\mathrm{V}(h_{K},\mathcal{C})\) instead of \(\mathrm{V}(K,\mathcal{C})\). Using this convention, \(\mathrm{V}\) (and \(\mathrm{S}\)) can be extended to multilinear functions taking \(n\) (or \(n-1\)) differences of support functions. For example, \[\mathrm{V}(h_{K}-h_{L},\mathcal{C})\coloneqq\mathrm{V}(K,\mathcal{C})-\mathrm{ V}(L,\mathcal{C}).\] 3. In the following we write \(\mathrm{V}\) for the mixed volume in \(\mathbb{R}^{n}\), but we use the same symbol for the mixed volume in a subspace (the number of arguments already provides the relevant information). By the translation invariance of mixed volumes, the mixed volume of convex bodies lying in parallel subspaces is well-defined. The mixed area measure of an \((n-1)\)-tuple of polytopes can be written as a finite sum of weighted Dirac measures and the point mass (weight) of each atom is given as a mixed volume. We recall this relation in the remark below since it will be used in the following and a related (more general) result for general convex bodies is stated as Lemma 2.13. **Remark 2.8**.: Let \(P_{1},\ldots,P_{n-1}\in\mathcal{P}^{n}\) and \(P\coloneqq P_{1}+\cdots+P_{n-1}\). Then the mixed area measure of \(P_{1},\ldots,P_{n-1}\) is a weighted sum of Dirac measures, that is, \[\mathrm{S}(P_{1},\ldots,P_{n-1},\cdot)=\sum_{u\in\mathcal{N}_{n-1}(P)}\mathrm{ V}(F(P_{1},u),\ldots,F(P_{n-1},u))\delta_{u},\] where \(\mathcal{N}_{n-1}(P)\) is the set of all \(u\in\mathbb{S}^{n-1}\) with \(\dim F(P,u)=n-1\) (see [4, (4.2)]). 
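To illustrate Remark 2.8 in the plane (\(n=2\)): for a polygon \(P_{1}\subset\mathbb{R}^{2}\) we have \(P=P_{1}\), the set \(\mathcal{N}_{1}(P_{1})\) consists of the finitely many outer unit edge normals of \(P_{1}\), and the weight of the atom at such a normal \(u\) is the length of the corresponding edge, so that \[\mathrm{S}(P_{1},\cdot)=\sum_{u\in\mathcal{N}_{1}(P_{1})}\mathcal{H}^{1}(F(P_{1},u))\,\delta_{u}.\] For the unit square \([0,1]^{2}\) this measure equals \(\delta_{e_{1}}+\delta_{-e_{1}}+\delta_{e_{2}}+\delta_{-e_{2}}\), where \(e_{1},e_{2}\) denote the standard basis vectors.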
We will end this discussion by recalling a useful result which relates the mixed area measure \(S_{n-1}(K;\cdot)\) of the \((n-1)\)-tuple \((K,\ldots,K)\) to the (localized) \((n-1)\)-dimensional Hausdorff measure \(\mathcal{H}^{n-1}\) of the topological boundary \(\partial K\) of an \(n\)-dimensional convex body \(K\) in \(\mathbb{R}^{n}\). **Definition 2.9**.: Let \(n\geq 1,K\in\mathcal{K}^{n}\) a convex body and \(\omega\subseteq\mathbb{S}^{n-1}\) a set. Then \[\tau(K,\omega)\coloneqq\bigcup_{u\in\omega}F(K,u)\] is called the _reverse spherical image of \(K\) at \(\omega\)_ (compare [7, p. 88]). **Lemma 2.10**.: _Let \(n\geq 1\). For every \(n\)-dimensional convex body \(K\in\mathcal{K}^{n}\) and every Borel measurable set \(\omega\subseteq\mathbb{S}^{n-1}\),_ \[\mathrm{S}_{n-1}(K,\omega)=\mathcal{H}^{n-1}(\tau(K,\omega)).\] Proof.: See [7, Theorem 4.2.3] or [4, Thm. 4.8]. Lemma 2.10 in combination with the well-known (diagonality) Lemma 2.11 has many applications, such as Lemmas 2.12 and 2.13. **Lemma 2.11**.: _Let \(f,g\colon(\mathcal{K}^{n})^{k}\to\mathbb{R}\) be functionals that are symmetric and multilinear (i.e. Minkowski additive and positively homogeneous in each of their \(k\in\mathbf{N}_{0}\) components) and let \(\mathcal{C}=(C_{1},\ldots,C_{k})\) be a tuple of convex bodies in \(\mathbb{R}^{n}\). If for all choices of \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\in[0,\infty)^{k}\) the convex body_ \[L_{\lambda}\coloneqq\sum_{i=1}^{k}\lambda_{i}C_{i}\] _satisfies \(f(L_{\lambda}[k])=g(L_{\lambda}[k])\), then \(f(\mathcal{C})=g(\mathcal{C})\)._ The following lemma states that the mixed area measures are locally determined, which will be crucial for the proof of Lemma 5.6 (and it will be used in the discussion of some of the examples). For the area measures of a single convex body (and Euclidean balls), the corresponding simple fact is well known (see [7, Note 11 for Sect. 4.2]). **Lemma 2.12**.: _Let \(n\geq 1\). Let \(\mathcal{C}=(C_{1},\ldots,C_{n-1}),\mathcal{D}=(D_{1},\ldots,D_{n-1})\) be tuples of convex bodies in \(\mathbb{R}^{n}\), and let \(\omega\subseteq\mathbb{S}^{n-1}\) be a Borel set such that_ \[\tau(C_{i},\omega)=\tau(D_{i},\omega),\quad i\in[n-1].\] _Then_ \[S(\mathcal{C})(\omega)=S(\mathcal{D})(\omega).\] Proof.: The case \(n=1\) follows from the fact that \(\mathcal{C}=\mathcal{D}\) are empty tuples. We will prove the theorem for the case that \(n\geq 2\) and \(C_{i}=D_{i}\) for \(i\neq 1\). This allows one to replace \(C_{1}\) by \(D_{1}\), yielding \[S(C_{1},C_{2},\ldots,C_{n-1})(\omega)=S(D_{1},C_{2},\ldots,C_{n-1})(\omega). \tag{6}\] Using symmetry of \(S\), we can afterwards replace \(C_{2}\) by \(D_{2}\), and so on until we have replaced all \(C_{i}\) by \(D_{i}\). We start with a preparatory remark. Let \(K\in\mathcal{K}^{n}\), \(\omega\subseteq\mathbb{S}^{n-1}\) and \(u\in\omega\). We show that \(F(K,u)=F(\tau(K,\omega),u)\). First, observe that \(F(K,u)\subseteq\tau(K,\omega)\subseteq K\). Hence, \(h_{\tau(K,\omega)}(u)=h_{K}(u)\) and \[F(K,u) =\{x\in K\mid\langle x,u\rangle=h_{K}(u)\}=\left\{x\in\tau(K, \omega)\ \big{|}\ \langle x,u\rangle=h_{\tau(K,\omega)}(u)\right\}\] \[=F(\tau(K,\omega),u),\] where we again used that \(F(K,u)\subseteq\tau(K,\omega)\). By Minkowski additivity of the mixed area measure in its first component, it suffices to show that (6) holds when \(C_{1},D_{1}\) are full-dimensional. 
To see this, replace \(C_{1}\) by \(C_{1}+B^{n}\) and \(D_{1}\) by \(D_{1}+B^{n}\) and note that \(\tau(C_{1}+B^{n},\omega)=\tau(D_{1}+B^{n},\omega)\), since by the preparatory remark for any \(u\in\omega\) we have \[F(C_{1}+B^{n},u) =F(C_{1},u)+F(B^{n},u)=F(\tau(C_{1},\omega),u)+F(B^{n},u)\] \[=F(\tau(D_{1},\omega),u)+F(B^{n},u)=F(D_{1},u)+F(B^{n},u)=F(D_{1}+ B^{n},u).\] For every \((\lambda_{i})_{i\in[n-1]}\in[0,\infty)^{n-1}\), we claim that \[\mathrm{S}_{n-1}\left(\sum_{i=1}^{n-1}\lambda_{i}C_{i}\right)(\omega)= \mathrm{S}_{n-1}\left(\lambda_{1}D_{1}+\sum_{i=2}^{n-1}\lambda_{i}C_{i}\right) (\omega). \tag{7}\] If this holds, Lemma 2.11 will show \[\mathrm{S}(C_{1},C_{2},\ldots,C_{n-1})(\omega)=\mathrm{S}(D_{1},C_{2},\ldots, C_{n-1})(\omega).\] If \(\lambda_{1}=0\), eq. (7) clearly holds. Otherwise, \(\sum_{i=1}^{n-1}\lambda_{i}C_{i}\) and \(\lambda_{1}D_{1}+\sum_{i=2}^{n-1}\lambda_{i}C_{i}\) are full-dimensional and by Lemma 2.10 and the definition of \(\tau\) it suffices to show that, for all \(u\in\omega\), \[F\left(\sum_{i=1}^{n-1}\lambda_{i}C_{i},u\right) =\sum_{i=1}^{n-1}\lambda_{i}F(C_{i},u)\overset{(!)}{=}\lambda_{1 }F(D_{1},u)+\sum_{i=2}^{n-1}\lambda_{i}F(C_{i},u)\] \[=F\left(\lambda_{1}D_{1}+\sum_{i=2}^{n-1}\lambda_{i}C_{i},u\right),\] where we used at \((!)\) that by the preparatory remark and the assumption we have \[F(C_{1},u)=F(\tau(C_{1},\omega),u)=F(\tau(D_{1},\omega),u)=F(D_{1},u),\] concluding the proof. The next lemma is a simple consequence of Lemma 2.12, but we will not need it in the current work. **Lemma 2.13**.: _Assume \(n\geq 1\). Let \(K_{1},\ldots,K_{n-1}\subset\mathbb{R}^{n}\) be convex bodies and \(u\in\mathbf{S}^{n-1}\). Then_ \[\mathrm{S}(K_{1},\ldots,K_{n-1})(\{u\})=\mathrm{V}(F(K_{1},u),\ldots,F(K_{n-1},u)).\] Proof.: By multilinearity and symmetry of \(\mathrm{S}\) and \(\mathrm{V}\) and linearity of \(F\), it suffices by Lemma 2.11 to prove the statement for \(K_{1}=\cdots=K_{n-1}\), i.e. to prove that \[\mathrm{S}_{n-1}(K_{1})(\{u\})=\mathrm{V}_{n-1}(F(K_{1},u)),\] where \(V_{n-1}\) is the volume (intrinsic volume of order \(n-1\)) in an \((n-1)\)-dimensional subspace of \(\mathbb{R}^{n}\). Consider the truncated convex cone \[C\coloneqq\left\{x\in B^{n}\ \middle|\ \langle x,u\rangle\leq-\frac{1}{2}\|x\| \right\},\] which is a full-dimensional convex body satisfying \(F(C,u)=\{0\}\). So \[\tau(K_{1},\{u\})=F(K_{1},u)=F(C+K_{1},u)=\tau(C+K_{1},\{u\}).\] By Lemmas 2.12 and 2.10 and since \(\dim(C+K_{1})=n\), it follows that \[S_{n-1}(K_{1})(\{u\})=S_{n-1}(C+K_{1})(\{u\})=\mathcal{H}^{n-1}(F(C+K_{1},u))= V_{n-1}(F(K_{1},u)),\] which completes the argument. ### Reduction formulas We will use dimensional induction to prove assertions about mixed area measures. To succeed in this endeavor, we have to relate mixed area measures in \(\mathbb{R}^{n}\) to mixed area measures in subspaces. By using basic integral geometry, the following two reduction formulas can be obtained. Recall that we write \(V\) for the mixed volume in \(\mathbb{R}^{n}\) and use the same symbol for the mixed volume in a subspace (the number of arguments already provides the relevant information). For a linear subspace \(L\subseteq\mathbb{R}^{n}\), the orthogonal projection to \(L\) is denoted by \(\pi_{L}:\mathbb{R}^{n}\to L\). 
**Lemma 2.14**.: _Let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n})\) be a tuple of convex bodies in \(\mathbb{R}^{n}\), and let \(k\in[n]\cup\{0\}\) be such that \(\overline{\operatorname{span}}\boldsymbol{\mathcal{C}}_{[k]}\) is contained in a linear subspace \(E\subseteq\mathbb{R}^{n}\) of dimension \(k\). Then_ \[\binom{n}{k}V(\boldsymbol{\mathcal{C}})=V(\boldsymbol{\mathcal{C}}_{[k]}) \cdot V(\pi_{E^{\perp}}(\boldsymbol{\mathcal{C}}_{[n]\setminus[k]})).\] Proof.: The cases \(k\in\{0,n\}\) are trivial. For the remaining cases, use the translation invariance of \(V\) and apply [7, Theorem 5.3.1]. In dealing with mixed area measures, we will indicate by our notation in which subspace the measure is applied. For an \(\ell\)-dimensional linear subspace \(L\subset\mathbb{R}^{n}\), \(\ell\geq 1\), we write \(S_{L}\) for the mixed area measure in \(L\), which is evaluated at \(\ell-1\) convex bodies in \(L\) and Borel subsets of \(\mathbb{S}^{n-1}\cap L\). Moreover, we define \(S_{L}^{\prime}\) as the Borel measure on \(\mathbb{S}^{n-1}\) defined by \[S_{L}^{\prime}(C_{1},\ldots,C_{\ell-1})(\omega)\coloneqq S_{L}(C_{1},\ldots,C _{\ell-1})(\omega\cap L)\] for convex bodies \(C_{1},\ldots,C_{\ell-1}\subset L\) and Borel sets \(\omega\subseteq\mathbb{S}^{n-1}\). The following proposition will be essential for the proof of our main result in Section 7. **Proposition 2.15**.: _Assume \(n\in\mathbb{N}\). Let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-1})\) be a tuple of convex bodies in \(\mathbb{R}^{n}\), and let \(k\in[n-1]\cup\{0\}\) be such that \(\overline{\operatorname{span}}\,\boldsymbol{\mathcal{C}}_{[k]}\) is contained in a linear subspace \(E\subseteq\mathbb{R}^{n}\) of dimension \(k\). Then_ \[\binom{n-1}{k}\operatorname{S}(\boldsymbol{\mathcal{C}})=\operatorname{V}( \boldsymbol{\mathcal{C}}_{[k]})\cdot S^{\prime}_{E^{\perp}}(\pi_{E^{\perp}}( \boldsymbol{\mathcal{C}}_{[n-1]\setminus[k]})).\] _In particular, if \(\dim\boldsymbol{\mathcal{C}}_{[k]}<k\), then \(\operatorname{S}(\boldsymbol{\mathcal{C}})=0\)._ Proof.: The case \(k=0\) is trivial. The assertion for \(k=n-1\) is clear for polytopes (see Remark 2.8); the general case follows by approximation. So we can assume that \(n\geq 3\) and \(k\in[n-2]\). Let \(C_{n}\in\mathcal{K}^{n}\). Then by Lemma 2.14, \[\binom{n}{k}\operatorname{V}(C_{1},\ldots,C_{n})=\operatorname{V}(C_{1}, \ldots,C_{k})\cdot\operatorname{V}(\pi_{E^{\perp}}(C_{k+1},\ldots,C_{n})).\] Expressing the mixed volumes by mixed area measures, we obtain \[\binom{n}{k}\frac{n-k}{n}\int\,h_{C_{n}}\operatorname{d}\operatorname{S}(C_{1 },\ldots,C_{n-1})=\operatorname{V}(C_{1},\ldots,C_{k})\cdot\int\,h_{\pi_{E^{\perp}}C_{n}}\operatorname{d}\operatorname{S}_{E^{\perp}}(\pi_{E^{\perp}}(C_{k+1},\ldots,C_{n-1})).\] Noting that \(h_{\pi_{E^{\perp}}C_{n}}=h_{C_{n}}\) on \(E^{\perp}\), we find that \[\int\,h_{\pi_{E^{\perp}}C_{n}}\operatorname{d}\operatorname{S}_{ E^{\perp}}(\pi_{E^{\perp}}(C_{k+1},\ldots,C_{n-1})) =\int\,h_{C_{n}}\operatorname{d}\operatorname{S}_{E^{\perp}}( \pi_{E^{\perp}}(C_{k+1},\ldots,C_{n-1}))=\int\,h_{C_{n}}\operatorname{d}S^{\prime}_{E^{\perp}}(\pi_{E^{ \perp}}(C_{k+1},\ldots,C_{n-1}))\] and conclude that \[\binom{n-1}{k}\int\,h_{C_{n}}\operatorname{d}\operatorname{S}(C_{1 },\ldots,C_{n-1})=\operatorname{V}(C_{1},\ldots,C_{k})\cdot\int\,h_{C_{n}} \operatorname{d}S^{\prime}_{E^{\perp}}(\pi_{E^{\perp}}(C_{k+1},\ldots,C_{n-1})).\] Because \(C_{n}\) is an arbitrary convex body and differences of support functions are dense in \(C(\mathbb{S}^{n-1})\), the claim follows.

### Criticality

Criticality is a useful concept that describes dimensionality conditions on arrangements of convex bodies. 
Shenfeld and van Handel [10] employed criticality in their investigation of equality cases in the Alexandrov-Fenchel inequality for polytopes. We will slightly deviate from their terminology in that we call "semicritical" what they called "subcritical", and we say "subcritical" to describe a situation which is "not critical". The most elementary occurrence and motivation for the terminology is the following result. **Lemma 2.16**.: _Let \(\mathcal{C}=(K_{1},\ldots,K_{n})\) be a tuple of convex bodies in \(\mathbb{R}^{n}\). Then the following are equivalent:_ 1. \(\mathrm{V}(\mathcal{C})>0\)_._ 2. _There are segments_ \(S_{i}\subseteq K_{i}\) _(_\(i\in[n]\)_) with linearly independent directions._ 3. _Whenever_ \(\mathcal{D}\leq\mathcal{C}\)_, then_ \(\dim\overline{\operatorname{span}}\,\mathcal{D}\geq|\mathcal{D}|\)_._ Proof.: See [7, Theorem 5.1.8]. Condition (c) in Lemma 2.16 suggests the definition of a "semicritical" tuple of convex bodies. Let us recall concepts of criticality and describe some consequences. **Definition 2.17**.: Let \(\ell\in\mathbb{N}_{0}\). Let \(\mathcal{A}=(A_{1},\ldots,A_{\ell})\) be a tuple of nonempty subsets of \(\mathbb{R}^{n}\). Then \(\mathcal{A}\) is called 1. _semicritical_ if for all \(()\neq\mathcal{B}\leq\mathcal{A}\) we have \(\dim\overline{\operatorname{span}}\,\mathcal{B}\geq|\mathcal{B}|\), 2. _critical_ if for all \(()\neq\mathcal{B}\leq\mathcal{A}\) we have \(\dim\overline{\operatorname{span}}\,\mathcal{B}\geq|\mathcal{B}|+1\), 3. _supercritical_ if for all \(()\neq\mathcal{B}\leq\mathcal{A}\) we have \(\dim\overline{\operatorname{span}}\,\mathcal{B}\geq|\mathcal{B}|+2\), 4. _subcritical_ if it is not critical. Abusing notation, we write \(\mathrm{V}(\mathcal{A})>0\) to say that \(\mathcal{A}\) is semicritical. The following lemma is provided in [3, Lem. 3.2] (see also the preceding remarks there). **Lemma 2.18**.: _Let \(\ell\in\mathbb{N}_{0}\), and let \(\mathcal{A}=(A_{1},\ldots,A_{\ell})\) be a tuple of nonempty subsets of \(\mathbb{R}^{n}\)._ 1. _Subtuples of (super-, semi-)critical tuples are also (super-, semi-)critical._ 2. _Supercriticality implies criticality, which implies semicriticality._ 3. _The empty tuple is supercritical._ 4. _(Super-, Semi-)Criticality is invariant under permutations of_ \(\mathcal{A}\)_._ 5. _(Super-, Semi-)Criticality is invariant under simultaneous affine isomorphisms and argumentwise translations._ 6. _(Super-, Semi-)Criticality is preserved if the sets in_ \(\mathcal{A}\) _are replaced by supersets._ 7. _Let_ \(\mathcal{A}\) _be critical and_ \(A_{\ell+1}\subseteq\mathbb{R}^{n}\) _be nonempty. Then_ \((A_{1},\ldots,A_{\ell+1})\) _is semicritical if and only if_ \(A_{\ell+1}\) _is at least one-dimensional._ 8. _Let_ \(\mathcal{A}\) _be supercritical and_ \(A_{\ell+1}\subseteq\mathbb{R}^{n}\) _be nonempty. Then_ \((A_{1},\ldots,A_{\ell+1})\) _is critical if and only if_ \(A_{\ell+1}\) _is at least two-dimensional._ 9. _If all sets_ \(A_{i}\) _are full-dimensional, then_ \(\mathcal{A}\) _is supercritical if and only if_ \(\ell\leq n-2\) _or_ \(\mathcal{A}=()\)_._ The notation '\(\mathrm{V}(\mathcal{A})>0\)' suggests that semicriticality might abide laws similar to the ones applying to mixed volumes. In particular, we might hope for some kind of reduction theorem in analogy to Reduction Formula 2.14. As the next result shows, this hope is not in vain. The following Lemmas 2.19 and 2.21 will be crucial for the arguments in Sections 6 and 7. Lemma 2.20 is used in the proof of Lemma 2.21. 
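The following low-dimensional examples illustrate Definition 2.17 in \(\mathbb{R}^{3}\). The pair of segments \(([0,e_{1}],[0,e_{2}])\) is semicritical (each segment spans a line and the pair spans a plane) but subcritical, since a single segment does not span a two-dimensional space. The pair of squares \(([0,e_{1}]+[0,e_{2}],\,[0,e_{2}]+[0,e_{3}])\) is critical, because each square spans a plane and the pair spans \(\mathbb{R}^{3}\), but it is not supercritical. Finally, a single full-dimensional convex body forms a supercritical \(1\)-tuple, in accordance with Lemma 2.18 (9) (here \(\ell=1\leq n-2\)).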
**Lemma 2.19** (Semicritical reduction).: _Let \(\ell\in\mathbb{N}_{0}.\) Let \(\mathcal{A}=(A_{1},\ldots,A_{\ell})\) be a tuple of nonempty subsets of \(\mathbb{R}^{n}\) and let \(\overline{\operatorname{span}}\mathcal{A}_{[k]}\) be contained in a linear subspace \(E\) of dimension \(k\in\mathbb{N}_{0}\). Then the following are equivalent:_ 1. \(\mathrm{V}(\mathcal{A})>0\)_;_ 2. \(\mathrm{V}(\mathcal{A}_{[k]})>0\) _and_ \(\mathrm{V}(\pi_{E^{\perp}}(\mathcal{A}_{[\ell]\setminus[k]}))>0\)_._ Proof.: After applying suitable translations, we may assume that all sets contain \(0\). "\(\Longrightarrow\) ": Assume that \(\mathrm{V}(\mathcal{A})>0\). Then by Lemma 2.18, \(\mathrm{V}(\mathcal{A}_{[k]})>0\). It remains to show the second claim. For this, let \(I\subseteq[\ell]\setminus[k]\). Then using the dimension formula from linear algebra and semicriticality of \(\mathcal{A}\), \[\dim\overline{\operatorname{span}}\,\pi_{E^{\perp}}(\mathcal{A})_ {I} =\dim\pi_{E^{\perp}}\big{(}\overline{\operatorname{span}}\, \mathcal{A}_{\mathrm{I}\cup[k]}\big{)}\geq\dim\overline{\operatorname{span}} \,\mathcal{A}_{\mathrm{I}\cup[k]}-\dim\ker\pi_{E^{\perp}}\] \[\geq|I|+k-k.\] "\(\Longleftarrow\) ": Now assume that \(\mathrm{V}(\mathcal{A}_{[k]})>0\) and \(\mathrm{V}(\pi_{E^{\perp}}(\mathcal{A}_{[\ell]\setminus[k]}))>0\). Let \(I\subseteq[\ell]\) and consider the linear map \(\Phi\colon\,\overline{\operatorname{span}}\,\mathbf{\mathcal{A}}_{I}\to\mathbb{R}^{n}\), \(x\mapsto\pi_{E^{\perp}}(x)\). It satisfies \[\ker\Phi=E\cap\overline{\operatorname{span}}\,\mathbf{\mathcal{A}}_{I}\supseteq \overline{\operatorname{span}}\,\mathbf{\mathcal{A}}_{I\cap[k]}\] and \[\operatorname{im}\Phi=\overline{\operatorname{span}}\,\pi_{E^{\perp}}(\mathbf{ \mathcal{A}}_{I})=\overline{\operatorname{span}}\,\pi_{E^{\perp}}(\mathbf{ \mathcal{A}}_{I\setminus[k]}).\] The dimension formula together with the assumption shows \[\dim\overline{\operatorname{span}}\,\mathbf{\mathcal{A}}_{I} =\dim\ker\Phi+\dim\operatorname{im}\Phi\] \[\geq\dim\overline{\operatorname{span}}\,\mathbf{\mathcal{A}}_{I\cap[ k]}+\dim\overline{\operatorname{span}}\,\pi_{E^{\perp}}(\mathbf{\mathcal{A}}_{I \setminus[k]})\] \[=\dim\overline{\operatorname{span}}\,\mathbf{\mathcal{A}}_{I\cap[k]}+ \dim\overline{\operatorname{span}}(\pi_{E^{\perp}}(\mathbf{\mathcal{A}}))_{I \setminus[k]}\] \[\geq|I\cap[k]|+|I\setminus[k]|=|I|,\] which shows that \(\mathbf{\mathcal{A}}\) is semicritical. Having proved the reduction Lemma 2.19, we can inductively prove an analogue of Lemma 2.16. **Lemma 2.20**.: _Let \(\mathbf{\mathcal{A}}=(A_{1},\ldots,A_{\ell})\) be a tuple of nonempty subsets of \(\mathbb{R}^{n}\). Then the following are equivalent:_ 1. \(\operatorname{V}(\mathbf{\mathcal{A}})>0\)_._ 2. _There are pairs of points_ \((x_{i},y_{i})\in A_{i}\times A_{i}\) _(_\(i\in[\ell]\)_) such that the tuple_ \((y_{i}-x_{i})_{i\in[\ell]}\) _consists of linearly independent vectors._ Proof.: "\(\Longleftarrow\)": Clearly, whenever \(I\subseteq[\ell]\), \[\dim\overline{\operatorname{span}}\,\mathbf{\mathcal{A}}_{I}\geq\dim\operatorname {span}\{y_{i}-x_{i}\mid i\in I\}=|I|.\] "\(\Longrightarrow\)": We may assume that every set \(A_{i}\) contains \(0\). We proceed by induction over the dimension \(n\). Assume that the claim is true for all dimensions smaller than \(n\). Then we distinguish three cases: * If \(n=0\), we have nothing to show since the empty family is clearly linearly independent. 
* If \(n>0\) and the tuple is critical, let \(E\) be an arbitrary \((n-1)\)-dimensional linear subspace. Then \(\pi_{E}(\mathbf{\mathcal{A}})\) is still semicritical because the kernel of the projection is one-dimensional. The inductive hypothesis guarantees the existence of pairs of points \((x_{i},y_{i})\in A_{i}\times A_{i}\) (\(i\in[\ell]\)) such that \(\pi_{E}(y_{i}-x_{i})\in E\) are linearly independent. But then \((y_{i}-x_{i})\) are linearly independent, too. * If \(n>0\) and the tuple is subcritical, we find \(\varnothing\neq I\subseteq[\ell]\) with \(\dim\boldsymbol{\mathcal{A}}_{I}=|I|\). If \(\ell=n\) and \(\dim\overline{\operatorname{span}}\,\boldsymbol{\mathcal{A}}_{I}=|I|\) for all \(\varnothing\neq I\subseteq[n]\), then clearly there exist points \((x_{i},y_{i})\in A_{i}\times A_{i}\) (\(i\in[\ell]\)) such that the family \((y_{i}-x_{i})_{i\in[\ell]}\) is linearly independent. Otherwise, without loss of generality, \(\boldsymbol{\mathcal{A}}_{I}\) is a prefix of \(\boldsymbol{\mathcal{A}}\), so that \(I=[k]\) for some \(0<k<n\). After defining the linear subspace \(E:=\overline{\operatorname{span}}\,\boldsymbol{\mathcal{A}}_{[k]}\) of dimension \(k\), we can apply Lemma 2.19 to deduce that \[\operatorname{V}(\boldsymbol{\mathcal{A}}_{[k]}),\operatorname{V}(\pi_{E^{ \perp}}(\boldsymbol{\mathcal{A}}_{[\ell]\setminus[k]}))>0.\] Because \(0<k<n\), two applications of the inductive hypothesis yield pairs of points \((x_{i},y_{i})\in A_{i}\times A_{i}\) (\(i\in[\ell]\)) such that * \(y_{1}-x_{1},\ldots,y_{k}-x_{k}\) are linearly independent and * \(\pi_{E^{\perp}}(y_{k+1}-x_{k+1}),\ldots,\pi_{E^{\perp}}(y_{\ell}-x_{\ell})\) are linearly independent. Hence it follows that \(y_{1}-x_{1},\ldots,y_{\ell}-x_{\ell}\) are linearly independent. Since these cases are exhaustive, the proof is complete. In analogy to the additivity of the mixed volume, we obtain the following result. **Lemma 2.21** (Semicritical additivity).: _Let \(\boldsymbol{\mathcal{A}}=(A_{1},A_{2},\ldots,A_{\ell})\) be a tuple of nonempty subsets of \(\mathbb{R}^{n}\) and \(\ell\geq 1\). Furthermore, let \(A_{1}=B+C\). Then the following are equivalent._ 1. \(\operatorname{V}(A_{1},A_{2},\ldots,A_{\ell})>0\)_._ 2. \(\operatorname{V}(B,A_{2},\ldots,A_{\ell})>0\) _or_ \(\operatorname{V}(C,A_{2},\ldots,A_{\ell})>0\)_._ Proof.: "\(\Longleftarrow\)" follows from \(\overline{\operatorname{span}}\,B\), \(\overline{\operatorname{span}}\,C\subseteq\overline{\operatorname{span}}\,A_ {1}\). "\(\Longrightarrow\)": In view of Lemma 2.20, we find pairs of points \((x_{i},y_{i})\in A_{i}\times A_{i}\) for \(i\in[\ell]\) such that the differences \(y_{i}-x_{i}\) are linearly independent. In particular, \(y_{1}-x_{1}\) is not contained in \(E:=\operatorname{span}\{y_{i}-x_{i}\mid i\in[\ell]\setminus\{1\}\}\). We can find \(b,b^{\prime}\in B\) and \(c,c^{\prime}\in C\) such that \(x_{1}=b+c\) and \(y_{1}=b^{\prime}+c^{\prime}\). Then either \(b^{\prime}-b\) or \(c^{\prime}-c\) is not contained in \(E\) -- we may assume that \(b^{\prime}-b\notin E\). But then \((b^{\prime}-b,y_{2}-x_{2},\ldots,y_{\ell}-x_{\ell})\) are linearly independent, which yields \(\operatorname{V}(B,A_{2},\ldots,A_{\ell})>0\) via Lemma 2.20. ### Support of mixed area measures The support of mixed area measures is the central topic of this work. This section provides some of its properties that will be needed. In the special case of polytopes, Theorem 1.1 is known and easy to verify. 
For the sake of completeness and to familiarize the reader with our notation, we include the argument. **Lemma 2.22**.: _Let \(n\geq 1\). Let \(\mathcal{P}=(P_{1},\ldots,P_{n-1})\) be a tuple of polytopes in \(\mathbb{R}^{n}\). Then_ \[\operatorname{supp}\operatorname{S}(\mathcal{P})=\operatorname{cl}\operatorname {ext}\mathcal{P}.\] Proof.: For \(n=1\) the assertion is clear by our definitions. Let \(n\geq 2\). By Remark 2.8, \[\operatorname{S}(\mathcal{P})=\sum_{u\in\mathcal{N}_{n-1}(P_{1}+\ldots+P_{n-1 })}\operatorname{V}(F(P_{1},u),\ldots,F(P_{n-1},u))\delta_{u}, \tag{8}\] and for all \(u\in\operatorname{S}^{n-1}\), Lemmas 2.5 and 2.16 show the equivalence \[\operatorname{V}(F(P_{1},u),\ldots,F(P_{n-1},u))>0\iff\operatorname{V}( \operatorname{TS}(P_{1},u),\ldots,\operatorname{TS}(P_{n-1},u))>0, \tag{9}\] the second statement by definition being equivalent to \(u\in\operatorname{ext}\mathcal{P}\). So if \(u\in\operatorname{supp}\operatorname{S}(\mathcal{P})\), then \(\operatorname{V}(\operatorname{TS}(P_{1},u),\ldots,\operatorname{TS}(P_{n-1}, u))>0\), i.e. \(u\in\operatorname{ext}\mathcal{P}\). Therefore, \(\operatorname{supp}\operatorname{S}(\mathcal{P})\subseteq\operatorname{ext} \mathcal{P}\subseteq\operatorname{cl}\operatorname{ext}\mathcal{P}\). Conversely, assume \(u\in\operatorname{ext}\mathcal{P}\). Then \(\operatorname{V}(F(P_{1},u),\ldots,F(P_{n-1},u))>0\) follows from (9). In particular, \[\dim F\left(\sum_{i=1}^{n-1}P_{i},u\right)=\dim\sum_{i=1}^{n-1}F(P_{i},u)\geq n -1.\] So \(u\in\mathcal{N}_{n-1}\big{(}\sum_{i=1}^{n-1}P_{i}\big{)}\) and \[\operatorname{S}(\mathcal{P})(\{u\})\geq\operatorname{V}(F(P_{1},u),\ldots,F( P_{n-1},u))>0,\] which shows that \(u\in\operatorname{supp}\operatorname{S}(\mathcal{P})\), hence \(\operatorname{ext}\mathcal{P}\subseteq\operatorname{supp}\operatorname{S}( \mathcal{P})\). The claim follows, since \(\operatorname{supp}\operatorname{S}(\mathcal{P})\) is closed. Next we describe the support of a convex body which is defined as an integral average in terms of its support function. **Theorem 2.23**.: _Assume that \(n\geq 2\). Let \(C_{1}\in\mathcal{K}^{n}\) be a convex body, \(\mathcal{C}=(C_{2},\ldots,C_{n-1})\) an \((n-2)\)-tuple of convex bodies and \(\mu\) a finite Borel measure on \(\mathcal{K}^{n}\) with bounded support such that_ \[h_{C_{1}}(x)=\int h_{K}(x)\,\mu(\operatorname{d}\!K),\quad x\in\mathbb{R}^{n}.\] _Then_ \[S(C_{1},\boldsymbol{C})=\int S(K,\boldsymbol{C})\,\mu(\mathrm{d}K)\] _and_ \[\operatorname{supp}S_{C_{1},\boldsymbol{C}}=\operatorname{cl}\bigcup_{K\in \operatorname{supp}\,\mu}\operatorname{supp}S_{K,\boldsymbol{C}}\,.\] Proof.: Let \(A\subseteq\mathbb{S}^{n-1}\) be closed. Let \(d(u,A)\) denote the Euclidean distance of \(u\in\mathbb{S}^{n-1}\) from \(A\). Then the continuous function \[f_{A}\colon\mathbb{S}^{n-1}\to[0,\infty),\quad u\mapsto d(u,A),\] satisfies \(f_{A}^{-1}(\{0\})=A\). If \(f\) is a difference of support functions, we can apply Fubini's theorem and the compactness of the support of \(\mu\) to obtain \[\int f\,\mathrm{d}\,S_{C_{1},\boldsymbol{C}} =\int h_{C_{1}}\,\mathrm{d}\,S_{f,\boldsymbol{C}}\] \[=\int\int h_{K}(x)\;\mu(\mathrm{d}K)\;S_{f,\boldsymbol{C}}( \mathrm{d}x)\] \[=\int\int h_{K}(x)\;S_{f,\boldsymbol{C}}(\mathrm{d}x)\;\mu( \mathrm{d}K)\] \[=\int\int f(x)\;S_{K,\boldsymbol{C}}(\mathrm{d}x)\;\mu(\mathrm{d}K).\] The same equality holds for all continuous functions \(f\colon\mathbb{S}^{n-1}\to\mathbb{R}\) by approximation, and in particular for \(f_{A}\) as defined above. 
Thus we have verified the first assertion. Now we turn to the second claim. "\(\subseteq\)": Set \(f\coloneqq f_{\operatorname{cl}\cup_{K\in\operatorname{supp}\,\mu} \operatorname{supp}S_{K,\boldsymbol{C}}}\). Then \[\int f\,\mathrm{d}\,S_{C_{1},\boldsymbol{C}}=\int\int f(x)\;S_{K,\boldsymbol{C }}(\mathrm{d}x)\;\mu(\mathrm{d}K)=0.\] So \(S_{C_{1},\boldsymbol{C}}(f^{-1}((0,\infty)))=0\), concluding this direction. "\(\supseteq\)": Let \(x\notin\operatorname{supp}S_{C_{1},\boldsymbol{C}}\). Because \(\operatorname{supp}S_{C_{1},\boldsymbol{C}}\) is closed, it suffices to prove that \(x\notin\operatorname{supp}S_{K,\boldsymbol{C}}\) for all \(K\in\operatorname{supp}\mu\). There is an open set \(U\subseteq\mathbb{S}^{n-1}\) with \(x\in U\) such that \(S_{C_{1},\boldsymbol{C}}(U)=0\). Define \(f\coloneqq f_{U^{c}}\). Then \[0=\int f\,\mathrm{d}\,S_{C_{1},\boldsymbol{C}}=\int\int f(z)\;S_{K,\boldsymbol{ C}}(\mathrm{d}z)\;\mu(\mathrm{d}K).\] The integrand \(\varphi\colon K\mapsto\int f(z)\;\mathrm{S}_{K,\boldsymbol{\mathcal{C}}}(\mathrm{d}z)\) is nonnegative and continuous by the continuity of \(f\) and the weak continuity of the mixed area measure. Therefore, \(\varphi(K)=0\) for \(K\in\operatorname{supp}\mu\). In other words, if \(K\in\operatorname{supp}\mu\), then \(\int f\,\mathrm{d}\,\mathrm{S}_{K,\boldsymbol{\mathcal{C}}}=0\). The integrand being nonnegative and continuous, \(f\) vanishes on \(\operatorname{supp}\mathrm{S}_{K,\boldsymbol{\mathcal{C}}}\). Therefore, \[x\in U\subseteq(\operatorname{supp}\mathrm{S}_{K,\boldsymbol{\mathcal{C}}})^ {c},\] which was to be shown. The preceding theorem can in particular be applied in the case where \(C_{1}\) is a polyoid, as follows from [3, Cor. 2.9]. Finally, we mention a general result which states that the support of the weak limit of a sequence of measures is covered (up to taking the closure) by the supports of these measures. The proof is a straightforward consequence of the definition of weak convergence of measures. **Lemma 2.24** (Support and weak convergence).: _Let \(\mu_{\ell}\to\mu\) be a weakly convergent sequence of finite Borel measures on a second-countable metric space \(E\). Then_ \[\operatorname{supp}\mu\subseteq\operatorname{cl}\bigcup_{\ell=1}^{\infty} \operatorname{supp}\mu_{\ell}.\] The goal of the remaining part of the work is to confirm Theorem 1.1 for polyoids. Before we get to the proof, we need to discuss four concepts: projections, cusps, pruning and switching. These will be combined at the end. ## 3 Projections In the following, we assume that \(n\geq 1\) and \(k\in\mathbb{N}\). For the proof of Theorem 1.1 we show two inclusions. For one of these (namely, "\(\subseteq\)"), two crucial facts that enable us to carry out the argument are that the touching space (see Definition 2.1) of the orthogonal projection of a general convex body \(K\) to a linear subspace is the orthogonal projection of the touching space of \(K\), which is proved in Lemma 3.3, and that the orthogonal projection to a subspace of a \(k\)-polyoid \(K\) with generating measure \(\mu\) is again a \(k\)-polyoid for which the projection of \(\mu\) is a generating measure, which is established in Lemma 3.4. Lemmas 3.1 and 3.2 prepare the proof of Lemma 3.3. These auxiliary results are treated in the present section. Further ingredients needed to establish the inclusion "\(\subseteq\)" are developed in Section 4. **Lemma 3.1**.: _Let \(A\subseteq\mathbb{R}^{n}\) be a convex set, \(W\subseteq\mathbb{R}^{n}\) a linear subspace and \(u\in W\setminus\{0\}\). 
Then for all \(x\in A\),_ \[x\in F(A,u)\iff\pi_{W}(x)\in F(\pi_{W}(A),u).\] Proof.: The basic observation is that for all \(x\in A\), we have \(\langle x,u\rangle=\langle\pi_{W}(x),u\rangle\), and hence \(h_{A}(u)=h_{\pi_{W}(A)}(u)\). So if \(x\in F(A,u)\), then \(\langle\pi_{W}(x),u\rangle=\langle x,u\rangle=h_{A}(u)=h_{\pi_{W}(A)}(u)\) and therefore \(\pi_{W}(x)\in F(\pi_{W}(A),u)\). Conversely, if \(\pi_{W}(x)\in F(\pi_{W}(A),u)\), then \(\langle x,u\rangle=\langle\pi_{W}(x),u\rangle=h_{\pi_{W}(A)}(u)=h_{A}(u)\) and hence \(x\in F(A,u)\). **Lemma 3.2**.: _Let \(K\in\mathbb{R}^{n}\) be a convex body, \(W\subseteq\mathbb{R}^{n}\) a linear subspace and \(u\in W\setminus\{0\}\). Then \(N_{W}(\pi_{W}(K),F(\pi_{W}(K),u))=W\cap N(K,F(K,u))\)._ Proof.: By definition of \(N_{W}\), both sides of the equation are subsets of \(W\). Moreover, both contain \(0\). Let \(v\in W\setminus\{0\}\). Then the claim can be reformulated as \[F(\pi_{W}(K),u)\subseteq F(\pi_{W}(K),v)\iff F(K,u)\subseteq F(K,v).\] Let us first assume that \(F(\pi_{W}(K),u)\subseteq F(\pi_{W}(K),v)\) and let \(x\in F(K,u)\). Then by Lemma 3.1, \(\pi_{W}(x)\in F(\pi_{W}(K),u)\). By assumption, this implies \(\pi_{W}(x)\in F(\pi_{W}(K),v)\). Another application of Lemma 3.1 now shows that \(x\in F(K,v)\). Therefore, \(F(K,u)\subseteq F(K,v)\). Now assume \(F(K,u)\subseteq F(K,v)\) and let \(y\in F(\pi_{W}(K),u)\). Writing \(y=\pi_{W}(x)\) for some \(x\in K\) and applying Lemma 3.1, we obtain \(x\in F(K,u)\). By assumption, this implies \(x\in F(K,v)\), and again using Lemma 3.1, this shows that \(y=\pi_{W}(x)\in F(\pi_{W}(K),v)\). Therefore, \(F(\pi_{W}(K),u)\subseteq F(\pi_{W}(K),v)\). **Lemma 3.3**.: _Let \(K\) be a convex body, \(W\subseteq\mathbb{R}^{n}\) a linear subspace and \(u\in W\setminus\{0\}\). Then \(T_{W}(\pi_{W}(K),u)=W\cap T(K,u)\) and \(\operatorname{TS}_{W}(\pi_{W}(K),u)=\pi_{W}(\operatorname{TS}(K,u))\)._ Proof.: By Definition 2.1, \(T_{W}(\pi_{W}(K),u)\) is the unique face of the normal cone \(N_{W}(\pi_{W}(K),F(\pi_{W}(K),u))\) such that its relative interior contains \(u\). Similarly, \(T(K,u)\) is the unique face of \(N(K,F(K,u))\) such that its relative interior contains \(u\). We show that \(W\cap T(K,u)\) satisfies the definition of \(T_{W}(\pi_{W}(K),u)\). Because \(T(K,u)\) is a face of \(N(K,F(K,u))\) and by Lemma 3.2, \[W\cap T(K,u)\text{ is a face of }W\cap N(K,F(K,u))=N_{W}(\pi_{W}(K),F(\pi_{W}(K),u)).\] As (relint \(T(K,u))\cap W\) contains \(u\) and \(W\) is a linear subspace, \(\operatorname{relint}(W\cap T(K,u))\) also contains \(u\). This proves the first claim. For the second claim, observe that \(u\in(\operatorname{relint}T(K,u))\cap W\) implies \[\operatorname{span}(T(K,u)\cap W)=(\operatorname{span}T(K,u))\cap W.\] Using the first claim, we get \[\operatorname{span}T_{W}(\pi_{W}K,u)=\operatorname{span}(T(K,u)\cap W)=( \operatorname{span}T(K,u))\cap W.\] Now we take the orthogonal complement in \(W\) and obtain \[\operatorname{TS}_{W}(\pi_{W}K,u) =T_{W}(\pi_{W}K,u)^{\perp}\cap W=(T(K,u)^{\perp}+W^{\perp})\cap W\] \[=\pi_{W}T(K,u)^{\perp}=\pi_{W}\operatorname{TS}(K,u),\] which confirms also the second claim. In [3] a \(k\)-polyoid, for an integer \(k\in\mathbb{N}\), was defined as the limit of a sequence of Minkowski sums of \(k\)-topes, where a \(k\)-tope is a convex polytope having at most \(k\) vertices. Let \(\mathcal{P}_{k}^{n}\) denote the set of \(k\)-topes in \(\mathbb{R}^{n}\). Furthermore, it was shown in [3, Thm. 
2.8] that a convex body \(K\in\mathcal{K}^{n}\) is a \(k\)-polyoid if and only if there is a probability measure \(\mu\) on \(\mathcal{P}_{k}^{n}\) with compact support such that \[h_{K}(u)=\int\,h_{P}(u)\,\mu(\mathrm{d}P),\quad u\in\mathbb{R}^{n}. \tag{10}\] Any such (in general non-unique) measure \(\mu\) is called a generating measure of the \(k\)-polyoid \(K\). Let \(\varnothing\neq\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\) be a Borel set (Borel sets are defined with respect to the topology induced by the Hausdorff metric on \(\mathcal{K}^{n}\)). A convex body \(K\) in \(\mathbb{R}^{n}\), \(n\in\mathbb{N}_{0}\), for which there is a probability measure \(\mu\) on \(\mathcal{K}_{*}\) with bounded support such that (10) holds, is called a \(\mathcal{K}_{*}\)_-macroid_ with generating measure \(\mu\). Here the support of \(\mu\) is determined with respect to the metric space \(\mathcal{K}_{*}\). It was shown in [3, Lem. 2.11] that a \(\mathcal{K}_{*}\)_-macroid_ with generating measure \(\mu\) is the limit of a sequence of Minkowski sums of convex bodies in \(\operatorname{supp}\mu\). In the case \(\mathcal{K}_{*}=\mathcal{P}^{n}\), that is, \(K\) is a \(\mathcal{P}^{n}\)-macroid with generating measure \(\mu\) on \(\mathcal{P}^{n}\), we simply say that \(K\) is a macroid with generating measure \(\mu\). **Lemma 3.4**.: _Let \(K\) be a macroid (a \(k\)-polyoid) with generating measure \(\mu\), and let \(W\subseteq\mathbb{R}^{n}\) be a linear subspace. Moreover, let \(\tilde{\pi}_{W}\) be the function that maps the \(k\)-topes \(P\subseteq\mathbb{R}^{n}\) to the \(k\)-topes \(\pi_{W}(P)\subseteq W\). Then \(\pi_{W}(K)\) is a macroid (a \(k\)-polyoid) with generating measure_ \[\mu^{W}\coloneqq\mu\circ\tilde{\pi}_{W}^{-1}\quad\text{and}\quad\tilde{\pi}_ {W}(\operatorname{supp}\mu)\subseteq\operatorname{supp}\mu^{W}.\] _If \(K\subset\mathbb{R}^{n}\) is a \(k\)-polyoid, then \(\tilde{\pi}_{W}(\operatorname{supp}\mu)=\operatorname{supp}\mu^{W}\)._ Proof.: Let \(K\) be a macroid (a \(k\)-polyoid) with generating measure \(\mu\). For all \(u\in W\), \[h_{\pi_{W}(K)}(u)=h_{K}(u)=\int\,h_{P}(u)\,\mu(\mathrm{d}P)=\int\,h_{\tilde{ \pi}_{W}(P)}(u)\,\mu(\mathrm{d}P)=\int\,h_{P}(u)\,\mu^{W}(\mathrm{d}P).\] Moreover, if \(\mu\) is a probability measure with bounded (compact) support on polytopes (\(k\)-topes) in \(\mathbb{R}^{n}\), then \(\mu^{W}\) is a probability measure with bounded (compact) support on polytopes (\(k\)-topes) in \(W\). Let \(P\in\tilde{\pi}_{W}(\operatorname{supp}\mu)\) and \(U\) an open neighborhood of \(P\) in the space of \(k\)-topes in \(W\). Then there is \(Q\in\operatorname{supp}\mu\) such that \(\tilde{\pi}_{W}(Q)=P\), so that \(\tilde{\pi}_{W}^{-1}(U)\) is an open neighborhood of \(Q\) in \(\mathcal{P}^{n}\) (respectively, in \(\mathcal{P}^{n}_{k}\)). Therefore, \[\mu^{W}(U)=\mu(\tilde{\pi}_{W}^{-1}(U))>0,\] and because this holds for arbitrary \(P\in\tilde{\pi}_{W}(\operatorname{supp}\mu)\) and neighborhoods \(U\) of \(P\), it follows that \(\tilde{\pi}_{W}(\operatorname{supp}\mu)\subseteq\operatorname{supp}\mu^{W}\). Now we assume that \(K\) is a \(k\)-polyoid. Because \(\tilde{\pi}_{W}\) is continuous and \(\operatorname{supp}\mu\) is compact, the set \(\tilde{\pi}_{W}(\operatorname{supp}\mu)\) is compact and hence closed. 
From \[\mu^{W}\left(\tilde{\pi}_{W}(\operatorname{supp}\mu)^{c}\right)=\mu(\tilde{\pi}_{W}^{-1}(\tilde{\pi}_{W}(\operatorname{supp}\mu))^{c})\leq\mu((\operatorname{supp}\mu)^{c})=0\] we conclude that \(\operatorname{supp}\mu^{W}\subseteq\tilde{\pi}_{W}(\operatorname{supp}\mu)\), and thus \(\operatorname{supp}\mu^{W}=\tilde{\pi}_{W}(\operatorname{supp}\mu)\).

**Remark 3.5**.: Let \(C\) be a macroid (\(k\)-polyoid) with generating measure \(\mu\) and \(u\in\mathbb{S}^{n-1}\). Recall from [3, Rem. 2.19] that \(F(C,u)\) is a macroid (\(k\)-polyoid) with generating measure \(F_{u}(\mu)\), which denotes the image measure of \(\mu\) under the measurable map \(F_{u}=F(\cdot,u)\). In other words, \[h_{F(C,u)}=\int h_{P}\,F_{u}(\mu)(\mathrm{d}P). \tag{11}\] As a consequence, we obtain \[\bigcap_{P\in\mathcal{P}(\mu)}N(P,F(P,u))\subseteq N(C,F(C,u)), \tag{12}\] whenever \(\mathcal{P}(\mu)\subseteq\mathcal{P}^{n}\) is a measurable set of full \(\mu\)-measure. For instance, we can choose \(\mathcal{P}(\mu)=\operatorname{supp}\mu\). To verify (12), let \(v\in\bigcap_{P\in\mathcal{P}(\mu)}N(P,F(P,u))\). Then, for each \(P\in\mathcal{P}(\mu)\), \(F(P,u)\subseteq F(P,v)\), hence \(h_{F(P,u)}\leq h_{F(P,v)}\). Then (11) yields \[h_{F(C,u)}=\int h_{F(P,u)}\,\mu(\mathrm{d}P)\leq\int h_{F(P,v)}\,\mu(\mathrm{d}P)=h_{F(C,v)},\] which shows that \(F(C,u)\subseteq F(C,v)\), and therefore \(v\in N(C,F(C,u))\). A corresponding inclusion for the touching cones does not hold in general, as shown by Example 5.2, which is in contrast to the case of finite Minkowski sums (see [7, Thm. 2.2.1 (a)]).

There is a partial converse to (12). Let \(u\in\mathbb{S}^{n-1}\) be fixed and let \(s(L)\) denote the Steiner point of \(L\in\mathcal{K}^{n}\). Recall from [7, (1.34)] that \(s(L)\in\operatorname{relint}L\). Fubini's theorem yields \[s(F(C,u))=\int s(F(P,u))\,\mu(\mathrm{d}P),\] (compare [3, Rem. 2.14]), and therefore \[h_{C-s(F(C,u))}=\int\,h_{P-s(F(P,u))}\,\mu(\mathrm{d}P).\] All support functions in this equation are nonnegative. If \(v\in N(C,F(C,u))\), then \(h_{C-s(F(C,u))}(v)=0\), and hence \(h_{P-s(F(P,u))}(v)=0\) for \(\mu\)-almost all \(P\in\mathcal{P}^{n}\). This shows that \(v\in N(P,F(P,u))\) for \(\mu\)-almost all \(P\in\mathcal{P}^{n}\), that is, there is a measurable set \(\mathcal{P}_{u,v}(\mu)\) of full \(\mu\)-measure such that \(v\in N(P,F(P,u))\) for all \(P\in\mathcal{P}_{u,v}(\mu)\). Let \(D_{u}\) be a countable dense subset of \(N(C,F(C,u))\) and set \(\mathcal{P}_{u}(\mu):=\cap_{v\in D_{u}}\mathcal{P}_{u,v}(\mu)\). Then \(\mathcal{P}_{u}(\mu)\) is a measurable set that has full \(\mu\)-measure and \[N(C,F(C,u))\subseteq\bigcap_{P\in\mathcal{P}_{u}(\mu)}N(P,F(P,u)).\] Together with (12) we obtain \[N(C,F(C,u))=\bigcap_{P\in\mathcal{P}_{u}(\mu)}N(P,F(P,u)).\]

## 4 Cusps

The proof of Theorem 1.1 relies on the assumption that the convex bodies in question are polyoids. In fact, one inclusion holds for the larger class of macroids. For this reason, the results in this section are provided for the class of macroids or for general convex bodies. The following results about _cusps_ describe what it means, in terms of the polytopes in the support of \(\mu\), that the touching space of a convex body \(K\) (a macroid \(K\) with generating measure \(\mu\)) is \(0\)-dimensional. One might hope that \(\mathrm{TS}(K,u)=\{0\}\) if and only if the same holds for all \(P\in\mathrm{supp}\,\mu\), but this turns out to be false (the "only if" statement is true though, as follows from Lemmas 4.3 and 4.7).
Cusps can be thought of as an attempt to quantify how far a convex body is from having a non-trivial touching space. Intuitively, Lemmas 4.3 and 4.7 show that \(\mathrm{TS}(K,u)\) is trivial if and only if the \(k\)-topes in \(\mathrm{supp}\,\mu\) keep a minimum distance from having a non-trivial touching space. Lemmas 4.3 and 4.7 will be employed in the crucial Witness Lemma 5.8. **Definition 4.1**.: For all \(u\in\mathbf{S}^{n-1}\) and \(c>0\), define a cone with apex at \(0\), \[\mathfrak{C}_{c}(u)\coloneqq\{x\in\mathbb{R}^{n}\mid\ \langle x,u\rangle \leq-c\|x\|\}.\] Let \(K\subseteq\mathbb{R}^{n}\) be a convex body, \(u\in\mathbf{S}^{n-1}\) and \(c>0\). Then \(K\) is said to _have a \(c\)-cusp in direction \(u\in\mathbf{S}^{n-1}\)_ if there is some \(x\in K\) such that \(K\subseteq x+\mathfrak{C}_{c}(u)\). Note that \(\mathfrak{C}_{c}(u)=\{0\}\) if \(c>1\) and \(\mathfrak{C}_{1}(u)=-[0,\infty)u\); the cone \(\mathfrak{C}_{c}(u)\) is getting smaller as \(c\in(0,1]\) is getting larger. In particular, if \(K\) has a \(c\)-cusp in direction \(u\), then it also has a \(c^{\prime}\)-cusp in direction \(u\) for \(0<c^{\prime}<c\). **Lemma 4.2**.: _Let \(K\in\mathcal{K}^{n}\) be a convex body, \(u\in\mathbb{S}^{n-1}\) and \(c>0\). Then the following are equivalent:_ 1. \(K\) _has a_ \(c\)_-cusp in direction_ \(u\)_._ 2. \(h_{K}\) _is linear on_ \(U(u,c)\coloneqq cB^{n}+u\)_._ Proof.: The statement is invariant under translations. "(a) \(\implies\) (b)": Assume that there is some \(x\in K\) with \(K\subseteq x+\mathfrak{C}_{c}(u)\). Translating \(K\), we can arrange that \(x=0\). Then the Cauchy-Schwarz inequality shows that for all \(v\in U(u,c)\) and \(y\in K\subseteq\mathfrak{C}_{c}(u)\), \[\langle y,v\rangle\leq\langle y,u\rangle+\|u-v\|\|y\|\leq(-c+\|u-v\|)\|y\|\leq 0 =\langle x,v\rangle\,.\] So \(h_{K}(v)=0\) for \(v\in U(u,c)\). "(b) \(\implies\) (a)": Assume that there is some \(x\in\mathbb{R}^{n}\) such that \(h_{K}=\langle x,\cdot\rangle\) on \(U(u,c)\). Translating \(K\) by \(-x\), we can arrange that \(x=0\). Then for all \(y\in K\setminus\{0\}\), \[\langle y,u\rangle=\left\langle y,u+\frac{c}{\|y\|}y\right\rangle-c\|y\|\leq h _{K}\left(u+\frac{c}{\|y\|}y\right)-c\|y\|=-c\|y\|.\] So \(K\subseteq\mathfrak{C}_{c}(u)\) (remembering \(0\in\mathfrak{C}_{c}(u)\)). Moreover, \(h_{K}^{\prime}(u;\cdot)=0\) because \(U(u,c)\) is a neighborhood of \(u\) where \(h_{K}\equiv 0\). With [7, Thm. 1.7.2] it follows that \[h_{F(K,u)}=h_{K}^{\prime}(u;\cdot)=0=h_{\{0\}},\] proving that \(0\in\{0\}=F(K,u)\subseteq K\). So \(K\subseteq\mathfrak{C}_{c}(u)\) and \(0\in K\). Next we use Lemma 4.2 to characterize the situation when the touching space is trivial. **Lemma 4.3**.: _Let \(K\in\mathcal{K}^{n}\) be a convex body, and let \(u\in\mathbb{S}^{n-1}\). Then the following are equivalent._ 1. \(\mathrm{TS}(K,u)=\{0\}\)_._ 2. _There is some_ \(c>0\) _such that_ \(K\) _has a_ \(c\)_-cusp in direction_ \(u\) Proof.: "(a) \(\implies\) (b)": Assume that \(\operatorname{TS}(K,u)=\{0\}\). Then \(u\in\operatorname{int}N(K,F(K,u))\). So there is \(c>0\) such that \(U(u,c)=\{u\}+cB^{n}\subseteq N(K,F(K,u))\). Choosing \(x\in F(K,u)\), it follows that \(h_{K}=\langle x,\cdot\rangle\) on \(U(u,c)\subseteq N(K,F(K,u))\). Then by Lemma 4.2, \(K\) has a \(c\)-cusp in direction \(u\). "(b) \(\implies\) (a)": Assume that \(K\) has a \(c\)-cusp in direction \(u\) for some \(c>0\). Then by Lemma 4.2, there is \(x\in\mathbb{R}^{n}\) such that \(h_{K}=\langle x,\cdot\rangle\) on \(U(u,c)=\{u\}+cB^{n}\). By [7, Thm. 
1.7.2], all \(v\in\operatorname{int}U(u,c)\) satisfy \(h_{F(K,v)}=h_{K}^{\prime}(v;\cdot)=\langle x,\cdot\rangle\), so that \(F(K,v)=\{x\}=F(K,u)\). Hence, \(\operatorname{int}(\{u\}+cB^{n})\subseteq N(K,F(K,u))\), showing that \(u\in\operatorname{int}N(K,F(K,u))\) and \(\operatorname{TS}(K,u)=\{0\}\). In the following we need to understand how the local linearity of the support function of a macroid is related to the local linearity of the support functions of the polytopes in the support of a generating measure of the macroid. This relation is given in Lemma 4.6, which we prepare by two simple lemmas. The first lemma is well-known, but we state it for easier reference. The proof of the second lemma is included, since it is crucial for the proof of Lemma 4.6. **Lemma 4.4**.: _Let \(A\subseteq\mathbb{R}^{n}\) be a convex set, \(f\colon A\to\mathbb{R}\) a convex function and \(a\in\operatorname{relint}A\). Then there is \(u\in\overline{\operatorname{span}}A\) such that_ \[f(x)\geq\langle x-a,u\rangle+f(a)\quad\text{for all $x\in A$.}\] **Lemma 4.5**.: _Let \(A\subseteq\mathbb{R}^{n}\) be a convex set, and let \(f\colon\mathbb{R}^{n}\to\mathbb{R}\) be positively \(1\)-homogeneous. Then the following are equivalent._ 1. \(f\) _is linear on_ \(A\) _(i.e. agrees on_ \(A\) _with a function_ \(x\mapsto\langle x,u\rangle\)_, where_ \(u\in\mathbb{R}^{n}\)_)._ 2. \(f\) _is affine on_ \(A\) _(i.e. agrees on_ \(A\) _with a function_ \(x\mapsto\langle x,u\rangle+c\)_, where_ \(u\in\mathbb{R}^{n}\) _and_ \(c\in\mathbb{R}\)_)._ 3. \(f\) _is convex and concave on_ \(A\)_._ Proof.: (a) implies (b) and (b) implies (c). Without loss of generality, \(A\) is nonempty. "(b) \(\implies\) (a)": Assume that there are \(u\in\mathbb{R}^{n}\) and \(c\in\mathbb{R}\) such that \[f(x)=\langle x,u\rangle+c\quad\text{for $x\in A$.}\] Let \(E\) be the affine span of \(A\). If \(0\in E\), then choose \(x\in\operatorname{relint}A\). There is \(\lambda\in(0,1)\) such that \(\lambda x\in A\), so that we obtain \[\lambda\,\langle x,u\rangle+c=f(\lambda x)=\lambda f(x)=\lambda\,\langle x,u \rangle+\lambda c\implies c=\lambda c\implies c=0.\] If \(0\notin E\), then \(E\cap\overline{\operatorname{span}}\,A=\varnothing\). Choose \(a\in A\). Then \(a\notin\overline{\operatorname{span}}\,A=(\overline{\operatorname{span}}\,A)^{ \perp\perp}\) and there is \(v\in(\overline{\operatorname{span}}\,A)^{\perp}\) such that \(\langle a,v\rangle\neq 0\). Also observe that if \(x\in A\), then \(x-a\in\overline{\operatorname{span}}\,A\), so that \(\langle x,v\rangle=\langle a,v\rangle\). So \[f(x)=\langle x,u\rangle+c=\langle x,u\rangle+c\frac{\langle x,v\rangle}{ \langle a,v\rangle}=\left\langle x,u+c\frac{v}{\langle a,v\rangle}\right\rangle \quad\text{for all }x\in A.\] "(c) \(\implies\) (b)": Let \(a\in\operatorname{relint}\,A\). By convexity of \(f\) and Lemma 4.4, there is \(u\in\overline{\operatorname{span}}\,A\) such that \[f(x)\geq\langle x-a,u\rangle+f(a)\quad\text{for all }x\in A.\] By concavity of \(f\) and Lemma 4.4 applied to \(-f\), there is \(v\in\overline{\operatorname{span}}\,A\) such that \[f(x)\leq\langle x-a,v\rangle+f(a)\quad\text{for all }x\in A.\] Hence, \(\langle x-a,v-u\rangle\geq 0\) for all \(x\in A\). Because \(a\in\operatorname{relint}\,A\) and \(u,v\in\overline{\operatorname{span}}\,A\), this shows that \(v=u\) and so \[f(x)=\langle x-a,u\rangle+f(a)=\langle x,u\rangle-\langle a,u\rangle+f(a)\quad \text{for all }x\in A,\] which completes the proof. 
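Before relating cusps of macroids to the polytopes in the support of a generating measure, the following two-dimensional illustration of Definition 4.1 and Lemma 4.3 may be helpful (it is not needed for the sequel). Let \(u=e_{2}\). The triangle \(K=\operatorname{conv}\{0,e_{1}-e_{2},-e_{1}-e_{2}\}\) satisfies \(K\subseteq\mathfrak{C}_{1/\sqrt{2}}(e_{2})\): for \(y=t(e_{1}-e_{2})+s(-e_{1}-e_{2})\) with \(t,s\geq 0\) and \(t+s\leq 1\) we have \(\langle y,e_{2}\rangle=-(t+s)\) and \(\|y\|\leq\sqrt{2}(t+s)\), hence \(\langle y,e_{2}\rangle\leq-\frac{1}{\sqrt{2}}\|y\|\). So \(K\) has a \(\frac{1}{\sqrt{2}}\)-cusp in direction \(e_{2}\) (with \(x=0\)), and Lemma 4.3 yields \(\operatorname{TS}(K,e_{2})=\{0\}\). In contrast, the rectangle \(Q=[-1,1]\times[-1,0]\) has no \(c\)-cusp in direction \(e_{2}\) for any \(c>0\): a \(c\)-cusp at \(x\) forces \(F(Q,e_{2})=\{x\}\), whereas here \(F(Q,e_{2})=[-1,1]\times\{0\}\) is a segment; accordingly, \(\operatorname{TS}(Q,e_{2})=\mathbb{R}e_{1}\neq\{0\}\).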
**Lemma 4.6**.: _Let \(\varnothing\neq\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\) be a Borel set. Let \(K\in\mathcal{K}^{n}\) be a \(\mathcal{K}_{*}\)-macroid with generating measure \(\mu\) on \(\mathcal{K}_{*}\), and let \(A\subseteq\mathbb{R}^{n}\) be convex. Then \(h_{K}\) is linear on \(A\) if and only if \(h_{P}\) is linear on \(A\) for all \(P\in\operatorname{supp}\mu\)._

Proof.: Every support function of a convex body is convex and positively \(1\)-homogeneous. So by Lemma 4.5, it is linear on \(A\) if and only if it is concave on \(A\). "\(\implies\)": Assume that there is \(P\in\operatorname{supp}\mu\) such that \(h_{P}\) is not concave on \(A\). Then there are an open neighborhood \(U\) of \(P\), \(\lambda\in(0,1)\) and \(y,z\in A\) such that for all \(Q\in U\), \[h_{Q}(\lambda y+(1-\lambda)z)<\lambda h_{Q}(y)+(1-\lambda)h_{Q}(z).\] On the other hand, for all \(Q\in U^{\mathrm{c}}\), convexity of \(h_{Q}\) implies \[h_{Q}(\lambda y+(1-\lambda)z)\leq\lambda h_{Q}(y)+(1-\lambda)h_{Q}(z).\] Since \(\mu(U)>0\), we thus obtain from (10) that \[h_{K}(\lambda y+(1-\lambda)z)<\lambda h_{K}(y)+(1-\lambda)h_{K}(z).\] Therefore, \(h_{K}\) is not concave on \(A\). "\(\Longleftarrow\)": Assume that \(h_{K}\) is not concave on \(A\). Then there are \(\lambda\in(0,1)\) and \(y,z\in A\) such that \[h_{K}(\lambda y+(1-\lambda)z)<\lambda h_{K}(y)+(1-\lambda)h_{K}(z).\] In particular, there is at least one \(P\in\operatorname{supp}\mu\) with \[h_{P}(\lambda y+(1-\lambda)z)<\lambda h_{P}(y)+(1-\lambda)h_{P}(z).\] Therefore, \(h_{P}\) is not concave on \(A\).

**Lemma 4.7**.: _Let \(\varnothing\neq\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\) be a Borel set. Let \(K\in\mathcal{K}^{n}\) be a \(\mathcal{K}_{*}\)-macroid with generating measure \(\mu\) on \(\mathcal{K}_{*}\). Let \(u\in\mathbb{S}^{n-1}\) and \(c>0\). Then the following are equivalent._

1. \(K\) _has a_ \(c\)_-cusp in direction_ \(u\)_._
2. _Every_ \(P\in\operatorname{supp}\mu\) _has a_ \(c\)_-cusp in direction_ \(u\)_._

Proof.: By Lemma 4.2, \(K\) has a \(c\)-cusp in direction \(u\) if and only if \(h_{K}\) is linear on \(U(u,c)\). By Lemma 4.6, this is equivalent to \(h_{P}\) being linear on \(U(u,c)\) for all \(P\in\operatorname{supp}\mu\). Again by Lemma 4.2, this in turn is equivalent to \(P\) having a \(c\)-cusp in direction \(u\) for all \(P\in\operatorname{supp}\mu\).

As a consequence of Lemmas 4.3 and 4.7, we obtain the following corollary.

**Corollary 4.8**.: _Let \(\varnothing\neq\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\) be a Borel set. Let \(K\in\mathcal{K}^{n}\) be a \(\mathcal{K}_{*}\)-macroid with generating measure \(\mu\) on \(\mathcal{K}_{*}\) and let \(u\in\mathbb{S}^{n-1}\). Then the following statements are equivalent:_

1. \(\operatorname{TS}(K,u)\neq\{0\}\)_._
2. _For each_ \(c>0\) _there exists some_ \(P\in\operatorname{supp}\mu\) _that does not have a_ \(c\)_-cusp in direction_ \(u\)_._

We denote by \(\mathcal{K}^{n}_{sm}\) the set of all smooth convex bodies. Since the complement of \(\mathcal{K}^{n}_{sm}\) is a countable union of closed sets, \(\mathcal{K}^{n}_{sm}\) is measurable. It follows from [7, Thm. 2.2.1 (a)] that a finite Minkowski sum of convex bodies, one of which is smooth, is smooth again. In other words, if the sum is not smooth, then none of the summands is smooth. Next we show that this fact extends to macroids. In particular, there is no point in considering \(\mathcal{K}^{n}_{sm}\)-macroids.
**Corollary 4.9**.: _Suppose that \(K\) is a \(\mathcal{K}_{*}\)-macroid with generating measure \(\mu\) and \(K\) is not smooth. Then none of the \(L\in\operatorname{supp}\mu\) is smooth._

Proof.: If \(K\) is not smooth, then there is a convex cone \(A\) with \(\dim A\geq 2\) such that \(h_{K}\) is linear on \(A\). By Lemma 4.6, \(h_{P}\) is linear on \(A\), for each \(P\in\operatorname{supp}\mu\). But then \(P\) is not smooth, for each \(P\in\operatorname{supp}\mu\).

## 5 Pruning

This section develops a technique that is only relevant for proving one of the two inclusions on which the characterization Theorem 1.1 is based: Let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-1})\) be a tuple of \(k\)-polyoids with generating measures \(\mu_{1},\ldots,\mu_{n-1}\). If \(u\in\operatorname{ext}\boldsymbol{\mathcal{C}}\), then we have to show that \(u\in\operatorname{supp}\operatorname{S}(\boldsymbol{\mathcal{C}})\). Because this is the most difficult aspect of the Support Characterization Theorem 1.1, we begin with some examples. The first example introduces the idea of a "witness polytope" that is used to prove that some normal vector is in the support of a mixed area measure. The other two examples exemplify how to find "witness polytopes" in more complicated situations using _pruning_, the method developed in this section.

**Example 5.1** (A witness polytope).: Let \(n=2\). Let \((e_{1},e_{2})\) be the standard orthonormal basis of \(\mathbb{R}^{2}\). Let \[C^{(\ell)}\coloneqq\operatorname{conv}\bigl{\{}0,e_{2},e_{1}+(1+\ell^{-1})e_{2}\bigr{\}},\quad\ell\in\mathbb{N},\] and define the triangle body (i.e., the \(3\)-polyoid) \[C\coloneqq\sum_{\ell=1}^{\infty}2^{-\ell}C^{(\ell)}\] with generating measure \[\mu\coloneqq\sum_{\ell=1}^{\infty}2^{-\ell}\delta_{C^{(\ell)}};\] see Figure 1 for an illustration. The sequence \((C^{(\ell)})_{\ell}\) converges to the triangle \[K\coloneqq\operatorname{conv}\{0,e_{2},e_{1}+e_{2}\}\] and so \[\operatorname{supp}\mu=\Bigl{\{}K,C^{(1)},C^{(2)},\ldots\Bigr{\}}.\] By Corollary 4.8 we find that \(\operatorname{TS}(C,e_{2})\neq\{0\}\) because \(K\in\operatorname{supp}\mu\) does not have a \(c\)-cusp in direction \(e_{2}\) for any \(c>0\). Hence, \(e_{2}\) is a \((C)\)-extreme normal vector. So Theorem 1.1 predicts \(e_{2}\in\operatorname{supp}\operatorname{S}(C)\). Indeed, Theorem 2.23 and \(K\in\operatorname{supp}\mu\) show that \[e_{2}\in\operatorname{supp}\operatorname{S}(K)\subseteq\operatorname{supp}\operatorname{S}(C).\] Alternatively, we could argue that \(C^{(\ell)}\to K\) and so Lemma 2.24 and Theorem 2.23 yield \[e_{2}\in\operatorname{supp}\operatorname{S}(K)\subseteq\operatorname{cl}\bigcup_{\ell=1}^{\infty}\operatorname{supp}\operatorname{S}(C^{(\ell)})\subseteq\operatorname{cl}\bigcup_{P\in\operatorname{supp}\mu}\operatorname{supp}\operatorname{S}(P)=\operatorname{supp}\operatorname{S}(C).\] We have used \(K\in\operatorname{supp}\mu\) as a "witness polytope" to establish \(e_{2}\in\operatorname{supp}\operatorname{S}(C)\).

Figure 1: The situation of Example 5.1
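In this planar case, the witness can also be checked by a direct computation. For \(n=2\), formula (8) shows that \(\operatorname{S}(K)\) is the sum of point masses at the outer unit normals of the edges of \(K=\operatorname{conv}\{0,e_{2},e_{1}+e_{2}\}\), weighted by the corresponding edge lengths, that is, \[\operatorname{S}(K)=\delta_{-e_{1}}+\delta_{e_{2}}+\sqrt{2}\,\delta_{2^{-1/2}(e_{1}-e_{2})},\] so in particular \(e_{2}\in\operatorname{supp}\operatorname{S}(K)\), as used above.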
Then we define \(C\) to be the \(3\)-polyoid with generating measure \[\mu\coloneqq\sum_{\ell=1}^{\infty}2^{-\ell}\delta_{C^{(\ell)}}.\] The sequence \((C^{(\ell)})_{\ell}\) converges to the segment \[K\coloneqq\operatorname{conv}\bigl{\{}0,-e_{2}\bigr{\}},\] so that \(\operatorname{supp}\mu=\bigl{\{}K,C^{(1)},C^{(2)},\ldots\bigr{\}}\). This time, \(\operatorname{TS}(C^{(\ell)},e_{2})\) for \(\ell\in\mathbb{N}\) and \(\operatorname{TS}(K,e_{2})\) are all \(0\)-dimensional. However, there is no fixed \(c>0\) such that every \(C^{(\ell)}\) has a \(c\)-cusp in direction \(e_{2}\), so that \(\operatorname{TS}(C,e_{2})\) is nontrivial by Lemmas 4.3 and 4.7. Theorem 1.1 predicts again that \(e_{2}\in\operatorname{supp}\operatorname{S}(C)\). However, since \(e_{2}\notin\operatorname{supp}\operatorname{S}(C^{(\ell)})\) for all \(\ell\in\mathbb{N}\) and \(e_{2}\notin\operatorname{supp}\operatorname{S}(K)\), we cannot choose a "witness polytope" in \(\operatorname{supp}\mu\) and repeat the argument from Example 5.1.

The problem is this. In the previous example, the faces between the second and third vertex of \(C^{(\ell)}\) converged to a one-dimensional face of the limit triangle \(K\) with normal \(e_{2}\). In the current example, however, these faces degenerate to a \(0\)-dimensional face. The only glimmer of hope is that the outer normals of these degenerating faces do still converge to \(e_{2}\). If we could just scale up \(C^{(\ell)}\) by a factor of \(\ell\), the faces would not degenerate, but then we are confronted with the problem that \(C^{(\ell)}\) is an unbounded sequence of convex bodies that does not converge to anything we might call a "witness polytope" anymore. On the other hand, by Lemma 2.4 we find a neighborhood \(U\subseteq\mathbb{S}^{n-1}\) of \(e_{2}\) such that for large enough \(\ell\) and all \(v\in U\), \[F(C^{(\ell)},v)\subseteq\operatorname{conv}\bigl{\{}0,-\ell^{-1}e_{1}-\ell^{-2}e_{2}\bigr{\}}\eqqcolon F^{(\ell)};\] see Figure 2 (c) for an illustration. Therefore, for all Borel sets \(V\subseteq U\), \[\tau(C^{(\ell)},V)=\tau(F^{(\ell)},V).\] Now Lemma 2.12 implies that \[\operatorname{S}(C^{(\ell)})\llcorner U=\operatorname{S}(F^{(\ell)})\llcorner U.\] So if we can show that \(e_{2}\in\operatorname{cl}\bigcup_{\ell=1}^{\infty}\operatorname{supp}\operatorname{S}(F^{(\ell)})\), then \(e_{2}\in\operatorname{cl}\bigcup_{\ell=1}^{\infty}\operatorname{supp}\operatorname{S}(C^{(\ell)})=\operatorname{supp}\operatorname{S}(C)\). Indeed, \(\ell\cdot F^{(\ell)}\to F:=\operatorname{conv}\{0,-e_{1}\}\), and therefore Lemma 2.24 yields \[e_{2}\in\operatorname{supp}\operatorname{S}(F)\subseteq\operatorname{cl}\bigcup_{\ell=1}^{\infty}\operatorname{supp}\operatorname{S}(\ell\cdot F^{(\ell)})=\operatorname{cl}\bigcup_{\ell=1}^{\infty}\operatorname{supp}\operatorname{S}(F^{(\ell)}).\]

In this example, we have leveraged that the \(c\)-cusps of \(C^{(\ell)}\) in direction \(e_{2}\) become more and more obtuse in the sense that \(c>0\) becomes smaller and smaller. This helped us find a sequence of faces, which unfortunately degenerated to a \(0\)-dimensional face in the limit. After "pruning" the sequence of triangles, i.e. removing some irrelevant vertices, we were able to scale up the polytopes in the sequence so that the sequence of faces converged to a \(1\)-dimensional face \(F\), which we used as our "witness polytope" to prove that \(e_{2}\in\operatorname{supp}\operatorname{S}(C)\).
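The final step can likewise be made explicit: the segment \(F=\operatorname{conv}\{0,-e_{1}\}\) has exactly two unit normals at which its support set is one-dimensional, namely \(\pm e_{2}\), so formula (8) gives \(\operatorname{S}(F)=\delta_{e_{2}}+\delta_{-e_{2}}\). In particular, \(e_{2}\in\operatorname{supp}\operatorname{S}(F)\), as required in the argument above.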
Figure 2: The situation of Example 5.2

**Example 5.3** (Double pruning).: We consider again \(\mathbb{R}^{2}\) with the standard orthonormal basis \((e_{1},e_{2})\). Define for all \(\ell\in\mathbb{N}\), \[v_{1}^{(\ell)}\coloneqq-e_{2},\quad v_{2}^{(\ell)}\coloneqq 0,\quad v_{3}^{(\ell)}\coloneqq-\ell^{-1}e_{1}-\ell^{-1}e_{2},\quad v_{4}^{(\ell)}\coloneqq-\ell^{-2}e_{1}-\ell^{-3}e_{2},\] \[C^{(\ell)}\coloneqq\operatorname{conv}\Bigl{\{}v_{1}^{(\ell)},v_{2}^{(\ell)},v_{3}^{(\ell)},v_{4}^{(\ell)}\Bigr{\}}.\] The vertices \(v_{2}^{(\ell)},v_{3}^{(\ell)},v_{4}^{(\ell)}\) all converge to \(0\), which is the unique element of the support set \(F(\lim C^{(\ell)},e_{2})\). In analogy to the previous example, we remove \(v_{1}^{(\ell)}\) and scale by a factor of \(\ell\) to obtain a sequence of triangles \[D^{(\ell)}\coloneqq\operatorname{conv}\bigl{\{}0,-e_{1}-e_{2},-\ell^{-1}e_{1}-\ell^{-2}e_{2}\bigr{\}}.\] Again, \(F(\lim D^{(\ell)},e_{2})\) is a singleton and the vertices \(\ell v_{2}^{(\ell)},\ell v_{4}^{(\ell)}\) of \(D^{(\ell)}\) converge to its unique element. Removing \(\ell v_{3}^{(\ell)}\) and scaling by \(\ell\) again, we get \[E^{(\ell)}\coloneqq\operatorname{conv}\bigl{\{}0,-e_{1}-\ell^{-1}e_{2}\bigr{\}}.\] Now, \(F(\lim E^{(\ell)},e_{2})=\lim E^{(\ell)}\) is one-dimensional. Applying similar arguments as in the previous example, we conclude from \(e_{2}\in\operatorname{cl}\bigcup_{\ell=1}^{\infty}\operatorname{supp}\operatorname{S}(E^{(\ell)})\) that \(e_{2}\in\operatorname{supp}\operatorname{S}(C)\). This example shows that the pruning procedure may have to be repeated several times.

After these preparatory examples, we describe the general approach.

**Definition 5.4**.: Let \(\mathbb{Q}=(Q_{\ell})_{\ell}\) be a bounded sequence of polytopes with a uniformly bounded number of vertices and \(u\in\mathbb{S}^{n-1}\). Let \(k\in\mathbb{N}\) be the smallest number such that all polytopes in \(\mathbb{Q}\) are \(k\)-topes. Choose an arbitrary sequence \(\mathbb{V}=(V_{\ell})_{\ell}=((v_{\ell}^{(1)},\dots,v_{\ell}^{(k)}))_{\ell}\) of \(k\)-tuples of points in \(\mathbb{R}^{n}\) such that \[Q_{\ell}=\operatorname{conv}\Bigl{\{}v_{\ell}^{(i)}\ \Big{|}\ i\in[k]\Bigr{\}}\quad\text{for all $\ell\in\mathbb{N}$}.\] Let \(\mathbb{V}^{\prime}=(V_{\ell_{s}})_{s}\) be a convergent subsequence of \(\mathbb{V}\) and \(Q\coloneqq\lim_{t\to\infty}Q_{\ell_{t}}\). Then we define a sequence \(\operatorname{prune}(\mathbb{Q},u)=(\operatorname{prune}(\mathbb{Q},u,s))_{s}\) of polytopes \[\operatorname{prune}(\mathbb{Q},u,s)\coloneqq c_{s}\left(\operatorname{conv}\Bigl{\{}v_{\ell_{s}}^{(i)}\ \Big{|}\ i\in[k],\lim_{t\to\infty}v_{\ell_{t}}^{(i)}\in F(Q,u)\Bigr{\}}-v_{\ell_{s}}^{(i_{0})}\right)\subseteq c_{s}\left(Q_{\ell_{s}}-v_{\ell_{s}}^{(i_{0})}\right),\] where \(i_{0}\in[k]\) is chosen such that \(\lim_{t\to\infty}v_{\ell_{t}}^{(i_{0})}\in F(Q,u)\) and \(c_{s}\) is the unique positive number such that \(\operatorname{diam}\operatorname{prune}(\mathbb{Q},u,s)=1\) if the convex hull by which \(\operatorname{prune}(\mathbb{Q},u,s)\) is defined is not a singleton, otherwise we set \(c_{s}\coloneqq 1\), for \(s\in\mathbb{N}\). Note that \(0\in\operatorname{prune}(\mathbb{Q},u,s)\). We may also pass to a subsequence of \(\operatorname{prune}(\mathbb{Q},u)\) and denote it in the same way; in any case, the sequence \(\operatorname{prune}(\mathbb{Q},u)\) is subject to various choices and not uniquely determined by \(\mathbb{Q}\) and \(u\).
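To connect Definition 5.4 with Example 5.2, consider the following admissible set of choices (one of several possible). Take \(\mathbb{Q}=(C^{(\ell)})_{\ell}\) from Example 5.2, \(u=e_{2}\), \(k=3\) and the spanning tuples \(V_{\ell}=(-e_{2},0,-\ell^{-1}e_{1}-\ell^{-2}e_{2})\), which converge without passing to a subsequence. Then \(Q=\lim_{\ell\to\infty}C^{(\ell)}=\operatorname{conv}\{0,-e_{2}\}\) and \(F(Q,e_{2})=\{0\}\); the limits of the second and third vertices lie in \(F(Q,e_{2})\), while the limit of the first does not. Choosing \(i_{0}=2\), so that \(v_{\ell}^{(i_{0})}=0\), we obtain \[\operatorname{prune}(\mathbb{Q},e_{2},\ell)=c_{\ell}\operatorname{conv}\bigl{\{}0,-\ell^{-1}e_{1}-\ell^{-2}e_{2}\bigr{\}}=\operatorname{conv}\Bigl{\{}0,-\tfrac{e_{1}+\ell^{-1}e_{2}}{\sqrt{1+\ell^{-2}}}\Bigr{\}}\longrightarrow\operatorname{conv}\{0,-e_{1}\},\] with \(c_{\ell}=\ell(1+\ell^{-2})^{-1/2}\). Up to the normalization to diameter \(1\), this is the sequence \((\ell F^{(\ell)})_{\ell}\) used in Example 5.2, and the limit is the witness segment \(F\) found there.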
The polytopes in \(\operatorname{prune}(\mathbb{Q},u)\) have diameter \(1\) or are singletons and they contain \(0\). If \[\lim_{t\to\infty}v_{\ell_{t}}^{(i)}\notin F(Q,u)\quad\text{for some $i\in[k]$},\] then \(k\geq 2\) and \(\operatorname{prune}(\mathbb{Q},u)\) consists of \((k-1)\)-topes. After finitely many steps, the members of the sequence of sequences defined by \[\operatorname{prune}_{0}(\mathbb{Q},u)\coloneqq\mathbb{Q},\quad\operatorname {prune}_{m+1}(\mathbb{Q},u)\coloneqq\operatorname{prune}(\operatorname{prune }_{m}(\mathbb{Q},u),u)\quad\text{for all $m\in\mathbb{N}$}\] remain unchanged (if we do not pass to a subsequence) and become equal to some "fixpoint" sequence \(\operatorname{prune}_{*}(\mathbb{Q},u)\). **Remark 5.5**.: If \(\operatorname{prune}_{*}(\mathbb{Q},u)\) is obtained as described in Definition 5.4 and \(Q^{*}\coloneqq\lim_{s\to\infty}\operatorname{prune}_{*}(\mathbb{Q},u,s)\), then \(0\in Q^{*}\subset u^{\perp}\) and \(\operatorname{diam}Q^{*}\in\{0,1\}\). The next two lemmas prepare the proof of the crucial Witness Lemma 5.8. The first is Lemma 5.6 which implies that at least locally pruning does not change the mixed area measures as far as their support is concerned. Lemma 5.7 then states a condition, which can be used to ensure that the limit of a pruning sequence is non-degenerate. **Lemma 5.6** (Pruning lemma).: _Let \(\mathbb{Q}=(Q_{\ell})_{\ell}\) be a bounded sequence of polytopes in \(\mathbb{R}^{n}\) with a uniform bound on the number of vertices, let \(u\in\mathbb{S}^{n-1}\) and \(m\in\mathbb{N}_{0}\). Then there are an \(\mathbb{S}^{n-1}\)-open neighborhood \(U\subseteq\mathbb{S}^{n-1}\) of \(u\), a subsequence \((Q_{\ell_{s}})_{s}\) and a sequence of positive numbers \((\lambda_{s})_{s}\) such that for all but finitely many \(s\in\mathbb{N}\) and for all \((n-2)\)-tuples \(\boldsymbol{\mathcal{C}}\) of convex bodies in \(\mathbb{R}^{n}\),_ \[\operatorname{S}(Q_{\ell_{s}},\boldsymbol{\mathcal{C}})\llcorner U=\lambda_{ s}\operatorname{S}(\operatorname{prune}_{m}(\mathbb{Q},u,s),\boldsymbol{\mathcal{C}}) \llcorner U.\] _In particular, the statement is true if \(\operatorname{prune}_{m}\) is replaced by \(\operatorname{prune}_{*}\)._ Proof.: The proof is by induction on \(m\in\mathbb{N}_{0}\). If \(m=0\), the claim follows from \(\mathbb{Q}=\operatorname{prune}_{0}(\mathbb{Q},u)\). Now assume \(m\geq 1\) and that the claim is true for smaller \(m\). Let \(k\in\mathbb{N}\) be the smallest possible number such that \(\mathbb{Q}\) consists of \(k\)-topes, just as in Definition 5.4. Let \(\mathbb{V}=(V_{\ell})_{\ell}=((v_{\ell}^{(1)},\ldots,v_{\ell}^{(k)}))_{\ell}\) be a sequence of spanning points and \(\mathbb{V}^{\prime}=(V_{\ell_{s}})_{s}\) a convergent subsequence (as in Definition 5.4). Write \[v^{(i)}\coloneqq\lim_{s\to\infty}v_{\ell_{s}}^{(i)}\quad\text{for all $i\in[k]$}.\] We apply Lemma 2.4 to \(\lim_{s\to\infty}Q_{\ell_{s}}=\operatorname{conv}\bigl{\{}v^{(i)}\ \big{|}\ i\in[k]\bigr{\}}\). 
Let \[I\coloneqq\Bigl{\{}i\in[k]\ \Big{|}\ v^{(i)}\in F\Bigl{(}\lim_{s\to\infty}Q_{\ell_{s} },u\Bigr{)}\Bigr{\}}.\] Lemma 2.4 shows that there is \(\varepsilon\in(0,1)\) such that for all \(w,x_{1},\ldots,x_{k}\) with \(d(u,w)<\varepsilon\) and \(d(v^{(i)},x_{i})<\varepsilon\ (i\in[k])\), the polytope \(P\coloneqq\operatorname{conv}\{x_{i}\ |\ i\in[k]\}\) satisfies \[F(P,w)\subseteq\operatorname{conv}\{x_{i}\ |\ i\in I\}.\] In particular, there is an open neighborhood \(U\subseteq\mathbb{R}^{n}\setminus\{0\}\) of \(u\) such that for all but finitely many \(s\) and for all \(w\in U\), \[F(Q_{\ell_{s}},w)-v^{(i_{0})}_{\ell_{s}}\subseteq\operatorname{conv}\Bigl{\{} v^{(i)}_{\ell_{s}}\ \Big{|}\ i\in I\Bigr{\}}-v^{(i_{0})}_{\ell_{s}}=c_{s}^{-1} \operatorname{prune}(\mathbb{Q},u,s)\subseteq Q_{\ell_{s}}-v^{(i_{0})}_{\ell _{s}},\] where \(c_{s}\) is the positive factor in Definition 5.4. It follows that \[F(\operatorname{prune}(\mathbb{Q},u,s),w)=c_{s}\left(F(Q_{\ell_{s}},w)-v^{(i_{ 0})}_{\ell_{s}}\right),\] and by Lemma 2.12 and the translation invariance of mixed area measures we get, for all but finitely many \(s\in\mathbb{N}\) and for every \((n-2)\)-tuple \(\mathcal{C}\) of convex bodies, \[S(Q_{\ell_{s}},\mathcal{C})\llcorner(U\cap\mathbb{S}^{n-1})=c_{s}^{-1}S( \operatorname{prune}(\mathbb{Q},u,s),\mathcal{C})\llcorner(U\cap\mathbb{S}^{n -1}).\] Applying the inductive hypothesis for \(m-1\) to \(\operatorname{prune}(\mathbb{Q},u)\), we obtain an \(\mathbb{S}^{n-1}\)-open neighborhood \(V\subseteq\mathbb{S}^{n-1}\) of \(u\), a subsequence \((\operatorname{prune}(\mathbb{Q},u,s_{t}))_{t}\) and a sequence of positive numbers \((\mu_{t})_{t}\) such that for all but finitely many \(t\in\mathbb{N}\) and for every \((n-2)\)-tuple \(\mathcal{C}\) of convex bodies, \[S(\operatorname{prune}(\mathbb{Q},u,s_{t}),\mathcal{C})\llcorner V=\mu_{t}\,S (\operatorname{prune}_{m}(\mathbb{Q},u,t),\mathcal{C})\llcorner V.\] Now, \(V\cap U\subseteq\mathbb{S}^{n-1}\) is also an \(\mathbb{S}^{n-1}\)-open neighborhood of \(u\), and for all but finitely many \(t\in\mathbb{N}\), \[S(Q_{\ell_{s_{t}}},\mathcal{C})\llcorner(V\cap U) =c_{s_{t}}^{-1}\,S(\operatorname{prune}(\mathbb{Q},u,s_{t}), \mathcal{C})\llcorner(V\cap U)\] \[=c_{s_{t}}^{-1}\mu_{t}\,S(\operatorname{prune}_{m}(\mathbb{Q},u,t ),\mathcal{C})\llcorner(V\cap U)\] for every \((n-2)\)-tuple \(\mathcal{C}\) of convex bodies. This concludes the induction. Because there is \(m\in\mathbb{N}_{0}\) such that \(\operatorname{prune}_{*}(\mathbb{Q},u)=\operatorname{prune}_{m}(\mathbb{Q},u)\), the claim is also true for \(\operatorname{prune}_{*}\). **Lemma 5.7** (Sticky vertices).: _Let \(u\in\mathbb{S}^{n-1}\). Let \(\mathbb{Q}=(Q_{\ell})_{\ell}\) be a bounded sequence of polytopes with a uniform bound on the number of vertices, satisfying the following property \(\mathfrak{P}(\mathbb{Q},u)\): For all but finitely many \(\ell\in\mathbb{N}\), there are distinct vertices \(x_{\ell},y_{\ell}\) of \(Q_{\ell}\) such that \(x_{\ell}\in F(Q_{\ell},u)\) and \(\|x_{\ell}-y_{\ell}\|^{-1}\langle x_{\ell}-y_{\ell},u\rangle\to 0\) as \(\ell\to\infty\)._ _Then the sequences \(\operatorname{prune}_{m}(\mathbb{Q},u)\), \(m\in\mathbb{N}\), can be chosen in such a way that the property \(\mathfrak{P}(\operatorname{prune}(\mathbb{Q},u)_{m},u)\) is satisfied for all \(m\in\mathbb{N}\)._ Proof.: Let \(k\in\mathbb{N}\) be the smallest possible number such that \(\mathbb{Q}\) consists of \(k\)-topes. It suffices to prove the claim for prune, since the argument can then be iterated. 
Let \(\mathbb{V}=(V_{\ell})_{\ell}=((v_{\ell}^{(1)},\ldots,v_{\ell}^{(k)}))_{\ell}\) be a sequence of spanning points chosen as in Definition 5.4 which has \(\mathbb{V}^{\prime}=(V_{\ell_{s}})_{s}\) as a convergent subsequence. Moreover, the subsequence can be chosen such that there are distinct \(i,j\in[k]\) with \(x_{\ell_{s}}=v_{\ell_{s}}^{(i)}\) and \(y_{\ell_{s}}=v_{\ell_{s}}^{(j)}\) for \(s\in\mathbb{N}\). Then we set \(Q\coloneqq\lim_{s\to\infty}Q_{\ell_{s}}\) and \[v^{(i^{\prime})}\coloneqq\lim_{s\to\infty}v_{\ell_{s}}^{(i^{\prime})}\quad \text{for }i^{\prime}\in[k].\] It follows that \[\left\langle v^{(i)},u\right\rangle\leftarrow\left\langle v_{\ell_{s}}^{(i)},u\right\rangle=\left\langle x_{\ell_{s}},u\right\rangle=h_{\mathbb{Q}_{\ell_ {s}}}(u)\to h_{Q}(u),\] as \(s\to\infty\), hence \(\left\langle v^{(i)},u\right\rangle=h_{Q}(u)\) and \(v^{(i)}\in F(Q,u)\). Moreover, \[\left\langle v^{(j)},u\right\rangle\leftarrow\left\langle v_{\ell_{s}}^{(j)},u\right\rangle=\left\langle y_{\ell_{s}},u\right\rangle=h_{\mathbb{Q}_{\ell_ {s}}}(u)+\left\langle y_{\ell_{s}}-x_{\ell_{s}},u\right\rangle\to h_{Q}(u),\] as \(s\to\infty\), because of the assumption and since \(\left(\left\|y_{\ell_{s}}-x_{\ell_{s}}\right\|\right)_{s}\) is bounded. Hence, we also have \(v^{(j)}\in F(Q,u)\). The construction of \(\operatorname{prune}(\mathbb{Q},u,s)\) then shows that \(c_{s}(v_{\ell_{s}}^{(i)}-v_{\ell_{s}}^{(i_{0})})\) and \(c_{s}(v_{\ell_{s}}^{(j)}-v_{\ell_{s}}^{(i_{0})})\) are distinct vertices of \(\operatorname{prune}(\mathbb{Q},u,s)\) for all \(s\in\mathbb{N}\), where \(c_{s}\) is the positive scaling factor in Definition 5.4. for \(s\in\mathbb{N}\). In addition, \(c_{s}(v_{\ell_{s}}^{(i)}-v_{\ell_{s}}^{(i_{0})})\in F(\operatorname{prune}( \mathbb{Q},u,s),u)\) and \[\frac{\left\langle c_{s}(v_{\ell_{s}}^{(i)}-v_{\ell_{s}}^{(i_{0})})-c_{s}(v_{ \ell_{s}}^{(j)}-v_{\ell_{s}}^{(i_{0})}),u\right\rangle}{\left\|c_{s}(v_{\ell_{ s}}^{(i)}-v_{\ell_{s}}^{(i_{0})})-c_{s}(v_{\ell_{s}}^{(j)}-v_{\ell_{s}}^{(i_{0})}) \right\|}=\frac{\left\langle x_{\ell_{s}}-y_{\ell_{s}},u\right\rangle}{\left\| x_{\ell_{s}}-y_{\ell_{s}}\right\|}\to 0,\] as \(s\to\infty\). Thus \(\operatorname{prune}(\mathbb{Q},u)\) has the required property and the iteration can be continued. After these preparations, we state and prove the main auxiliary result in this section. **Lemma 5.8** (Witness lemma).: _Let \(u\in\mathbb{S}^{n-1}\). Let \(M\subset\mathbb{R}^{n}\) be a \(k\)-polyoid with generating measure \(\mu\). If \(\operatorname{TS}(M,u)\neq\{0\}\), then there is a \(k\)-tope \(\operatorname{Re}(M,u)\subset u^{\perp}\) with \(\{0\}\subset\operatorname{Re}(M,u)\) (that is not a singleton) such that for every \((n-2)\)-tuple \(\mathcal{C}\) of convex bodies in \(\mathbb{R}^{n}\),_ \[u\in\operatorname{supp}\operatorname{S}(\operatorname{Re}(M,u),\mathcal{C}) \quad\text{implies}\quad u\in\operatorname{supp}\operatorname{S}(M,\mathcal{C }).\] Proof.: For every \(Q\in\operatorname{supp}\mu\), choose an arbitrary vertex \(x_{Q}\in F(Q,u)\). If \(\operatorname{TS}(M,u)\neq\{0\}\), then Corollary 4.8 shows that for every \(c>0\) there is some \(P\in\operatorname{supp}\mu\)_not_ having a \(c\)-cusp in direction \(u\). 
Hence we can find a sequence of \(k\)-topes \(\mathbf{Q}:=(Q_{\ell})_{\ell}\) in \(\operatorname{supp}\mu\) and a sequence of vertices \(y_{\ell}\in Q_{\ell}\), \(\ell\in\mathbb{N}\), such that \(x_{Q_{\ell}}\neq y_{\ell}\) and \[\left\|y_{\ell}-x_{Q_{\ell}}\right\|^{-1}\left\langle y_{\ell}-x_{Q_{\ell}},u \right\rangle\to 0\quad\text{as $\ell\to\infty$.}\] By Lemma 5.7, \(\operatorname{prune}_{*}(\mathbf{Q},u,\ell)\) has at least two distinct vertices, and by Definition 5.4 diameter 1, for all but finitely many \(\ell\). So \(\operatorname{prune}_{*}(\mathbf{Q},u)\), being a convergent sequence of \(k\)-topes, converges to a \(k\)-tope \(\operatorname{Re}(M,u)\subset u^{\perp}\) of diameter 1 with \(0\in\operatorname{Re}(M,u)\) (see Remark 5.5). In particular, \(\operatorname{Re}(M,u)\) is not a singleton. By Pruning Lemma 5.6, there is a sequence of positive numbers \((\lambda_{s})_{s}\), a subsequence \((Q_{\ell_{s}})_{s}\) of \(\mathbf{Q}\) and an \(\mathbf{S}^{n-1}\)-open neighborhood \(U\subseteq\mathbf{S}^{n-1}\) of \(u\) such that for an arbitrary \((n-2)\)-tuple \(\mathcal{C}\) of convex bodies, \[\operatorname{S}(Q_{\ell_{s}},\mathcal{C})\llcorner U=\lambda_{s}\operatorname {S}(\operatorname{prune}_{*}(\mathbf{Q},u,s),\mathcal{C})\llcorner U. \tag{13}\] Now assume that \(u\in\operatorname{supp}\operatorname{S}(\operatorname{Re}(M,u),\mathcal{C})\). Then by continuity of \(\operatorname{S}\) and Lemma 2.24, \[u\in\operatorname{cl}\bigcup_{s=1}^{\infty}\operatorname{supp}\operatorname{ S}(\operatorname{prune}_{*}(\mathbf{Q},u,s),\mathcal{C}),\] and because \(U\) is a neighborhood of \(u\), eq. (13) and Theorem 2.23 now imply \[u\in\operatorname{cl}\bigcup_{s=1}^{\infty}\operatorname{supp}\operatorname{ S}(Q_{\ell_{s}},\mathcal{C})\subseteq\operatorname{supp}\operatorname{S}(M, \mathcal{C}),\] which proves the assertion. ## 6 Switching In this section, we provide a lemma that will be needed in the proof of our main result to carry out the induction step. Recall the conventions and the notation concerning tuples of sets introduced in Section 2. As usual, a linear subspace \(R\) of some ambient vector space is said to be trivial if \(R=\{0\}\). **Lemma 6.1** (Switching lemma).: _Assume that \(n\geq 2\) and \(u\in\mathbf{S}^{n-1}\). Let \(\mathcal{T}=(T_{1},\ldots,T_{n-1})\) and \(\mathcal{R}=(R_{1},\ldots,R_{n-1})\) be tuples of linear subspaces of \(u^{\perp}\) such that \(\mathcal{T}\) is semicritical and \(R_{i}\) is nontrivial for all \(i\in[n-1]\). Then there are index sets \(\varnothing\neq I\subseteq J\subseteq[n-1]\) such that \(\mathcal{R}_{I}\) spans an \(|I|\)-dimensional subspace and \(\mathcal{R}_{J}+\mathcal{T}_{J^{c}}\) is semicritical._ Proof.: Denote by \(\mathbf{\mathcal{S}}=(T_{1}+R_{1},\ldots,T_{n-1}+R_{n-1})\) the tuple of elementwise sums of \(\mathbf{\mathcal{T}}\) and \(\mathbf{\mathcal{R}}\). Choose \(J\subseteq[n-1]\) inclusion-maximal such that \[\mathrm{V}\big{(}\mathbf{\mathcal{R}}_{J}+\mathbf{\mathcal{S}}_{J^{c}}\big{)}>0. \tag{14}\] Such \(J\) exists since \(\mathbf{\mathcal{S}}=\mathbf{\mathcal{R}}_{\varnothing}+\mathbf{\mathcal{S}}_{[n-1]}\) is semicritical: \(T_{i}\subseteq S_{i}\) for \(i\in[n-1]\), \(\mathbf{\mathcal{T}}\) is semicritical by assumption and hence Lemma 2.18 (6) implies the assertion. 
Because \(J\) is maximal, even \(\mathbf{\mathcal{R}}_{J}+\mathbf{\mathcal{T}}_{J^{c}}\) is semicritical: Repeatedly applying Lemma 2.21, we find a set \(K\subseteq J^{c}\) such that \(\mathbf{\mathcal{R}}_{J\cup K}+\mathbf{\mathcal{T}}_{J^{c}\setminus K}\) is semicritical. But then also \(\mathbf{\mathcal{R}}_{J\cup K}+\mathbf{\mathcal{S}}_{J^{c}\setminus K}\) is semicritical, forcing \(K=\varnothing\) because \(J\) is inclusion-maximal. Furthermore, let \(I\subseteq[n-1]\) be inclusion-minimal such that \(\mathbf{\mathcal{R}}_{I\cap J}+\mathbf{\mathcal{S}}_{I\setminus J}\) is subcritical. Such \(I\) exists because \(\mathbf{\mathcal{R}}_{J}+\mathbf{\mathcal{S}}_{J^{c}}\) is subcritical as an \((n-1)\)-tuple of \(u^{\perp}\)-subspaces (since \(n\geq 2\)), showing that at least \([n-1]\) satisfies the desired property. Note that \(I\neq\varnothing\), since by definition an empty tuple is not subcritical. Then \(E\coloneqq\overline{\mathrm{span}}\big{(}\mathbf{\mathcal{R}}_{I\cap J}+\mathbf{ \mathcal{S}}_{I\setminus J}\big{)}\) is \(|I|\)-dimensional: On the one hand, \(\mathbf{\mathcal{R}}_{I\cap J}+\mathbf{\mathcal{S}}_{I\setminus J}\) is semicritical by Lemma 2.18 (1). On the other hand, it is subcritical by the construction of \(I\). If it spanned a higher-dimensional subspace, then this tuple would have to contain an even smaller subcritical set, contradicting the minimality of \(I\). By Lemma 2.19 and relation (14), it follows that \[\mathrm{V}\big{(}\pi_{E^{\perp}}(\mathbf{\mathcal{R}}_{J\setminus I}+\mathbf{ \mathcal{S}}_{J^{c}\setminus I})\big{)}>0. \tag{15}\] It remains to show that \(I\subseteq J\). Assume for a contradiction that, without loss of generality, \(1\in I\setminus J\). Because \(I\) was chosen inclusion-minimally such that \(\mathbf{\mathcal{R}}_{I\cap J}+\mathbf{\mathcal{S}}_{I\setminus J}\) is subcritical, it follows that \(\mathbf{\mathcal{R}}_{I\cap J}+\mathbf{\mathcal{S}}_{I\setminus(J\cup\{1\})}\) is critical and as \(R_{1}\) is nontrivial, Lemma 2.18 (7) implies that \(\mathbf{\mathcal{R}}_{I\cap(J\cup\{1\})}+\mathbf{\mathcal{S}}_{I\setminus(J\cup\{1\})}\) is semicritical. Since \(1\in I\setminus J\) and \(R_{1}\subseteq S_{1}\subseteq E\), \(\mathbf{\mathcal{R}}_{I\cap(J\cup\{1\})}+\mathbf{\mathcal{S}}_{I\setminus(J\cup\{1\})}\) spans a subspace of \(E\) of dimension \(|I|=\dim E\), in other words, it also spans \(E\). By Lemma 2.19 and relation (15), it follows that \[\mathrm{V}\big{(}\mathbf{\mathcal{R}}_{J\cup\{1\}}+\mathbf{\mathcal{S}}_{(J\cup\{1\}) ^{c}}\big{)}>0,\] that is, \(\mathbf{\mathcal{R}}_{J\cup\{1\}}+\mathbf{\mathcal{S}}_{(J\cup\{1\})^{c}}\) is semicritical, in contradiction to the maximality of \(J\) as expressed by relation (14). This proves that \(I\subseteq J\). Finally, we get \(\dim\mathbf{\mathcal{R}}_{I}=|I|\), since \(I\subseteq J\) and \(\dim E=|I|\). ## 7 Proof of the characterization theorem Now we are ready to confirm Theorem 1.1 for smooth convex bodies and polyoids, for which eq. (1o) should be recalled. Proof.: First, observe that it suffices to prove the claim for tuples that only contain polyoids (macroids). Consider the case that \(\boldsymbol{\mathcal{M}}\) does _not_ solely consist of polyoids (macroids) and let \(\boldsymbol{\mathcal{M}}^{\prime}\) be the tuple obtained from \(\boldsymbol{\mathcal{M}}\) by replacing all smooth bodies by \(B^{n}\). 
Clearly, \(\boldsymbol{\mathcal{M}}^{\prime}\) consists of polyoids (macroids), and the claim for \(\boldsymbol{\mathcal{M}}\) is equivalent to the claim for \(\boldsymbol{\mathcal{M}}^{\prime}\) by the following argument. All smooth convex bodies have the same, \((n-1)\)-dimensional, touching spaces. Therefore, \(\operatorname{cl}\operatorname{ext}\boldsymbol{\mathcal{M}}=\operatorname{ cl}\operatorname{ext}\boldsymbol{\mathcal{M}}^{\prime}\). We can assume that the smooth and strictly convex body, contained in \(\boldsymbol{\mathcal{M}}\) by assumption, is the first one. By [7, Lem. 7.6.15], \(\operatorname{supp}\operatorname{S}(\boldsymbol{\mathcal{M}})=\operatorname{ supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{M}}_{\setminus 1})\). Now [10, Cor. 14.3] shows that we can replace the remaining smooth bodies in \(\boldsymbol{\mathcal{M}}_{\setminus 1}\) by \(B^{n}\), and we obtain \(\operatorname{supp}\operatorname{S}(\boldsymbol{\mathcal{M}})= \operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{M}}^{\prime}_{ \setminus 1})=\operatorname{supp}\operatorname{S}(\boldsymbol{\mathcal{M}}^{ \prime})\). Hence, it suffices to prove the claim for tuples that only contain polyoids (macroids), such as \(\boldsymbol{\mathcal{M}}^{\prime}\). It remains to show for tuples \(\boldsymbol{\mathcal{M}}\) of polyoids that \[\operatorname{supp}\operatorname{S}(\boldsymbol{\mathcal{M}})= \operatorname{cl}\operatorname{ext}\boldsymbol{\mathcal{M}}.\] For this we prove two inclusions. "\(\subseteq\)": For this part of the argument, we only need the weaker assumption that \(\boldsymbol{\mathcal{M}}\) is a tuple of macroids. By Theorem 2.23 and Lemma 2.22, we get \[\operatorname{supp}\operatorname{S}(\boldsymbol{\mathcal{M}})= \operatorname{cl}\bigcup_{\mathcal{P}\in\prod_{i=1}^{n-1}\operatorname{supp} \mu_{i}}\operatorname{supp}\operatorname{S}(\mathcal{P})=\operatorname{cl} \bigcup_{\mathcal{P}\in\prod_{i=1}^{n-1}\operatorname{supp}\mu_{i}} \operatorname{ext}\mathcal{P}.\] So it remains to verify that \[\operatorname{cl}\bigcup_{\mathcal{P}\in\prod_{i=1}^{n-1}\operatorname{supp} \mu_{i}}\operatorname{ext}(\mathcal{P})\subseteq\operatorname{cl} \operatorname{ext}\boldsymbol{\mathcal{M}}.\] Let \(\mathcal{P}=(P_{1},\ldots,P_{n-1})\in\prod_{i=1}^{n-1}\operatorname{supp}\mu_{i}\). We claim that for all \(u\in\operatorname{S}^{n-1}\) and \(i\in[n-1]\), \[\operatorname{TS}(P_{i},u)\subseteq\operatorname{TS}(M_{i},u), \tag{16}\] which would imply \(\operatorname{ext}\mathcal{P}\subseteq\operatorname{ext}\boldsymbol{\mathcal{M}}\) by Lemma 2.18 (6) and conclude the proof (for zonoids, compare (16) with [6, Lem. 3.2]). Set \(W\coloneqq\operatorname{TS}(M_{i},u)^{\perp}\) and note that \(u\in W\). Then by Lemma 3.3, relation (16) is equivalent to \(\operatorname{TS}_{W}(\pi_{W}(P_{i}),u)\subseteq\operatorname{TS}_{W}(\pi_{W }(M_{i}),u)=\{0\}\). Now \(\pi_{W}(P_{i})\) is in the support of the generating measure of \(\pi_{W}(M_{i})\) by Lemma 3.4 (here we only need the inclusion which holds for general macroids). Together with \(\operatorname{TS}_{W}(\pi_{W}(M_{i}),u)=\{0\}\) and Lemmas 4.3 and 4.7, this implies \(\operatorname{TS}_{W}(\pi_{W}(P_{i}),u)=\{0\}\) and relation (16). "\(\supseteq\)": The proof of this inclusion is by induction on \(n\). The case \(n=1\) follows from Remark 2.6 and the fact that the empty tuple is semicritical, rendering every \(u\in S^{0}\) extreme. Assume \(n\geq 2\) and that the claim is true for smaller \(n\). Let \(u\in\operatorname{ext}\boldsymbol{\mathcal{M}}\) be given. 
The linear subspaces \(\operatorname{TS}(M_{i},u)\subseteq u^{\perp}\), \(i\in[n-1]\), form a semicritical tuple since \(u\in\operatorname{ext}\boldsymbol{\mathcal{M}}\), in particular \(\operatorname{TS}(M_{i},u)\neq\{0\}\). Then the linear subspaces \(\operatorname{span}\operatorname{Re}(M_{i},u)\subseteq u^{\perp}\), which were defined in Witness Lemma 5.8, are nontrivial and \(\{0\}\subset\operatorname{Re}(M_{i},u)\) by Lemma 5.8. Define \[\boldsymbol{\mathcal{D}} \coloneqq(\operatorname{span}\operatorname{Re}(M_{1},u),\ldots, \operatorname{span}\operatorname{Re}(M_{n-1},u))\] \[=(\operatorname{TS}(\operatorname{Re}(M_{1},u),u),\ldots, \operatorname{TS}(\operatorname{Re}(M_{n-1},u),u)),\] where the equality follows from Lemma 2.5, since \(\operatorname{Re}(M_{i},u)\) are polytopes with \(0\in\operatorname{Re}(M_{i},u)\subseteq u^{\perp}\) so that \(F(\operatorname{Re}(M_{i},u),u)=\operatorname{Re}(M_{i},u)\), for \(i\in[n-1]\). According to Lemma 6.1, there are index sets \(\varnothing\neq I\subseteq J\subseteq[n-1]\) such that \(\boldsymbol{\mathcal{D}}_{I}\) spans an \(|I|\)-dimensional subspace \(E\) and \(\boldsymbol{\mathcal{D}}_{J}+\operatorname{TS}(\boldsymbol{\mathcal{M}}_{J^{c} },u)\) is semicritical. We now interpret the \(k\)-tope \(\operatorname{Re}(M_{i},u)\), where \(i\in J\), as a \(k\)-polyoid with generating Dirac measure \(\delta_{\operatorname{Re}(M_{i},u)}\) and define \[\boldsymbol{\mathcal{M}}^{\prime}\coloneqq(M_{1}^{\prime},\ldots,M_{n-1}^{ \prime}),\quad\text{where }M_{i}^{\prime}\coloneqq\begin{cases}\operatorname{Re}(M_{i},u),&i\in J, \\ M_{i},&i\notin J.\end{cases}\] It now suffices to prove that \(u\in\operatorname{supp}\operatorname{S}(\boldsymbol{\mathcal{M}}^{\prime})\): Using that \(\operatorname{Re}(M_{j},u)=M_{j}^{\prime}\) (\(j\in J\)), repeated applications of Witness Lemma 5.8 show that if \(u\in\operatorname{supp}\operatorname{S}(\boldsymbol{\mathcal{M}}^{\prime})\), then \(u\in\operatorname{supp}\operatorname{S}(\boldsymbol{\mathcal{M}})\). Clearly, \(\boldsymbol{\mathcal{M}}^{\prime}\) is also a tuple of \(k\)-polyoids and \(\operatorname{TS}(\boldsymbol{\mathcal{M}}^{\prime},u)\) is semicritical because \(\boldsymbol{\mathcal{D}}_{J}+\operatorname{TS}(\boldsymbol{\mathcal{M}}_{J^{c} },u)\) is a semicritical permutation. Furthermore, all spaces in \(\operatorname{TS}(\boldsymbol{\mathcal{M}}_{I}^{\prime},u)=\boldsymbol{ \mathcal{D}}_{I}\) are subspaces of \(E\). Lemmas 2.19 and 3.3 now imply that \(\pi_{E^{\perp}}\operatorname{TS}(\boldsymbol{\mathcal{M}}_{I^{c}}^{\prime},u)= \operatorname{TS}_{E^{\perp}}(\pi_{E^{\perp}}\boldsymbol{\mathcal{M}}_{I^{c}} ^{\prime},u)\) is also a semicritical tuple, that is, we have \(u\in\operatorname{ext}\pi_{E^{\perp}}(\boldsymbol{\mathcal{M}}^{\prime})_{I^{c}}\). There is an inner product space isomorphism \(E^{\perp}\cong\mathbb{R}^{\dim E^{\perp}}\). Using this isomorphism and \(\dim E^{\perp}=n-|I|\in[1,n-1]\), we can apply the inductive hypothesis to \(u\in E^{\perp}\) and the tuple \(\pi_{E^{\perp}}(\boldsymbol{\mathcal{M}}^{\prime})_{I^{c}}\) of \(k\)-polyoids in \(E^{\perp}\) and thus conclude from \(u\in\operatorname{ext}\pi_{E^{\perp}}(\boldsymbol{\mathcal{M}}^{\prime})_{I^{c}}\) that \[u\in\operatorname{supp}\operatorname{S}_{E^{\perp}}(\pi_{E^{\perp}}( \boldsymbol{\mathcal{M}}^{\prime})_{I^{c}}). \tag{17}\] On the other hand, \(\boldsymbol{\mathcal{M}}_{I}^{\prime}\) consists of \(k\)-topes in \(E\). 
So Lemma 2.15 yields \[\begin{pmatrix}n-1\\ |I|\end{pmatrix}\operatorname{S}(\boldsymbol{\mathcal{M}}^{\prime})= \operatorname{V}(\boldsymbol{\mathcal{M}}_{I}^{\prime})\cdot\operatorname{S}_{E^ {\perp}}^{\prime}(\pi_{E^{\perp}}(\boldsymbol{\mathcal{M}}_{I^{c}}^{\prime})). \tag{18}\] Because (span \(M^{\prime}_{i}\))\({}_{i\in I}=\mathcal{D}_{I}\) is a subtuple of the semicritical tuple \(\mathcal{D}_{J}+\mathrm{TS}(\mathcal{M}_{I^{c}},u)\), since \(I\subseteq J\), it follows from Lemma 2.16 that \(\mathrm{V}(\mathcal{M}^{\prime}_{I})>0\) and we conclude with relations (17) and (18) that \[u\in\mathrm{supp}\,S^{\prime}_{E^{\perp}}(\pi_{E^{\perp}}(\mathcal{M}^{\prime}) _{I^{c}})=\mathrm{supp}\,\mathrm{S}(\mathcal{M}^{\prime})\] and, as noted previously, therefore \(u\in\mathrm{supp}\,\mathrm{S}(\mathcal{M})\). Finally, we note the following more general result which is implied by the preceding proof. **Proposition 7.1**.: _Let \(\mathcal{C}=(C_{1},\ldots,C_{n-1})\) be an \((n-1)\)-tuple of macroids (or smooth convex bodies provided at least one of the bodies \(C_{i}\) is smooth and strictly convex) in \(\mathbb{R}^{n}\). Then_ \[\mathrm{supp}\,\mathrm{S}(\mathcal{C},\cdot)\subseteq\mathrm{cl}\,\mathrm{ext }\,\mathcal{C}. \tag{19}\] **Acknowledgements.** D. Hug was supported by DFG research grant HU 1874/5-1 (SPP 2265).
2303.00093
Analytic calculation of the vison gap in the Kitaev spin liquid
Although the ground-state energy of the Kitaev spin liquid can be calculated exactly, the associated vison gap energy has to date only been calculated numerically from finite size diagonalization. Here we show that the phase shift for scattering Majorana fermions off a single bond-flip can be calculated analytically, leading to a closed-form expression for the vison gap energy $\Delta = 0.2633J$. Generalizations of our approach can be applied to Kitaev spin liquids on more complex lattices such as the three dimensional hyper-octagonal lattice.
Aaditya Panigrahi, Piers Coleman, Alexei Tsvelik
2023-02-28T21:40:24Z
http://arxiv.org/abs/2303.00093v2
# Analytic calculation of the vison gap in the Kitaev spin liquid ###### Abstract Although the ground-state energy of the Kitaev spin liquid can be calculated exactly, the associated vison gap energy has to date only been calculated numerically from finite size diagonalization. Here we show that the phase shift for scattering Majorana fermions off a single bond-flip can be calculated analytically, leading to a closed-form expression for the vison gap energy \(\Delta=0.2633J\). Generalizations of our approach can be applied to Kitaev spin liquids on more complex lattices such as the three dimensional hyper-octagonal lattice. ## I Introduction Kitaev spin liquids (KSL) are a class of exactly solvable quantum spin liquid that exhibit spin fractionalization, anyonic excitations and long-range entanglement [1; 2; 3; 4; 5]. The fractionalization of spins into Majorana fermions is accompanied by the formation of emergent \(\mathbb{Z}_{2}\) gauge fields, giving rise to \(\mathbb{Z}_{2}\) vortex excitations or "visons". These excitations are gapped, and the energy cost associated with creating two visons on adjacent plaquettes is called the vison gap \(\Delta_{v}\) (Fig[1]). Proposals for the practical realization of Kitaev spin liquids in quantum materials, including \(\alpha\)-RuCl\({}_{3}\)[5; 6; 7; 8; 9; 10; 11; 12; 13; 14] and Iridates [15; 16; 11] have renewed interest in the thermodynamics of Kitaev spin liquid [17; 18; 19; 20; 21; 22; 23; 24]. The extension of these ideas to Yao-Lee spin liquid [25; 26] and its application to Kondo models, [27; 28] motivate the development of an analytical approach to calculate the vison gap \(\Delta_{v}\). The vison gap in KSLs has to date, been determined by numerical diagonalization of finite size systems [1; 3]. Here we present a Green's function approach for the analytical computation of the vison gap \(\Delta_{v}\) from the scattering phase shift associated with a a \(\mathbb{Z}_{2}\) bond-flip. Our work builds on theoretical developments in the field of Kitaev spin liquids which relate to the interplay between Majorana fermions and visons [1; 29; 30; 31; 32; 33; 34; 35; 36; 37]. Using exact calculations, we find the vison gap energy of \(\Delta_{v}=0.263313(6)J\) for the Kitaev spin liquid on honeycomb lattice in the gapless phase, extending the accuracy of previous calculations [1; 3]. Our calculations reveal the formation of Majorana resonances in the density of states which accompany the formation of two adjacent visons. Our approach can be simply generalized to more complex lattices and are immediately generalizable to Yao-Lee spin liquids. ## II Vison gap in the Kitaev honeycomb model The Kitaev honeycomb lattice model [1] is described by the Hamiltonian \[H_{K}=\sum_{<ij>}J_{\alpha_{ij}}\sigma_{i}^{\alpha_{ij}}\sigma_{j}^{\alpha_{ij }}, \tag{1}\] where the Heisenberg spins \(\vec{\sigma}_{i}=(\sigma_{i}^{x},\sigma_{i}^{y},\sigma_{i}^{z})\) at site \(i\) interact with their nearest neigbors via an Ising coupling between the \(\alpha_{ij}=x,y,z\) spin components, along the coresponding bond directions \(\langle ij\rangle\), with strength \(J_{\alpha_{ij}}\), as shown in Fig[1]. An exact solution of Kitaev Model [1] is found by representing the spins as products of Majorana fermions, \(\sigma_{j}^{\alpha}=2c_{j}b_{j}^{0}\) which satisfy canonical anti-commutation algebras, \(\{c_{i},b_{j}^{\alpha}\}=0\), \(\{b_{i}^{0},b_{j}^{0}\}=\delta_{ij}\delta^{\alpha,\beta}\), (taking the convention that \(c_{j}^{z}=(b_{j}^{\alpha})^{2}=1/2\)). 
The system is projected into the physical subspace by selecting \(\mathcal{D}_{j}\equiv-4ic_{j}b_{j}^{x}b_{j}^{y}b_{j}^{z}=1\) at each site, allowing the Hamiltonian (1) to be rewritten as \(\mathbb{Z}_{2}\) gauge theory \[H_{KSL}=2\sum_{<ij>}J_{\alpha_{ij}}\tilde{u}_{ij}(ic_{i}c_{j}), \tag{2}\] Figure 1: (a) The Kitaev honeycomb lattice model, where the Ising spin couplings along the x, y and z directions are labelled by blue, green and red bonds respectively, with primitive lattice vectors \(\vec{a}_{1}\) and \(\vec{a}_{2}\). (b) A bond-reversal at the origin creates a vison pair, costing an energy \(\Delta_{v}\). The string connecting the adjacent visons is indicated in light blue. where the gauge fields \(\hat{u}_{ij}=2ib_{i}^{\alpha_{ij}}b_{j}^{\alpha_{ij}}=\pm 1\) on bond \(ij\) commute with the Hamiltonian, \([\hat{u}_{ij},H_{K}]=0\). The plaquette operators \({\cal W}_{p}\) \[{\cal W}_{p}=\prod_{<i,j>\in p}u_{ij}\quad(i\in A,j\in B). \tag{3}\] formed from the product of gauge fields \(\hat{u}_{ij}\) around the hexagonal loop \(p\) ( plaquette), are gauge invariant and also commute with the Hamiltonian \([{\cal W}_{p},H_{K}]=0\) and constraint operators \([{\cal W}_{p},{\cal D}_{j}]=0\), giving rise to a set of static constants of motion which take values \({\cal W}_{p}=\pm 1\). Each eigenstate is characterized by the configurations of \(\{{\cal W}_{p}\}\); Lieb's theorem Lieb (1961) specifies that the ground state configuration is flux-free, i.e. \({\cal W}_{p}=1\) for all hexagons \(p\). In what follows we will choose the gauge \(\hat{u}_{ij}=1\) when \(i\in A\) and \(j\in B\) sublattice, assigning \[H_{0}=H_{KSL}[u_{ij}\to 1]. \tag{4}\] Rewriting \(H_{0}\) in momentum space, we obtain \[H_{0}=\frac{1}{2}\sum_{{\bf k}\in{\bf BZ}}\psi_{\bf k}^{\dagger}(\vec{\gamma} _{\bf k}\cdot\vec{\tau})\psi_{\bf k}, \tag{5}\] where \[\psi_{\bf k}=\frac{1}{\sqrt{N_{c}}}\sum_{j}\left(\begin{array}{c}c_{j,A}\\ c_{j,B}\end{array}\right)e^{-i{\bf k}\cdot{\bf R}_{j}} \tag{6}\] creates a Majorana in momentum space, where \(N_{c}\) is the number of unit cells and \({\bf R}_{j}\) is the location of the unit cell and \(\vec{\gamma}_{\bf k}=(Re(\gamma_{\bf k}),-Im(\gamma_{\bf k}))\) is expressed in terms of the form factor \[\begin{split}\gamma_{\bf k}&=2i(J_{z}+J_{x}e^{ik_{1}}+J _{y}e^{ik_{2}}),\\ {\bf k}&=\frac{k_{1}}{2\pi}{\bf b_{1}}+\frac{k_{2}}{2 \pi}{\bf b_{2}},\quad k_{1},k_{2}\in[0,2\pi].\end{split} \tag{7}\] Here we have employed a reciprocal lattice basis \({\bf b_{1}},{\bf b_{2}}\) to span the momentum \({\bf k}\in{\bf BZ}\), which transforms to rhombus shaped Brillouin zone in the reciprocal lattice (see Fig. 2 ). The Majorana excitation spectrum of the Kitaev spin liquid is given by the eigenvalues of \(H_{0}\), \(\epsilon_{\bf k}=\pm|\gamma_{\bf k}|\). We create two adjacent visons by flipping the gauge field in the unit cell at origin to \(\hat{u}_{(0,A)(0,B)}=-1\) as shown in Fig [1], resulting in the following Hamiltonian: \[H_{KSL+2v}=H_{0}+V, \tag{8}\] where \[\hat{V}=-4J_{z}(ic_{0,A}c_{0,B}) \tag{9}\] acts as a scattering term for majoranas in the bulk. In this way, the vison gap calculation is formulated as a scattering problem. 
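The structure of \(H_{0}\) is easy to probe numerically. The short sketch below is ours (not part of the paper): it evaluates the Majorana dispersion \(\epsilon_{\mathbf{k}}=\pm|\gamma_{\mathbf{k}}|\) from the form factor of Eq. (7) on a grid over the rhombus-shaped Brillouin zone at the isotropic point \(J_{x}=J_{y}=J_{z}=J\); the grid size and the choice \(J=1/2\) (so that \(K=2J=1\)) are arbitrary.

```python
import numpy as np

J = 0.5                                   # isotropic coupling; K = 2J = 1 (illustrative units)
nk = 201
k = np.linspace(0.0, 2.0*np.pi, nk)
K1, K2 = np.meshgrid(k, k, indexing="ij")

# Form factor of Eq. (7) and Majorana dispersion eps_k = |gamma_k|
gamma = 2j*(J + J*np.exp(1j*K1) + J*np.exp(1j*K2))
eps = np.abs(gamma)

print("band top  max|gamma_k| =", eps.max())   # 3K = 6J
print("band min  min|gamma_k| =", eps.min())   # close to zero: Dirac points near (2pi/3, 4pi/3)
```

On a finite grid the minimum only approaches zero, consistent with a gapless unperturbed spectrum before the bond flip is introduced.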
For this case, the Hamiltonian is given by \[H_{KSL+2v}=\frac{1}{2}\sum_{{\bf k}\in{\bf BZ}}\psi_{\bf k}^{\dagger}(\vec{ \gamma}_{\bf k}\cdot\vec{\tau})\psi_{\bf k}+\frac{1}{2}{\bf c}_{0}^{T}(V\tau_{ 2}){\bf c}_{0}, \tag{10}\] \[{\bf c}_{0}=\left(\begin{array}{c}c_{0,A}\\ c_{0,B}\end{array}\right)=\frac{1}{\sqrt{N_{c}}}\sum_{{\bf k}\in{\bf BZ}}\psi_{ \bf k} \tag{11}\] creates a Majorana fermion at the origin and \(V=4J_{z}\). We now set up the scattering problem in terms of Green's functions. The Green's function of the unscattered majoranas is \(G_{0}=G_{0}(i\omega_{n},{\bf k})\delta_{{\bf k},{\bf k}^{\prime}}\), where \[G_{0}(i\omega_{n},{\bf k})=[i\omega_{n}-\vec{\gamma}_{\bf k}\cdot\vec{\tau}]^{ -1}. \tag{12}\] In the presence of the bond-flip at the origin, the Green's function of the scattered majoranas is given by \(G=(G_{0}^{-1}-\hat{V})^{-1}\), where \(\hat{V}_{{\bf k},{\bf k}^{\prime}}=(V\tau_{2})/N_{c}\) is the scattering matrix. The total free energy of the non-interacting ground-state in the presence of the scattering is given by the standard formula \[\beta F=-\frac{1}{2}{\rm Tr}[\ln(-G^{-1})]=-\frac{1}{2}{\rm Tr}\ln[-G_{0}^{-1} +\hat{V}], \tag{13}\] where \({\rm Tr}\) denotes the full trace over Matsubara frequencies, momenta and sublattice degrees of freedom. The change in free energy is then given by \[\Delta F=-\frac{1}{2\beta}{\rm Tr}[\ln(1-\hat{V}G_{0})]=\frac{1}{2\beta}\sum_{ r=1}^{\infty}\frac{1}{r}{\rm Tr}\big{[}(\hat{V}G_{0})^{r}\big{]}. \tag{14}\] We now carry out the trace over the Matsubara frequencies and momenta, so that \[\Delta F=\frac{1}{2\beta}\sum_{i\omega_{n}}\sum_{r=1}^{\infty}\frac{1}{r}{\rm tr }\bigg{[}\bigg{(}\frac{V\tau_{2}}{N_{c}}\sum_{{\bf k}}G_{0}(i\omega_{n},{\bf k}) \bigg{)}^{r}\bigg{]}, \tag{15}\] where \({\rm tr}\big{[}\quad\big{]}\) denotes the residual trace over sublattice degrees of freedom. Now, we can incorporate the summations over momentum by introducing the local Green's function \[g(z)=\frac{1}{N_{c}}\sum_{{\bf k}\in{\bf BZ}}G_{0}(z,{\bf k}), \tag{16}\] Figure 2: Rearranged first Brillouin zone (**BZ**) constructed in the reciprocal lattice vector basis spanned by \({\bf b_{1}}\) and \({\bf b_{2}}\). so that \[\Delta F = \frac{1}{2\beta}\sum_{i\omega_{n}}\sum_{r=1}^{\infty}\frac{1}{r} \text{tr}\left[(V\tau_{2}g(i\omega_{n}))^{r}\right] \tag{17}\] \[= -\frac{1}{2\beta}\sum_{i\omega_{n}}\text{tr}[\ln(1-V\tau_{2}g(i \omega_{n}))],\] where we have re-assembled the Taylor series as a logarithm. We shall illustrate our method for the isotropic case \(J_{x}=J_{y}=J_{z}=J\), setting \(K=2J\) and \(V=4J\). In this case, \(\gamma_{\mathbf{k}}=iK(1+e^{ik_{1}}+e^{ik_{2}})\). If we divide \(\gamma_{\mathbf{k}}=i(\gamma_{c}(\mathbf{k})+i\gamma_{s}(\mathbf{k}))\) into its even and odd components \[\gamma_{c}(\mathbf{k})=K(1+\cos k_{1}+\cos k_{2}), \tag{18}\] then \(G_{0}(i\omega_{n})\) can be rewritten as \[g(z)=\frac{1}{N_{c}}\sum_{\mathbf{k}\in\mathbf{BZ}}\frac{z-(\gamma_{c}( \mathbf{k})\tau_{2}+\gamma_{s}(\mathbf{k})\tau_{\mathbf{1}})}{z^{2}-|\gamma_{ \mathbf{k}}|^{2}}. 
\tag{19}\] The odd component \(\gamma_{s}(\mathbf{k})\) vanishes under momentum summation so that \[1-\hat{V}g(z) = 1-\frac{2V}{N_{c}}\sum_{\mathbf{k}\in\mathbf{BZ}}\frac{z\tau_{2}-\gamma_{c}(\mathbf{k})}{z^{2}-|\gamma_{\mathbf{k}}|^{2}}\] \[= 1-V(\tau_{2}g_{0}(z)-\mathbb{I}_{2}g_{2}(z))\] where \[g_{0}(z) \equiv \frac{1}{N_{c}}\sum_{\mathbf{k}\in\mathbf{BZ}}\frac{z}{z^{2}-|\gamma_{\mathbf{k}}|^{2}},\] \[g_{2}(z) \equiv \frac{1}{N_{c}}\sum_{\mathbf{k}\in\mathbf{BZ}}\frac{\gamma_{c}(\mathbf{k})}{z^{2}-|\gamma_{\mathbf{k}}|^{2}}. \tag{21}\] Carrying out the trace in the free energy we then obtain \[\Delta F = -\frac{T}{2}\text{Tr}\big{[}\ln(1-\hat{V}G_{0})\big{]}\] \[= -\frac{T}{2}\sum_{i\omega_{n}}\ln\big{[}(1+Vg_{2}(i\omega_{n}))^{2}-(Vg_{0}(i\omega_{n}))^{2}\big{]}\,.\] The Matsubara summation can then be carried out as an anti-clockwise contour integral around the imaginary axis weighted by the Fermi function, \(f(z)=[e^{\beta z}+1]^{-1}\). Deforming the contour to run clockwise around the real axis we obtain \[\Delta F=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\left(\frac{1}{2}-f(\omega)\right)\delta_{v}(\omega), \tag{23}\] where \[\delta_{v}(\omega)=\text{Im }\ln\!\left[(1+2Kg_{2}(z))^{2}-(2Kg_{0}(z))^{2}\right]_{z=\omega-i\delta} \tag{24}\] is identified as the scattering phase shift. Note that \(\delta_{v}(\omega)=-\delta_{v}(-\omega)\) is an antisymmetric function of frequency. At zero temperature the vison gap is then \[\Delta_{v}=-K\int_{-\infty}^{0}\frac{dx}{2\pi}\ \text{Im}\ln\!\left[(1+2g_{2}(z))^{2}-(2g_{0}(z))^{2}\right]_{z=x-i\delta} \tag{25}\] where we have rescaled the frequency in units of \(K\), setting \(z=\omega/K\). In the reciprocal basis \[g_{0}(z) = \int_{0}^{2\pi}\frac{dk_{1}}{2\pi}\int_{0}^{2\pi}\frac{dk_{2}}{2\pi}\frac{z}{z^{2}-|\gamma_{\mathbf{k}}|^{2}}, \tag{26}\] \[g_{2}(z) = \int_{0}^{2\pi}\frac{dk_{1}}{2\pi}\int_{0}^{2\pi}\frac{dk_{2}}{2\pi}\frac{\gamma_{c}(\mathbf{k})}{z^{2}-|\gamma_{\mathbf{k}}|^{2}},\] where we have set \(K=1\) in \(\gamma(\mathbf{k})\), i.e., \(\gamma_{\mathbf{k}}=1+e^{ik_{1}}+e^{ik_{2}}\) and \(\gamma_{c}=1+\cos(k_{1})+\cos(k_{2})\). Figure 3: Real and imaginary parts of (a) \(g_{0}(\omega)\) and (b) \(g_{2}(\omega)\) as defined in equations (21), (27) and (28). The interior integral over \(k_{2}\) can be carried out as a complex contour integral over \(w=e^{ik_{2}}\) around the unit circle (Appendix A), giving \[g_{0}(z)=\int_{0}^{2\pi}\frac{dk}{2\pi}\frac{z}{(z^{2}-(3+2c))\sqrt{1-\frac{8(c+1)}{(z^{2}-(3+2c))^{2}}}}, \tag{27}\] \[g_{2}(z)=\int_{0}^{2\pi}\frac{dk}{2\pi}\frac{2c+1}{(z^{2}-(3+2c))\sqrt{1-\frac{8(c+1)}{(z^{2}-(3+2c))^{2}}}}, \tag{28}\] where \(c\equiv\cos(k)\). These integrals were evaluated numerically to obtain the phase shift \(\delta_{v}(\omega)\) (Fig. 4). The phase shift was interpolated over a discrete set of \(N\) points and the integral (25) was carried out numerically on the interpolated phase shift. By extrapolating the limit \(1/N\to 0\), we find the vison gap energy to be \(\Delta_{v}=0.1311656(3)K=0.263313(6)J\) for the isotropic case \(J_{x}=J_{y}=J_{z}=J\). This analytically-based calculation improves on the earlier result obtained via numerical diagonalization of finite-size systems [1], i.e. \(\Delta_{v}\approx 0.267J\). Its main virtue, however, is that the method can be easily generalized, and we gain insights from the calculated scattering phase shifts.
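For readers who wish to reproduce the numbers quoted above, the following rough numerical sketch (ours, not taken from the paper) implements Eqs. (21), (24) and (25) directly: \(g_{0}\) and \(g_{2}\) are computed as Brillouin-zone averages at \(z=\omega-i\eta\), the phase shift is taken from the principal branch of the logarithm, and Eq. (25) is integrated with a simple trapezoidal rule. The grid size, broadening \(\eta\), and frequency window are ad hoc choices, and the principal-branch treatment of \(\mathrm{Im}\,\ln\) is an assumption, so the output only approximates the quoted \(\Delta_{v}=0.1312K=0.2633J\).

```python
import numpy as np

# Brillouin-zone grid in the reciprocal basis of Eq. (26); units with K = 2J = 1.
nk = 400
k = np.linspace(0.0, 2.0*np.pi, nk, endpoint=False)
K1, K2 = np.meshgrid(k, k, indexing="ij")
gamma_sq = np.abs(1.0 + np.exp(1j*K1) + np.exp(1j*K2))**2     # |gamma_k|^2
gamma_c = 1.0 + np.cos(K1) + np.cos(K2)                        # even part, Eq. (18)
eta = 5e-3                                                     # broadening: z = omega - i*eta

def phase_shift(omega):
    z = omega - 1j*eta
    d = z**2 - gamma_sq
    g0 = np.mean(z/d)                                          # Eq. (21) as a BZ average
    g2 = np.mean(gamma_c/d)
    return np.angle((1.0 + 2.0*g2)**2 - (2.0*g0)**2)           # Eq. (24) with V = 2K

# Vison gap, Eq. (25): integrate the phase shift over negative frequencies (band edge at 3K).
omegas = np.linspace(-4.0, 0.0, 1601)
delta = np.array([phase_shift(w) for w in omegas])
dw = omegas[1] - omegas[0]
gap_K = -np.sum(0.5*(delta[1:] + delta[:-1]))*dw/(2.0*np.pi)   # trapezoidal rule
print(f"Delta_v ~ {gap_K:.4f} K ~ {2.0*gap_K:.4f} J")          # paper: 0.1312 K = 0.2633 J
```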
From the calculated phase shift, we can calculate the change in density of states (DOS) \[\Delta\rho(\omega)=\frac{1}{2\pi}\frac{d\delta_{v}}{d\omega} \tag{29}\] (Fig. 4c) associated with a bond flip, which is seen to contain a resonance centered around \(\epsilon_{0}\approx\pm 0.07K\). This resonance can be examined in detail by expanding \(g_{0}(z)\) and \(g_{2}(z)\) for small \(z\): \[\begin{split} g_{0}(\omega)&=\frac{\omega}{\sqrt{3}\pi}\ln\left(\frac{3}{|\omega|}\right)+i\frac{|\omega|}{\sqrt{3}}\\ g_{2}(\omega)&=-\frac{2}{3}-\frac{\omega^{2}}{3\sqrt{3}\pi}\left[\ln\left(\frac{3}{|\omega|}\right)+i\pi\text{sign }\omega\right]\end{split} \tag{30}\] which can be used to evaluate the scattering phase shift \(\delta_{v}(\omega)\) (24) and the resonant DOS change \(\Delta\rho(\omega)\) (29) analytically. The position of the resonance is determined by the integration over the entire band but its width is determined by the density of states at low energies. Since the DOS vanishes inside the spectral gap, the resonance may become sharp in the topological state. The sharp peak in the gapped state signifies the binding of Majorana fermions to the visons formed by the \(\mathbb{Z}_{2}\) bond flip at the origin. Figure 4: (a) The scattering phase shift \(\delta_{v}(\omega)\) associated with the creation of two adjacent visons, as a function of frequency \(\omega\) in units of \(K\). (b) Scattering phase shift \(\delta_{v}(\omega)\) on an expanded scale, showing the inflection point at the origin. (c) Resonance in the scattering density of states around \(\epsilon_{0}=\pm 0.07K\) in the density-of-states change \(\Delta\rho(\omega)\) due to the bond-flip potential, as a function of frequency \(\omega\) in units of \(K\). This resonance may become sharp in the gapped topological state, signifying vison bound-states. ## III Discussion In this work we have presented an analytical method for the determination of the vison gap by treating the flipping of the \(\mathbb{Z}_{2}\) gauge field as a scattering potential for the Majorana fermions. In this way, we have been able to analytically extend the numerical treatment by Kitaev for the isotropic model on the honeycomb lattice [1] to obtain an analytic result for the vison gap energy \(\Delta_{v}\). A key part of our approach is the calculation of the Majorana phase shift for scattering off the bond-flipped configuration. One of the interesting observations is that the scattering contains a Majorana bound-state resonance, located at an energy \(\epsilon_{0}\approx\pm 0.07K\). Since this bound-state is formed from scattering throughout the entire Brillouin zone, its location is expected to be quite robust. Thus in those cases where the excitation spectrum acquires a gap, e.g. through time-reversal symmetry breaking [1; 39], we expect this resonance to transform into a sharp in-gap excitation. While it is possible to extend our method to analytically calculate the energy associated with anyons by flipping \(x-x\) bonds along the \(\mathbf{a_{1}}\) direction, a much simpler derivation of the anyon energy in the KSL can be made by making two copies of the KSL, forming a complex fermion Hamiltonian \(H_{c}=H_{KSL}+H_{KSL}\). The line of reversed bonds around the torus can then be absorbed by a unitary transformation that redistributes the odd boundary condition into an effective vector potential that shifts all the momenta \(\mathbf{k}=(k_{1},k_{2})\rightarrow(k_{1}+\frac{\pi}{L},k_{2})\), equivalent to introducing a half magnetic flux with vector potential \(A_{x}=\frac{\pi}{L}\).
Treating the response to the vector potential in an analogous fashion to a superconductor, the putative energy cost of an anyon would be \[\Delta E=\int d^{2}x\frac{\rho_{s}}{4}A_{x}^{2}=\rho_{s}\frac{\pi^{2}}{4}, \tag{31}\] where \(\rho_{s}\) is the superfluid stiffness associated with the ground-state, \(A=\pi/L\) is the vector potential and the factor of 4 derives from halving the energy of the complex fermion system. However, since the complex fermion Hamiltonian \(H_{c}\) preserves the global \(U(1)\) symmetry, its superfluid stiffness \(\rho_{s}\) vanishes, so it costs no energy to create anyons in the gapless state. From this line of reasoning, we can see that the ground state of the Kitaev spin liquid has a four-fold degeneracy and is topologically ordered. Finally, we note that our method also admits various generalizations. For example, it can be extended to anisotropic couplings, i.e. \(J_{x}\neq J_{y}\neq J_{z}\), as well as to higher dimensions, such as the three-dimensional hyperoctagonal lattice. Moreover, our method can be applied to study the impact of spinor order formation as a consequence of hybridization between conduction electrons and Majorana spinons in the CPT model for a Kondo lattice coupled to a Yao-Lee spin liquid [27; 28]. This allows us to study the stability of the Yao-Lee spin liquid against spinor order formation, the subject of a forthcoming article by the authors. ###### Acknowledgements. This work was supported by the Office of Basic Energy Sciences, Material Sciences and Engineering Division, U.S. Department of Energy (DOE) under Contracts No. DE-SC0012704 (AMT) and DE-FG02-99ER45790 (AP and PC). All authors contributed equally to this work. ## Appendix A Analytic Calculation of Green's Function in Honeycomb Lattice Here we show how to simplify the integrals \[\begin{split} g_{0}(z)&=\int_{0}^{2\pi}\frac{dk_{1}}{2\pi}\int_{0}^{2\pi}\frac{dk_{2}}{2\pi}\frac{z}{z^{2}-|\gamma_{\mathbf{k}}|^{2}},\\ g_{2}(z)&=\int_{0}^{2\pi}\frac{dk_{1}}{2\pi}\int_{0}^{2\pi}\frac{dk_{2}}{2\pi}\frac{\gamma_{c}(\mathbf{k})}{z^{2}-|\gamma_{\mathbf{k}}|^{2}},\end{split} \tag{32}\] where \(\gamma_{c}(\mathbf{k})=1+\cos(k_{1})+\cos(k_{2})\), using a contour integral. We begin by noting that the integrals over \(k_{1}\) and \(k_{2}\) can be carried out in either order, allowing us to pull the cosines in \(\gamma_{c}(k)\) out of the inner integral, so that \[g_{0}(z)=\int_{0}^{2\pi}\frac{dk_{1}}{2\pi}zI_{0}(z,k_{1}),\] \[g_{2}(z)=\int_{0}^{2\pi}\frac{dk_{1}}{2\pi}(1+2\cos k_{1})I_{0}(z,k_{1}), \tag{33}\] where \[I_{0}(z,k)=\int_{0}^{2\pi}\frac{dk_{2}}{2\pi}\frac{1}{z^{2}-|\gamma_{\mathbf{k}}|^{2}}. \tag{34}\] Figure 5: Schematic illustration of the resonance in the density of states for the gapless Kitaev spin liquid. The resonance is expected to become sharp when a gap opens in the bulk density of states, forming a fermionic bound-state at the vison pair. Figure 6: The hexagonal lattice of the Kitaev spin liquid is embedded on a torus by the application of periodic boundary conditions. An anyon is formed by flipping the bonds along a non-contractible loop that encircles the torus. Writing \(s=e^{ik_{1}}\) and \(w=e^{ik_{2}}\), we can rewrite \(I_{0}\) as a counter-clockwise integral around the unit circle \(|w|=1\), \[I_{0}(z,k)\equiv I_{0}(z,s)=\oint\limits_{|w|=1}\frac{dw}{2\pi iw}\frac{1}{z^{2}-|\gamma(s,w)|^{2}}.
\tag{10}\] Rewriting the denominator as a quadratic function of \(w\), \[\begin{split} z^{2}-|\gamma(s,w)|^{2}&=z^{2}-(1+s+ w)(1+\frac{1}{s}+\frac{1}{w})\\ &=-\frac{(1+s)}{sw}(w^{2}+wb+s),\end{split} \tag{11}\] where \[b=\frac{1+3s+s^{2}-sz^{2}}{(1+s)}. \tag{12}\] We can thus write the integral in the form \[I_{0}(z,s)=-\frac{s}{1+s}\oint\frac{dw}{2\pi i}\frac{1}{(w-w_{+})(w-w_{-})} \tag{13}\] where \[w_{\pm}=-\frac{b}{2}\pm\sqrt{\left(\frac{b}{2}\right)^{2}-s} \tag{14}\] are the poles of the integrand. Now since \(w_{+}w_{-}=s=e^{ik_{1}}\), it follows that \(|w_{+}w_{-}|=1\), so that only one of these poles lies inside the contour. (In general, this may depend on the way we treat the branch cuts inside the square root of (14). However, we don't actually need to know which pole it is, as this we will fix the sign and the branch-cuts in the final expression by demanding that the asymptotic behavior of \(I_{0}\sim 1/z^{2}\) is analytic at large \(z\).) Lets assume that the pole closest to the origin is at \(w=w_{-}\), then we obtain \[I_{0}(z,s)=\frac{s}{1+s}\frac{1}{w_{+}-w_{-}}=\frac{s}{1+s}\frac{1}{\sqrt{b^{2 }-4s}}. \tag{15}\] Now expanding the denominator, we have \[\begin{split}(1+s)\sqrt{b^{2}-4s}&=\sqrt{(1+3s+s^ {2}-sz^{2})^{2}-4s(1+s)^{2}}\\ &=s\sqrt{(3+2\cos k_{1}-z^{2})^{2}-8(\cos k+1)}\\ &=s(z^{2}-(3+2\cos k_{1}))\sqrt{1-\frac{8(\cos k+1)}{(z^{2}-(3+2 \cos k_{1}))^{2}}},\end{split} \tag{16}\] where we have factorized the final expression, to guarantee that at large \(z\), \(I_{0}(z,s)\sim 1/z^{2}\) is analytic. Combining the above results, gives us the following expressions for \(g_{0}(z)\) and \(g_{2}(z)\) \[\begin{split} g_{0}(z)&=\int_{0}^{2\pi}\frac{dk}{2 \pi}\frac{z}{(z^{2}-(3+2c))\sqrt{1-\frac{8(c+1)}{(z^{2}-(3+2c))^{2}}}}\\ g_{2}(z)&=\int_{0}^{2\pi}\frac{dk}{2\pi}\frac{2c+1 }{(z^{2}-(3+2c))\sqrt{1-\frac{8(c+1)}{(z^{2}-(3+2c))^{2}}}}\end{split} \tag{17}\] Where \(c\equiv\cos(k)\), which are the expressions given in (27) and (28).
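As a quick sanity check on this reduction (our own sketch, not part of the paper), the direct inner integral of Eq. (32) can be compared with the closed form entering Eqs. (27) and (28) at a real frequency above the band top, where the square root is real and no branch-cut subtleties arise; the test values of \(z\) and \(k_{1}\) below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

z, k1 = 4.0, 0.7          # arbitrary test point with z^2 above the band top (|gamma_k|^2 <= 9)

# Direct inner integral of Eq. (32): (1/2pi) * integral of dk2 / (z^2 - |gamma_k|^2)
def integrand(k2):
    g = 1.0 + np.exp(1j*k1) + np.exp(1j*k2)
    return 1.0/(z**2 - abs(g)**2)
direct = quad(integrand, 0.0, 2.0*np.pi)[0]/(2.0*np.pi)

# Closed form appearing in Eqs. (27)-(28)
c = np.cos(k1)
a = z**2 - (3.0 + 2.0*c)
closed = 1.0/(a*np.sqrt(1.0 - 8.0*(c + 1.0)/a**2))

print(direct, closed)     # the two values agree
```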
2303.00087
Coupled cluster downfolding techniques: a review of existing applications in classical and quantum computing for chemical systems
In this manuscript, we provide an overview of the recent developments of the coupled cluster (CC) downfolding methods, where the ground-state problem of a quantum system is represented through effective/downfolded Hamiltonians defined using active spaces. All CC downfolding techniques discussed here are derived from a single-reference exponential ansatz for the ground-state problem. We discuss several extensions of the non-Hermitian and Hermitian downfolding approaches to the time domain and the so-called quantum flows. We emphasize the important role of downfolding formalisms in transitioning chemical applications from noisy quantum devices to scalable and error-corrected quantum computers.
Nicholas P. Bauman, Bo Peng, Karol Kowalski
2023-02-28T21:16:26Z
http://arxiv.org/abs/2303.00087v1
Coupled cluster downfolding techniques: a review of existing applications in classical and quantum computing for chemical systems ###### Abstract In this manuscript, we provide an overview of the recent developments of the coupled cluster (CC) downfolding methods, where the ground-state problem of a quantum system is represented through effective/downfolded Hamiltonians defined using active spaces. All CC downfolding techniques discussed here are derived from a single-reference exponential ansatz for the ground-state problem. We discuss several extensions of the non-Hermitian and Hermitian downfolding approaches to the time domain and the so-called quantum flows. We emphasize the important role of downfolding formalisms in transitioning chemical applications from noisy quantum devices to scalable and error-corrected quantum computers. ## I Introduction The coupled cluster (CC) theory [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11] has evolved into one of the most accurate formulations to describe the correlation effects in chemistry [9; 10; 11], material sciences, and physics [6; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Although the CC formalism originates in the Linked Cluster Theorem [23; 24], it has been successfully extended to describe excited states, properties, and time evolution of the system [6; 7; 8; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. Over the last few decades, a significant effort has been exerted to address the steep scaling of canonical CC formulations and apply them to realistic chemical processes. Parallel computing, especially with recently developed exascale computing architectures, has extended the applicability of conventional CC methods, but only modestly before encountering prohibitive costs once more. As a result, there has been much development in recent years on new reduced-scaling approaches for classical and quantum computing paradigms to push the envelope of the system sizes tractable by CC formalisms. Mathematically rigorous formulations for reducing the dimensionality/cost of quantum formulations are urgently needed to shift the envelope of system-size tractable by accurate many-body formulations in chemistry, material sciences, and physics. Among the most successful formulations, one should mention local coupled cluster (CC) formulations, various partitioning and incremental schemes, and embedding methods [36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. These approaches are driven by various design principles from the locality of correlation effects in the wave function approaches to properties of self-energy in correlated systems. Thanks to these formulations, significant progress has been achieved in describing correlation effects in large molecular systems allowing for simulations based on the utilization of modest computational resources. The dimensionality reduction techniques also play a crucial role in enabling the early stages of quantum computing driven by noisy intermediate-scale quantum devices (NISQ). This is associated with the reduction of the qubits required to represent the quantum problem of interest. As an illustration, one should mention several techniques developed to take full advantage of the ubiquitous Variational Quantum Eigensolvers (VQE) approach [46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63] in addressing problems beyond the situation where few electrons are correlated. 
In the context of the development of quantum algorithms for quantum chemistry, the main goal of dimensionality reduction methods is to provide a mathematically rigorous way of representing interdependencies between static and dynamical correlation effects. However, while the inclusion of static effects can be achieved for small-size systems on currently available quantum hardware, much-needed dynamical correlation effects, usually manifesting in a large number of fermionic degrees of freedom (amplitudes) characterized by small values, are beyond the reach of current quantum technologies. The recently introduced downfolding techniques based on the double unitary coupled cluster Ansatz (DUCC) [64; 65; 66; 67; 68; 69; 70; 71] provide one of the solutions to the above-mentioned problem. The DUCC formalism offers a special representation of the ground-state wave function that, in analogy to single-reference sub-system embedding sub-algebras (SES-CC) [72; 73], allows one to construct effective Hamiltonians that integrate out all out-of-active-space degrees of freedom usually identified with dynamical amplitudes. Although the effective Hamiltonian formulations have a long history in quantum chemistry and physics, especially in dealing with strong correlation effects, there are notable distinct features of the DUCC and SES-CC formalisms: (1) both formulations are embedded in the single-reference language, employing a straightforward definition of the excitation domain (i.e., wave function parameters) in the vein of single-reference formulations, and (2) the possibility of describing a quantum problem in the form of quantum flows, i.e., coupled small-dimensionality eigenvalue problems. In this way, one can probe large sub-spaces of the Hilbert space without unrealistic quantum resource demands. Since the eigenvalue problems involved in the quantum flow represent physically well-defined problems (defined by the corresponding effective Hamiltonians and density matrices), the quantum flow formulation naturally lends itself to capturing possible sparsity characterizing the quantum system. This paper provides a compact overview of the main development threads originating in the single-reference SES-CC formulation (Section 2.1) and its unitary extension (Section 2.2). Section 3 introduces and discusses the basic tenets of quantum flows. The extension of the CC downfolding methods to the time domain and Green's function formalism is discussed in Sections 4 and 5. Finally, Section 6 discusses applications of the downfolding formalisms. ## II Theory The SES-CC and DUCC formulations have been amply discussed in recent papers (see Refs. [64; 70; 72]). Here we overview only the salient features of these approaches. While the SES-CC technique forms the basis for non-Hermitian downfolding, the DUCC expansions provide its Hermitian formulations. In both cases, the ensuing downfolding procedures are encoded in the properties of exponential ansatzes for the ground-state wave functions \(|\Psi\rangle\): \[|\Psi\rangle=e^{T}|\Phi\rangle\, \tag{1}\] for the non-Hermitian formalism given by the standard SR-CC expansion, and \[|\Psi\rangle=e^{\sigma_{\rm ext}}e^{\sigma_{\rm int}}|\Phi\rangle\, \tag{2}\] for the Hermitian downfolding defined by the DUCC Ansatz.
In these equations, \(|\Phi\rangle\) is the so-called reference function usually identified with the Hartree-Fock determinant, \(T\) is the SR-CC cluster operator, and \(\sigma_{\rm ext}\) and \(\sigma_{\rm int}\) are the anti-Hermitian external and internal cluster operators (_vide infra_). Both types of downfolding lead to many-body forms of effective or downfolded Hamiltonians acting in the appropriate active spaces. Although effective Hamiltonian formulations have a long history in electronic structure theory, especially in treating strong correlation effects, the present methods have several unique features compared to the multi-reference effective Hamiltonian approaches. Among the most distinct, one should mention: (1) the possibility of developing effective Hamiltonian formalisms using a very simple single-reference language to define the manifold of excitations used to construct downfolded Hamiltonians and (2) the concept of the quantum flows (QF), which boils down to coupling downfolding procedures corresponding to various active spaces. The former formalism allows for sampling large sub-spaces of the Hilbert space using reduced-dimensionality eigenvalue problems. The QF formalism is not only a convenient representation of appropriate many-body formulations in the form of numerically feasible computational blocks, which plays a crucial role in the early stages of quantum computing development but also provides a natural environment for capturing the sparsity characterizing correlated systems. ### Non-Hermitian CC Downfolding Active spaces play a central role in the development of CC downfolding techniques and are defined by the subset \(R\) of occupied active orbitals (\(R=\{R_{i},\ i=1,\ldots,x_{R}\}\)) and a subset \(S\) of active virtual orbitals (\(S=\{S_{i},\ i=1,\ldots,y_{s}\}\)). Using many-body language, the excited Slater determinants spanning the active space (along with the reference function \(|\Phi\rangle\)) can be generated by generators \(E^{\sigma_{l}}_{i_{l}}=a^{\sigma_{l}}_{i_{l}}\) (\(l_{i}\in R\) and \(a_{l}\in S\)) acting on the reference function \(|\Phi\rangle\). These generators define the so-called \(\mathfrak{g}^{(N)}(R,S)\) sub-algebra. Due to the utilization the particle-hole formalism all generators \(E^{\sigma_{l}}_{i_{l}}\) commute and the \(\mathfrak{g}^{(N)}(R,S)\) is commutative. For the sake of the following analysis, it is convenient to characterize various types of sub-algebras \(\mathfrak{g}^{(N)}(R,S)\) by specifying the numbers of active occupied (\(x_{R}\)) and active virtual (\(y_{S}\)) orbitals, namely, \(\mathfrak{g}^{(N)}(x_{R},y_{S})\). As shown in Refs. [65; 72; 74], each sub-algebra \(\mathfrak{h}=\mathfrak{g}^{(N)}(R,S)\) induces partitioning of the cluster operator \(T\): \[T=T_{\rm int}(\mathfrak{h})+T_{\rm ext}(\mathfrak{h})\, \tag{3}\] where \(T_{\rm int}(\mathfrak{h})\) belongs to \(\mathfrak{h}\) while \(T_{\rm ext}(\mathfrak{h})\) does no belong to \(\mathfrak{h}\). If the expansion \(e^{T_{\rm int}(\mathfrak{h})}|\Phi\rangle\) produces all Slater determinants of the same symmetry as the \(|\Phi\rangle\) state) in the active space, we call \(\mathfrak{h}\) the _sub-system embedding sub-algebra_ for the CC formulation defined by the \(T\) operator. In Ref. [72], we showed that each standard CC approximation has its own class of SESs. 
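To make the partitioning of Eq. (3) concrete, the toy enumeration below (ours, not from the paper) classifies CCSD-type amplitudes for a small, hypothetical set of spatial orbitals: an excitation is assigned to \(T_{\rm int}(\mathfrak{h})\) when all of its occupied labels lie in \(R\) and all of its virtual labels lie in \(S\), and to \(T_{\rm ext}(\mathfrak{h})\) otherwise. Spin and the actual amplitude values are ignored; only the counting logic is illustrated.

```python
from itertools import combinations

# Toy orbital space: occupied {0,1,2}, virtual {3,4,5,6}; R and S define h = g(R,S).
occ, virt = [0, 1, 2], [3, 4, 5, 6]
R, S = {2}, {3, 4}                      # hypothetical active occupied / active virtual orbitals

def is_internal(o_idx, v_idx):
    """An excitation belongs to T_int(h) iff all of its labels are active."""
    return set(o_idx) <= R and set(v_idx) <= S

singles = [((i,), (a,)) for i in occ for a in virt]
doubles = [(ij, ab) for ij in combinations(occ, 2) for ab in combinations(virt, 2)]

t_int = [x for x in singles + doubles if is_internal(*x)]
t_ext = [x for x in singles + doubles if not is_internal(*x)]
print(len(t_int), "internal and", len(t_ext), "external CCSD-type amplitudes")
```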
The existence of the SESs for standard CC approximations provides alternative ways for calculating CC energies which, in contrast to the standard CC energy expression \[E=\langle\Phi|e^{-T}He^{T}|\Phi\rangle\, \tag{4}\] can be obtained as an eigenvalue of the active-space non-Hermitian eigenproblem \[H^{\rm eff}(\mathfrak{h})e^{T_{\rm int}(\mathfrak{h})}|\Phi\rangle=Ee^{T_{\rm int}(\mathfrak{h})}|\Phi\rangle, \tag{5}\] where \[H^{\rm eff}(\mathfrak{h})=(P+Q_{\rm int}(\mathfrak{h}))\bar{H}_{\rm ext}(\mathfrak{h})(P+Q_{\rm int}(\mathfrak{h})) \tag{6}\] and \[\bar{H}_{\rm ext}(\mathfrak{h})=e^{-T_{\rm ext}(\mathfrak{h})}He^{T_{\rm ext}(\mathfrak{h})}. \tag{7}\] The above result is known as the _SES-CC Theorem_. In Eq. (6), \(P\) stands for the projection operator onto the reference function and \(Q_{\rm int}(\mathfrak{h})\) is a projection operator onto all excited Slater determinants with respect to the reference function \(|\Phi\rangle\) that correspond to \(\mathfrak{h}\). One should also mention that the standard energy expression given by Eq. (4) can be reproduced from Eq. (5) when \(\mathfrak{h}\) represents the simplest case when there are no generators (both sets \(R\) and \(S\) are empty). When the SES-CC Theorem is applied to the exact cluster operator corresponding to the full coupled cluster approach, the lowest eigenvalues of the effective Hamiltonians correspond to the exact ground-state full configuration interaction (FCI) energy. Standard CC approximations (such as the CCSD, CCSDT, CCSDTQ, etc. methods) are characterized by specific classes of SESs. For example, typical CCSD SESs are \(\mathfrak{g}^{(N)}(1_{R},y_{S})\) or \(\mathfrak{g}^{(N)}(x_{R},1_{S})\) sub-algebras. For the CCSDTQ approach, corresponding SESs are of the \(\mathfrak{g}^{(N)}(2_{R},y_{S})\) or \(\mathfrak{g}^{(N)}(x_{R},2_{S})\) form. From these definitions, it is easy to see that SESs of lower-rank CC approximations are also SESs for higher-rank approaches. For example, the CCSD SESs \(\mathfrak{g}^{(N)}(1_{R},y_{S})\) are SESs for the CCSDTQ formalism. The SES-CC Theorem is flexible in the choice of active spaces, providing a number of alternative ways for calculating CC energies using effective Hamiltonians corresponding to various SESs. For example, for the CCSD approximation with a fixed molecular orbital basis, where the SES \(\mathfrak{g}^{(N)}(R,S)\) defined at the orbital level [73] contains either one occupied active orbital or one virtual active orbital, the number of different SESs \(S_{\text{CCSD}}\) and corresponding effective Hamiltonians that upon diagonalization reproduce the standard CCSD energy is \[S_{\text{CCSD}}=n_{o}(2^{n_{v}}-1)+n_{v}(2^{n_{o}}-1)-n_{o}n_{v}\,. \tag{8}\] This formula is a consequence of binomial expansion, in which \(k\) active virtual/occupied orbitals can be chosen in \({n_{v}\choose k}/{n_{o}\choose k}\) different ways, where \(n_{o}\) and \(n_{v}\) stand for the numbers of correlated occupied and virtual orbitals, respectively. The validity of the SES-CC Theorem has recently been confirmed numerically for several benchmark systems. In addition to the standard spatial orbital-based definition of SES-generated active spaces, it was shown that the SES-CC Theorem also holds for the active spaces defined by a non-trivial number of active spin orbitals. In the extreme case, we demonstrated that the SES-CC Theorem is also satisfied for the active space describing one active electron "correlated" in two active \(\alpha\)-type spin-orbitals [73].
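As a small illustration of Eq. (8) (ours, not from the paper), the number of distinct CCSD-type SESs grows very quickly with the size of the orbital space; the hypothetical \((n_{o},n_{v})\) values below are arbitrary.

```python
def count_ccsd_ses(n_o: int, n_v: int) -> int:
    """Number of distinct CCSD sub-system embedding sub-algebras, Eq. (8)."""
    return n_o*(2**n_v - 1) + n_v*(2**n_o - 1) - n_o*n_v

for n_o, n_v in [(2, 4), (5, 20), (10, 50)]:
    print(f"n_o={n_o:2d}, n_v={n_v:2d}: S_CCSD = {count_ccsd_ses(n_o, n_v)}")
```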
### Hermitian CC Downfolding The Hermitian form of the downfolded Hamiltonian is obtained as a consequence of utilizing the DUCC representation of the wave function [64; 65] \[|\Psi\rangle=e^{\sigma_{\text{ext}}(\mathfrak{h})}e^{\sigma_{\text{int}}(\mathfrak{h})}|\Phi\rangle\;, \tag{9}\] where \(\sigma_{\text{ext}}(\mathfrak{h})\) and \(\sigma_{\text{int}}(\mathfrak{h})\) are general-type anti-Hermitian operators \[\sigma_{\text{int}}^{\dagger}(\mathfrak{h}) =-\sigma_{\text{int}}(\mathfrak{h})\;, \tag{10}\] \[\sigma_{\text{ext}}^{\dagger}(\mathfrak{h}) =-\sigma_{\text{ext}}(\mathfrak{h})\;. \tag{11}\] In analogy to the non-Hermitian case, the \(\sigma_{\text{int}}(\mathfrak{h})\) operator is defined by parameters carrying only active spin-orbital labels and the \(\sigma_{\text{ext}}(\mathfrak{h})\) operators are defined by parameters with at least one inactive spin-orbital label. The use of the DUCC Ansatz (9), in analogy to the SES-CC case, leads to an alternative way of determining the energy, which can be obtained by solving the active-space Hermitian eigenvalue problem: \[H^{\text{eff}}(\mathfrak{h})e^{\sigma_{\text{int}}(\mathfrak{h})}|\Phi\rangle=Ee^{\sigma_{\text{int}}(\mathfrak{h})}|\Phi\rangle, \tag{12}\] where \[H^{\text{eff}}(\mathfrak{h})=(P+Q_{\text{int}}(\mathfrak{h}))\bar{H}_{\text{ext}}(\mathfrak{h})(P+Q_{\text{int}}(\mathfrak{h})) \tag{13}\] and \[\bar{H}_{\text{ext}}(\mathfrak{h})=e^{-\sigma_{\text{ext}}(\mathfrak{h})}He^{\sigma_{\text{ext}}(\mathfrak{h})}\;. \tag{14}\] When the external cluster amplitudes are known (or can be effectively approximated), the energy (or its approximation) can be calculated by diagonalizing the Hermitian effective/downfolded Hamiltonian (13) in the active space using various quantum or classical diagonalizers. For quantum computing applications, a second-quantized representation of \(H^{\text{eff}}(\mathfrak{h})\) is required. In light of the non-commuting character of the components defining the \(\sigma_{\text{ext}}(\mathfrak{h})\) operator, one has to rely on finite-rank commutator expansions, i.e., \[H^{\text{eff}}(\mathfrak{h})\simeq(P+Q_{\text{int}}(\mathfrak{h}))(H+\sum_{i=1}^{l}\frac{1}{i!}[\ldots[H,\sigma_{\text{ext}}(\mathfrak{h})],\ldots,\sigma_{\text{ext}}(\mathfrak{h})]_{i})(P+Q_{\text{int}}(\mathfrak{h}))\;. \tag{15}\] Due to the numerical costs associated with the contractions of multi-dimensional tensors and the rapidly expanding number of terms in this expansion, only approximations based on the inclusion of low-rank commutators are feasible. In recent studies, approximations based on single, double, and part of triple commutators were explored, where one- and two-body interactions were retained in the second-quantized form of \(H^{\text{eff}}(\mathfrak{h})\). In practical applications, one also has to determine the approximate form of \(\sigma_{\text{ext}}(\mathfrak{h})\). For practical reasons, we used the following approximation \[\sigma_{\text{ext}}(\mathfrak{h})\simeq T_{\text{ext}}(\mathfrak{h})-T_{\text{ext}}(\mathfrak{h})^{\dagger}\;, \tag{16}\] where \(T_{\text{ext}}\) was defined through the external parts of the \(T_{1}\) and \(T_{2}\) operators obtained in CCSD calculations.
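The algebraic structure of the truncated expansion in Eq. (15) can be mimicked with plain matrices. In the sketch below (ours; random Hermitian and anti-Hermitian matrices merely stand in for \(H\) and \(\sigma_{\rm ext}(\mathfrak{h})\), so no actual downfolded Hamiltonian is produced), the exact similarity transform of Eq. (14) is compared with the commutator series truncated at increasing rank.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 8
H = rng.normal(size=(n, n)); H = H + H.T                 # toy Hermitian "Hamiltonian"
X = 0.05*rng.normal(size=(n, n)); sig = X - X.T          # toy anti-Hermitian "sigma_ext"

exact = expm(-sig) @ H @ expm(sig)                       # Eq. (14) in matrix form

approx, term = H.copy(), H.copy()
for i in range(1, 5):
    term = (term @ sig - sig @ term)/i                   # builds (1/i!)[...[H, sig], ..., sig]_i
    approx = approx + term                               # truncated expansion, Eq. (15)
    print(i, np.linalg.norm(exact - approx))             # error drops rapidly for small ||sig||
```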
## III Quantum Flows ### Non-Hermitian CC Flows In the case of non-Hermitian downfolding, the SES-CC Theorem can be used to form computational frameworks (quantum flow) that integrate eigenvalue problems [70; 74] \[H^{\text{eff}}(\mathfrak{h}_{i})e^{T_{\text{int}}(\mathfrak{h}_{i})}|\Phi \rangle=Ee^{T_{\text{int}}(\mathfrak{h}_{i})}|\Phi\rangle\;(i=1,\ldots,M_{ \text{SES}})\;, \tag{17}\] where \(M_{\rm SES}\) is the total number of SESs or active space problems included in the flow. In Ref. [70; 74], we demonstrated that a problem defined in this way is equivalent (at the solution) to the standard CC equations with cluster operator defined as a combination of all _unique_ excitations included in \(T_{\rm int}(\mathfrak{h}_{i})\)\((i=1,\ldots,M_{\rm SES})\) operators, i.e., \[T=\bigcup_{i=1}^{M}T_{\rm int}(\mathfrak{h}_{i}) \tag{18}\] and \[Qe^{-T}He^{T}|\Phi\rangle=0\;, \tag{19}\] \[\langle\Phi|e^{-T}He^{T}|\Phi\rangle=E\;, \tag{20}\] where the \(Q\) operator is a projection operator onto a subspace of excited Slater determinants generated by the action of \(T\) operator of Eq. (18) onto the reference function. The discussed equivalence is known as the _Equivalence Theorem_. Initially, as discussed in Ref. [72], the quantum flows were introduced as a form of the invariance of the SES-CC Theorem upon separate rotations of occupied and virtual orbitals. Although the form given by Eqs. (19) and (20) is generally better suited in canonical calculations to take advantage of parallel computing architectures, the representation given by Eq. (17) is well-poised to capture a general type of the sparsity characterizing quantum systems. This is because Eq. (17) represent reduced-dimensionality computational blocks representing quantum problems defined by non-Hermitian Hamiltonians \(H^{\rm eff}(\mathfrak{h}_{i})\). In analogy to the bi-variational CC formulations [6] one can introduce the left eigenvectors of active-space Hamiltonians \(H^{\rm eff}(\mathfrak{h}_{i})\), using either CC-\(\Lambda\) \[\langle\Phi|(1+\Lambda_{\rm int}(\mathfrak{h}_{i}))\;\;(i=1,\ldots,M_{\rm SES })\;, \tag{21}\] or the extended CC formalism \[\langle\Phi|e^{S_{\rm int}(\mathfrak{h}_{i})}\;\;(i=1,\ldots,M_{\rm SES})\;, \tag{22}\] where \(\Lambda_{\rm int}(\mathfrak{h}_{i})\) and \(S_{\rm int}(\mathfrak{h}_{i})\) are de-excitation operators acting in the corresponding active spaces, to form one-particle reduced density matrices \(\gamma(\mathfrak{h}_{i})\). For the \(\Lambda\)-CC formalism the matrix elements of the \(\gamma(\mathfrak{h}_{i})\) are given by the formula \[\gamma^{p}_{q}(\mathfrak{h}_{i}) =\langle\Phi|(1+\Lambda_{\rm int}(\mathfrak{h}_{i}))a^{\dagger}_ {p}a_{q}e^{T_{\rm int}(\mathfrak{h}_{i})}|\Phi\rangle\;, \tag{23}\] \[\quad a^{\dagger}_{p}a_{q}\in\mathfrak{h}_{i}\;,\;\forall_{i=1, \ldots,M_{\rm SES}}\;. \tag{24}\] The above construct, in contrast to the existing local CC formulations where the one-particle reduced density matrices are postulated, allows one to introduce them in a natural way. A more detailed analysis of the local formulations stemming from the Equivalence Theorem is discussed in Refs. [70; 74]. This procedure can also be extended to systems driven by different types of interactions, such as in nuclear structure theory or quantum lattice models, where the extension of the standard local CC formulations as used in quantum chemistry may be less obvious. ### Hermitian CC Flows Using non-Hermitian formulation as a guide, the idea of quantum flow can be generalized to the DUCC case. 
We start our analysis by assuming that we would like to perform DUCC effective simulations for SES \(\mathfrak{h}\) problem expressed in Eq. (12) for an active space that is too big to be handled either in classical or quantum computing. We will assume that external amplitudes \(\sigma_{\rm ext}(\mathfrak{h})\) can be effectively approximated. For simplicity we will introduce a new DUCC Hermitian Hamiltonian \(A(\mathfrak{h})\) which is defined as \(H^{\rm eff}(\mathfrak{h})\) or its approximation in the \((P+Q_{\rm int}(\mathfrak{h}))\) space (in the simplest case it can be just the \((P+Q_{\rm int}(\mathfrak{h}))H(P+Q_{\rm int}(\mathfrak{h}))\) operator). We will denote \(A(\mathfrak{h})\) simply by \(A\). Next, we assume that excitations in \(\mathfrak{h}\) that are relevant to the state of interest can be captured by excitation sub-algebras: \(\mathfrak{h}_{1}\), \(\mathfrak{h}_{2}\),..., \(\mathfrak{h}_{M}\), where, in analogy to the SR-CC case, we admit the possibility of "sharing" excitations/de-excitations between these sub-algebras. We also assume that the number of excitations belonging to each \(\mathfrak{h}_{i}\)\((i=1,\ldots,M)\) is significantly smaller than the number of excitations in \(\mathfrak{h}\) and therefore numerically tractable in simulations. The \(A(\mathfrak{h})\) Hamiltonian and the \((P+Q_{\rm int}(\mathfrak{h}))\) space can be treated as a starting point for the secondary DUCC decompositions generated by sub-system algebras \(\mathfrak{h}_{i}\)\((i=1,\ldots,M)\) defined above, i.e., \[A^{\rm eff}(\mathfrak{h}_{i})e^{\sigma_{\rm int}(\mathfrak{h}_{i})}|\Phi \rangle=Ze^{\sigma_{\rm int}(\mathfrak{h}_{i})}|\Phi\rangle\;\;(i=1,\ldots,M) \tag{25}\] or in the VQE-type variational representation as \[\min_{\mathbf{\theta}(\mathfrak{h}_{i})}\langle\Psi(\mathbf{\theta}(\mathfrak{h}_{i})) |A^{\rm eff}(\mathfrak{h}_{i})|\Psi(\mathbf{\theta}(\mathfrak{h}_{i}))\rangle\;\;(i= 1,\ldots,M)\;, \tag{26}\] where \(|\Psi(\mathbf{\theta}(\mathfrak{h}_{i}))\rangle\) approximates \(e^{\sigma_{\rm int}(\mathfrak{h}_{i})}|\Phi\rangle\). Each \(A^{\rm eff}(\mathfrak{h}_{i})\) is defined as \[A^{\rm eff}(\mathfrak{h}_{i})=(P+Q_{\rm int}(\mathfrak{h}_{i}))\bar{A}_{\rm ext }(\mathfrak{h}_{i})(P+Q_{\rm int}(\mathfrak{h}_{i})) \tag{27}\] and \[\bar{A}_{\rm ext}(\mathfrak{h}_{i})=e^{-\sigma_{\rm ext}(\mathfrak{h}_{i})}A e^{\sigma_{\rm ext}(\mathfrak{h}_{i})}, \tag{28}\] where we defined external \(\sigma_{\rm ext}(\mathfrak{h}_{i})\) operator with respect to \(\mathfrak{h}\) or \((P+Q_{\rm int}(\mathfrak{h}))\) space (i.e. cluster amplitudes defining \(\sigma_{\rm ext}(\mathfrak{h}_{i})\) must carry at last one index belonging to active spin orbitals defining \(\mathfrak{h}\) and not belonging to the set of active spin orbitals defining \(\mathfrak{h}_{i}\)). In other words, sub-algebras \(\mathfrak{h}_{i}\) generate active sub-spaces in the larger active space \(\mathfrak{h}\), i.e., \((P+Q_{\rm int}(\mathfrak{h}_{i}))\subset(P+Q_{\rm int}(\mathfrak{h}))\). Due to the non-commutativity of components defining \(\sigma\)-operators, connecting DUCC computational blocks given by Eq. (25) or Eq. (26) directly into a flow is a rather challenging task. To address these issues (see Ref. [74]) and define practical DUCC flow, we will discuss the algorithm that combines secondary downfolding steps with Trotterization of the unitary CC operators. 
Let us assume that the \(\sigma_{\rm int}(\mathfrak{h})\) operator can be approximated by \(\sigma_{\rm int}(\mathfrak{h}_{i})(i=1,\ldots,M)\), i.e., \[\sigma_{\rm int}(\mathfrak{h})\simeq\sum_{i=1}^{M}\sigma_{\rm int}(\mathfrak{h} _{i})+X(\mathfrak{h},\mathfrak{h}_{1},\ldots,\mathfrak{h}_{M})\;, \tag{29}\] where the \(X(\mathfrak{h},\mathfrak{h}_{1},\ldots,\mathfrak{h}_{M})\) operator (or \(X\) for short) eliminates possible overcounting of the "shared" amplitudes. It enables to re-express \(\sigma_{\text{int}}(\mathfrak{h})\) as \[\sigma_{\text{int}}(\mathfrak{h})=\sigma_{\text{int}}(\mathfrak{h}_{i})+R( \mathfrak{h}_{i})\ \ (i=1,\ldots,M)\, \tag{30}\] where \[R(\mathfrak{h}_{i})=^{(i)}\sum_{j=1}^{M}\ \sigma_{\text{int}}(\mathfrak{h}_{j})+X \tag{31}\] and \({}^{(i)}\sum_{j=1}^{M}\) designates the sum where the \(i\)-th element is neglected. Consequently, we get \[e^{\sigma_{\text{int}}(\mathfrak{h})}|\Phi\rangle=e^{\sigma_{\text{int}}( \mathfrak{h}_{i})+R(\mathfrak{h}_{i})}|\Phi\rangle\ \ (i=1,\ldots,M). \tag{32}\] Using the Trotter formula, we can approximate the right-hand side of Eq. (32) for a given \(j\) as \[e^{\sigma_{\text{int}}(\mathfrak{h})}|\Phi\rangle\simeq(e^{R(\mathfrak{h}_{i })/N}e^{\sigma_{\text{int}}(\mathfrak{h}_{i})/N})^{N}|\Phi\rangle. \tag{33}\] Introducing auxiliary operator \(G_{i}^{(N)}\) \[G_{i}^{(N)}=(e^{R(\mathfrak{h}_{i})/N}e^{\sigma_{\text{int}}(\mathfrak{h}_{i })/N})^{N-1}e^{R(\mathfrak{h}_{i})/N}\ \ (i=1,\ldots,M)\, \tag{34}\] the "internal" wave function (32) can be expressed as \[e^{\sigma_{\text{int}}(\mathfrak{h})}|\Phi\rangle\simeq G_{i}^{(N)}e^{\sigma_ {\text{int}}(\mathfrak{h}_{i})/N}|\Phi\rangle\ \ (i=1,\ldots,M)\, \tag{35}\] where \(G_{i}^{(N)}\) is a complicated function of all \(\sigma_{\text{int}}(\mathfrak{h}_{j})\ (j=1,\ldots,M)\) and the above expression does not decouple \(\sigma_{\text{int}}(\mathfrak{h}_{i})\) from the \(G_{i}^{(N)}\) term. However, using this expression, one can define the practical way of determining computational blocks for flow equations. To this end, let us introduce the expansion in Eq. (35) to Eq. (12) (with \(H^{\text{eff}}(\mathfrak{h})\) replaced by the \(A\) operator), pre-multiply both sides by \([G_{i}^{(N)}]^{-1}\), and project onto \((P+Q_{\text{int}}(\mathfrak{h}_{i}))\) sub-space, which leads to non-linear eigenvalue problems \[(P+Q_{\text{int}}(\mathfrak{h}_{i}))[G_{i}^{(N)}]^{-1}AG_{i}^{(N)}e^{\sigma_{ \text{int}}(\mathfrak{h}_{i})/N}|\Phi\rangle\simeq Ee^{\sigma_{\text{int}}( \mathfrak{h}_{i}))/N}|\Phi\rangle\ (i=1,\ldots,M). \tag{36}\] These equations define computational blocks for the DUCC flow. To make practical use of Eqs. (36) let us linearize them by defining the downfolded Hamiltonian \(\Gamma_{i}^{(N)}\), \(\Gamma_{i}^{(N)}=(P+Q_{\text{int}}(\mathfrak{h}_{i}))[G_{i}^{(N)}]^{-1}AG_{i}^ {(N)}(P+Q_{\text{int}}(\mathfrak{h}_{i}))\) as a function of all \(\sigma_{\text{int}}(\mathfrak{h}_{j})\ (j=1,\ldots,M)\) from the previous flow cycle(s) (\(pc\)). We will symbolically designate this fact by using special notation for \(\Gamma_{i}^{(N)}\) effective Hamiltonian, i.e., \(\Gamma_{i}^{(N)}(pc)\) Hamiltonian. Now, we replace eigenvalue problems in Eq. (36) by an optimization procedures described by Eq. (26) which also offer an easy way to deal with "shared" amplitudes. Namely, if, in analogy to SR-CC flow, we establish an ordering of \(\mathfrak{h}_{i}\) sub-algebras, with \(\mathfrak{h}_{1}\) corresponding to the importance of active spaces with respect to the wave function of interest. 
Then in the \(\mathfrak{h}_{i}\) problem we partition set of parameters \(\mathbf{\theta}_{N}(\mathfrak{h}_{i})\) into subset \(\mathbf{\theta}_{N}^{\text{CP}}(\mathfrak{h}_{i})\) that refers to common pool of amplitudes determined in preceding steps (say, for \(\mathfrak{h}_{j}\ (j=1,\ldots,i-1)\)) and subset \(\mathbf{\theta}_{N}^{\text{X}}(\mathfrak{h}_{i})\) that is uniquely determined in the \(\mathfrak{h}_{i}\) minimization step, i.e, \[\min_{\mathbf{\theta}_{N}^{\text{X}}(\mathfrak{h}_{i})}\langle\Psi(\mathbf{\theta}_{N} ^{\text{X}}(\mathfrak{h}_{i}),\mathbf{\theta}_{N}^{\text{CP}}(\mathfrak{h}_{i}))| \Gamma_{i}^{(N)}(pc)|\Psi(\mathbf{\theta}_{N}^{\text{X}}(\mathfrak{h}_{i}),\mathbf{ \theta}_{N}^{\text{CP}}(\mathfrak{h}_{i}))\rangle\ \ (i=1,\ldots,M)\, \tag{37}\] where \(|\Psi(\mathbf{\theta}_{N}^{\text{X}}(\mathfrak{h}_{i}),\mathbf{\theta}_{N}^{\text{CP} }(\mathfrak{h}_{i}))\rangle\) approximates \(e^{\sigma_{\text{int}}(\mathfrak{h}_{i})/N}|\Phi\rangle\). In this way, each computational block coupled into a flow corresponds to a minimization procedure that optimizes parameters \(\mathbf{\theta}_{N}^{\text{X}}(\mathfrak{h}_{i})\) using quantum algorithms such as the VQE approach. At the end of the iterative cycle, once all amplitudes are converged, in contrast to the SR-CC flows, the energy is calculated using \(\mathfrak{h}_{1}\) problem as an expectation value of the \(\Gamma_{1}^{(N)}\) operator. The discussed formalism introduces a broad class of control parameters defining each computational step's dimensionality. These are the numbers of occupied/unoccupied active orbitals defining \(\mathfrak{h}_{i}\) sub-algebras \(x_{R}/y_{S}\), respectively. An essential feature of the DUCC flow equation is associated with the fact that each computational block (37) can be encoded using a much smaller number of qubits compared to the qubits requirement associated with the original problem. This observation significantly simplifies the qubit encoding of the effective Hamiltonians included in quantum DUCC flows, especially in formulations based on the utilization of localized molecular basis set (for quantum algorithms exploiting locality of interactions, see Refs. [75; 76]). ## IV Time-dependent CC Extensions The SES-CC-based downfolding techniques could also be extended to the time-dependent domain. [65; 70] As in the stationary case, we will assume a general partitioning of the time-dependent cluster operator \(T(t)\) into its internal (\(T_{\text{int}}(\mathfrak{h},t)\)) and external (\(T_{\text{ext}}(\mathfrak{h},t)\)) parts (we also assume that the employed molecular orbitals are time independent), i.e, \[|\Psi(t)\rangle=e^{T_{\rm ext}(\mathfrak{b},t)}e^{T_{\rm int}(\mathfrak{b},t)}| \Phi\rangle\,\forall\mathfrak{b}\in SES. \tag{38}\] For generality, we also include phase factor \(T_{0}(\mathfrak{b},t)\) in the definition of the \(T_{\rm int}(\mathfrak{b},t)\) operator. 
After substituting (38) into the time-dependent Schrödinger equation and utilizing properties of SES algebras, we demonstrated that the ket-dynamics of the sub-system wave function \(e^{T_{\rm int}(\mathfrak{b},t)}|\Phi\rangle\) corresponding to an arbitrary SES \(\mathfrak{b}\) is governed by \[i\hbar\frac{\partial}{\partial t}e^{T_{\rm int}(\mathfrak{b},t)}|\Phi\rangle=H^{\rm eff}(\mathfrak{b},t)e^{T_{\rm int}(\mathfrak{b},t)}|\Phi\rangle\;, \tag{39}\] where \[H^{\rm eff}(\mathfrak{b},t)=(P+Q_{\rm int}(\mathfrak{b}))\bar{H}_{\rm ext}(\mathfrak{b},t)(P+Q_{\rm int}(\mathfrak{b})) \tag{40}\] and \[\bar{H}_{\rm ext}(\mathfrak{b},t)=e^{-T_{\rm ext}(\mathfrak{b},t)}He^{T_{\rm ext}(\mathfrak{b},t)}. \tag{41}\] If the \(T_{\rm ext}(\mathfrak{b},t)\) operator is known or can be efficiently approximated, then the dynamics of the entire system can be described by the effective Hamiltonian \(H^{\rm eff}(\mathfrak{b},t)\). In analogy to the stationary case, various sub-system computational blocks can be integrated into a flow enabling the sampling of large sub-spaces of the Hilbert space through a number of coupled reduced-dimensionality problems (time-dependent quantum flows), i.e., \[i\hbar\frac{\partial}{\partial t}e^{T_{\rm int}(\mathfrak{b}_{i},t)}|\Phi\rangle=H^{\rm eff}(\mathfrak{b}_{i},t)e^{T_{\rm int}(\mathfrak{b}_{i},t)}|\Phi\rangle\;,\ (i=1,\ldots,M_{\rm SES}). \tag{42}\] Given the analogies between stationary CC flow equations based on localized orbitals and the local CC formulations developed over the last few decades in quantum chemistry, the time-dependent flow equations given by Eq. (42) can be utilized to design reduced-scaling variants of time-dependent CC formulations. The time-dependent variant of the DUCC Ansatz is represented by the normalized time-dependent wave function \(|\Psi_{\rm DUCC}(\mathfrak{b},t)\rangle\), \[|\Psi_{\rm DUCC}(\mathfrak{b},t)\rangle=e^{\sigma_{\rm ext}(\mathfrak{b},t)}e^{\sigma_{\rm int}(\mathfrak{b},t)}|\Phi\rangle\;,\;\forall\mathfrak{b}\in SES\;, \tag{43}\] where \(\sigma_{\rm int}(\mathfrak{b},t)\) and \(\sigma_{\rm ext}(\mathfrak{b},t)\) are general-type time-dependent anti-Hermitian operators \[\sigma_{\rm int}(\mathfrak{b},t)^{\dagger}=-\sigma_{\rm int}(\mathfrak{b},t)\;, \tag{44}\] \[\sigma_{\rm ext}(\mathfrak{b},t)^{\dagger}=-\sigma_{\rm ext}(\mathfrak{b},t). \tag{45}\] Again, as in the SES-CC case, the dynamics of the entire system are given by the active-space time-dependent effective Hamiltonian \(H^{\rm eff}(\sigma_{\rm ext}(\mathfrak{b},t),\frac{\partial\sigma_{\rm ext}(\mathfrak{b},t)}{\partial t})\) \[H^{\rm eff}(\sigma_{\rm ext}(\mathfrak{b},t),\frac{\partial\sigma_{\rm ext}(\mathfrak{b},t)}{\partial t})=(P+Q_{\rm int})\{\bar{H}_{\rm ext}(\mathfrak{b},t)-i\hbar A(\sigma_{\rm ext}(\mathfrak{b},t),\frac{\partial\sigma_{\rm ext}(\mathfrak{b},t)}{\partial t})\}(P+Q_{\rm int}). 
\tag{46}\] where anti-Hermitian operator \(A(\sigma_{\rm ext}(\mathfrak{b},t),\frac{\partial\sigma_{\rm ext}(\mathfrak{b },t)}{\partial t})\) is expressed as \[A(\sigma_{\rm ext}(\mathfrak{b},t),\frac{\partial\sigma_{\rm ext}(\mathfrak{b },t)}{\partial t})=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(k+1)!}I_{k}(\sigma_{\rm ext }(\mathfrak{b},t),\frac{\partial\sigma_{\rm ext}(\mathfrak{b},t)}{\partial t }) \tag{47}\] If the fast-varying in time part of the wave function (or \(\sigma_{\rm ext}(\mathfrak{b},t)\)-dependent part of the wave function) is known or can be efficiently approximated, then the slow-varying dynamic (captured by the proper choice of the active space and \(\sigma_{\rm int}(\mathfrak{b},t)\) operator) of the entire system can be described as a sub-system dynamics generated by the Hermitian \(H^{\rm eff}(\sigma_{\rm ext}(\mathfrak{b},t),\frac{\partial\sigma_{\rm ext}( \mathfrak{b},t)}{\partial t})\) operator. This decoupling of various time regimes (slow- vs. fast-varying components) is analogous to decoupling high- and low-energy Fermionic degrees of freedom in stationary formulations of the SES-CC and DUCC formalisms. ## V Green's function applications The CC Green's function formulations have recently evolved into important formulations to describe spectral functions in various energy regimes [77; 78; 79; 80; 81; 82; 83] and as high-accuracy solvers for quantum embedding formulations. Following original formulations based on the CC bi-variational approach, the corresponding frequency-dependent Green's function for an \(N\)-particle system can be expressed as \[G_{pq}(\omega)=\] \[\langle\Phi|(1+\Lambda)e^{-T}a_{q}^{\dagger}(\omega+(H-E)-\mathrm{i} \eta)^{-1}a_{p}e^{T}|\Phi\rangle+\] \[\langle\Phi|(1+\Lambda)e^{-T}a_{p}(\omega-(H-E)+\mathrm{i}\eta)^{- 1}a_{q}^{\dagger}e^{T}|\Phi\rangle\;, \tag{49}\] where \(\omega\) denotes the frequency parameter, and the imaginary part \(\eta\) is often called a broadening factor. The cluster operator \(T\) and de-excitation operator \(\Lambda\) define correlated ket (\(|\Psi\rangle\)) and bra (\(\langle\Psi|\)) ground-state wave functions for \(N\)-electron system \[|\Psi\rangle=e^{T}|\Phi\rangle\;, \tag{50}\] \[\langle\Psi|=\langle\Phi|(1+\Lambda)e^{-T}\;. \tag{51}\] The ground-state energy \(E_{0}\), and the amplitudes defining \(T\) and \(\Lambda\) operators are obtained from the following sequence of CC equations. To combine GFCC and DUCC formalisms we replace \(T\), \(\Lambda\), and \(H\) operators in Eq. (49) by cluster (\(\tilde{T}_{\mathrm{int}}\)), de-excitation (\(\tilde{\Lambda}_{\mathrm{int}}\)), and Hermitian \(H^{\mathrm{eff}}(\mathrm{h})\) [Eq. (13), which is further denoted as \(\Gamma\)] operators acting in the some active space generated by \(\mathrm{h}\) (for the notational simplicity we also skip the \(\mathrm{h}\) symbol). 6 We will also consider the case when the set of active orbitals consists of all occupied orbitals and a small subset of active virtual orbitals (containing \(n_{v}^{\mathrm{act}}\) active virtual orbitals), where, in general, \(n_{v}^{\mathrm{act}}\ll n_{v}\), where \(n_{v}\) designates the total number of virtual orbitals. The standard CC equations for \(T\), CC energy \(E\), and \(\Lambda\) are are replaced by their "active" counterparts Footnote 6: The \(\Gamma\)-matrix \(\Gamma\) is defined as \(\Gamma=\Gamma_{0}\), where \(\Gamma_{0}\) is the Pauli operator. 
\[Q_{\mathrm{int}}e^{-\overline{\tilde{T}}_{\mathrm{int}}}\Gamma e ^{\overline{\tilde{T}}_{\mathrm{int}}}|\Phi\rangle=0\;, \tag{52}\] \[\langle\Phi|e^{-\overline{\tilde{T}}_{\mathrm{int}}}\Gamma e^{ \overline{\tilde{T}}_{\mathrm{int}}}|\Phi\rangle=E_{0}^{\mathrm{int}}\;,\] (53) \[\langle\Phi|(1+\widetilde{\Lambda}_{\mathrm{int}})e^{-\overline {\tilde{T}}_{\mathrm{int}}}\Gamma e^{\overline{\tilde{T}}_{\mathrm{int}}}Q_{ \mathrm{int}}=E_{0}^{\mathrm{int}}\langle\Phi|(1+\widetilde{\Lambda}_{ \mathrm{int}})Q_{\mathrm{int}}\;. \tag{54}\] The coupled cluster Green's function employing the DUCC Hamiltonian \(\Gamma\) can be expressed for active orbitals as follows \[G_{PQ}^{\mathrm{DUCC}}(\omega)=\langle\Phi|(1+\widetilde{\Lambda }_{\mathrm{int}})e^{-\overline{\tilde{T}}_{\mathrm{int}}}a_{Q}^{\dagger}( \omega+(\Gamma-E_{0}^{\mathrm{int}})-\mathrm{i}\eta)^{-1}a_{p}e^{\overline{ \tilde{T}}_{\mathrm{int}}}|\Phi\rangle+\] \[\langle\Phi|(1+\widetilde{\Lambda}_{\mathrm{int}})e^{-\overline {\tilde{T}}_{\mathrm{int}}}a_{p}(\omega-(\Gamma-E_{0}^{\mathrm{int}})+\mathrm{ i}\eta)^{-1}a_{q}^{\dagger}e^{\overline{\tilde{T}}_{\mathrm{int}}}|\Phi\rangle\;, \tag{55}\] where indices \(P,Q,\ldots\) designate active spin orbitals. Again, applying the resolution of identity \(e^{-\overline{\tilde{T}}_{\mathrm{int}}}e^{\overline{\tilde{T}}_{\mathrm{ int}}}\) in the above equation, one gets the following expressions for DUCC Green's function matrix elements \[G_{PQ}^{\mathrm{DUCC}}(\omega)= \langle\Phi|(1+\widetilde{\Lambda}_{\mathrm{int}})a_{Q}^{\dagger \dagger}(\omega+\overline{\Gamma}_{N}-\mathrm{i}\eta)^{-1}\overline{a_{p}}^{ \mathrm{int}}|\Phi\rangle+\] \[\langle\Phi|(1+\widetilde{\Lambda}_{\mathrm{int}})\overline{a_ {p}}^{\mathrm{int}}(\omega-\overline{\Gamma}_{N}+\mathrm{i}\eta)^{-1}a_{Q}^{ \dagger\dagger}|\Phi\rangle\;, \tag{56}\] where we used the following definitions: \[\overline{\Gamma}=e^{-\overline{\tilde{T}}_{\mathrm{int}}}\Gamma e ^{\overline{\tilde{T}}_{\mathrm{int}}}\;, \tag{57}\] \[\overline{\Gamma}_{N}=\overline{\Gamma}-E_{0}^{\mathrm{int}}\;,\] (58) \[\overline{a_{p}}^{\mathrm{int}}=e^{-\overline{\tilde{T}}_{ \mathrm{int}}}a_{p}e^{\overline{\tilde{T}}_{\mathrm{int}}},\] (59) \[\overline{a_{Q}^{\dagger\dagger}}^{\mathrm{int}}=e^{-\overline{ \tilde{T}}_{\mathrm{int}}}a_{Q}^{\dagger}e^{\overline{\tilde{T}}_{\mathrm{int}}}. \tag{60}\] In the active-space driven DUCC-GFCC approach, the \(X_{p}(\omega)\) and \(Y_{q}(\omega)\) operators are replaced by \(X_{P}^{\mathrm{int}}(\omega)\) and \(Y_{Q}^{\mathrm{int}}(\omega)\), respectively, which are given by the following expressions: \[X_{P}^{\mathrm{int}}(\omega)=\sum_{I}x^{I}(P,\omega)^{\mathrm{ int}}a_{I}+\sum_{I<J,A}x_{A}^{IJ}(P,\omega)^{\mathrm{int}}a_{A}^{\dagger}a_{J}a_{I}+\ldots \tag{61}\] \[Y_{Q}^{\mathrm{int}}(\omega)=\sum_{A}y_{A}(Q,\omega)^{\mathrm{int} }a_{A}^{\dagger}+\sum_{I,A<B}y_{AB}^{J}(Q,\omega)^{\mathrm{int}}a_{A}^{\dagger}a_ {B}^{\dagger}a_{I}+ \tag{62}\] where indices \(I,J,\ldots\) and \(A,B,\ldots\) refer to active occupied and unoccupied spin orbitals indices, respectively (again, in the present discussion, we assume that all occupied spin orbitals are treated as active). 
These operators satisfy \[(\omega+\overline{\Gamma}_{N}-\mathrm{i}\eta)X_{P}^{\mathrm{int}}( \omega)|\Phi\rangle=\overline{a_{P}}^{\mathrm{int}}|\Phi\rangle\;, \tag{63}\] \[(\omega-\overline{\Gamma}_{N}+\mathrm{i}\eta)Y_{Q}^{\mathrm{int}}( \omega)|\Phi\rangle=\overline{a_{Q}^{\mathrm{int}}}^{\mathrm{int}}|\Phi\rangle\;, \tag{64}\] and the \(G_{PQ}^{\mathrm{DUCC}}(\omega)\) is given by the expression \[G_{PQ}^{\mathrm{DUCC}}(\omega)= \langle\Phi|(1+\Lambda_{\mathrm{int}})\overline{a_{Q}^{\mathrm{ int}}}X_{P}^{\mathrm{int}}(\omega)|\Phi\rangle+\] \[\langle\Phi|(1+\Lambda_{\mathrm{int}})\overline{a_{P}}^{ \mathrm{int}}Y_{Q}^{\mathrm{int}}(\omega)|\Phi\rangle\;. \tag{65}\] We demonstrated that the combined GFCC and DUCC frameworks reproduce the main features of the standard GFCCSD spectral function. In a series of test calculations, we demonstrated that increasing active space size leads to monotonic improvements in the location of peaks obtained with the DUCC-GFCCSD approach with respect to the full GFCCSD results. We attribute this behavior to the presence of dynamical (out-of-active-space) correlation effects encapsulated in each of DUCC effective Hamiltonians. In contrast to the DUCC-GFCCSD formalism, the utilization of active space bare Hamiltonians leads to less consistent results for the peak positions. The utilization of the DUCC effective Hamiltonians can also significantly reduce the cost of the GFCC calculations for the energy regime embraced by the corresponding active space. ## VI Review of applications This section briefly reviews several exemplary application areas and numerical studies involving various CC downfolding formalisms. ### Numerical Validation of the SES-CC Theorem The SES-CC Theorem has recently been validated on the example of several benchmark systems (H4, H6, H8 models) used to test CC methodologies in situations corresponding to the presence of weak and strong correlation effects (see Ref. [73]). We numerically verified the SES-CC Theorem using various active spaces corresponding to physically meaningful active spaces, capturing the most important correlation for the ground-state wave function description in the valence region as well as for the active spaces that are remotely related to the ground-state correlation effects. To this end, we used two approaches, CCSD and CCSDTQ, to calculate the eigenvalues of effective/downfolded Hamiltonians. In all cases considered in Ref. [73], we were able to reproduce the CCSD or CCSDTQ energies, obtained using the standard expression for the energy, as lowest-energy eigenvalues of the effective Hamiltonian. In the extreme case, we used a spin-orbital-based definition of active to correlate a single electron, which also resulted in reproducing standard CC energies. ### Approximations Based on Quantum Flows In Ref. [72], we introduced QFs based on the ordered flow of the \(\mathfrak{g}^{(N)}(2_{R})\) sub-algebras: \[\mathfrak{g}^{(N)}(2_{R_{1}})\stackrel{{\mathrm{passingT}}}{{ \longrightarrow}}\mathfrak{g}^{(N)}(2_{R_{2}})\stackrel{{ \mathrm{passingT}}}{{\longrightarrow}}\ldots\stackrel{{ \mathrm{passingT}}}{{\longrightarrow}}\mathfrak{g}^{(N)}(2_{R_{\mathrm{ fault}}}) \tag{66}\] where SESs are ordered according to some importance criterium, for example, corresponding to a descending order with respect to the sum of orbital energies (\(\varepsilon_{k_{i}}+\varepsilon_{l_{i}}\)) corresponding to orbitals included in \(R_{i}\) sets. 
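A minimal sketch of this ordering step is given below: occupied-orbital pairs \(R=\{k,l\}\), each labeling a \(\mathfrak{g}^{(N)}(2_{R})\) sub-algebra, are sorted by the descending sum \(\varepsilon_{k}+\varepsilon_{l}\). The orbital energies are invented toy values used only to illustrate the bookkeeping, not data from any of the systems discussed in this review.

```python
from itertools import combinations

# Toy occupied-orbital energies (in hartree); illustrative values only.
eps_occ = {1: -11.30, 2: -1.45, 3: -0.85, 4: -0.62, 5: -0.58}

# Every pair R = {k, l} of occupied orbitals labels one g^(N)(2_R) sub-algebra.
pairs = combinations(sorted(eps_occ), 2)

# Flow ordering of Eq. (66): descending sum eps_k + eps_l, so pairs built from
# the highest-lying occupied orbitals are processed first.
flow_order = sorted(pairs, key=lambda R: eps_occ[R[0]] + eps_occ[R[1]], reverse=True)

for step, (k, l) in enumerate(flow_order, start=1):
    print(f"step {step}: R = ({k},{l}),  eps_k + eps_l = {eps_occ[k] + eps_occ[l]:+.2f}")
```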
The Equivalence Theorem states that such a flow (in the discussed case, all \(\mathfrak{g}^{(N)}(2_{R})\)-generated active-space problems are integrated) is equivalent to the CC formalism defined by the following cluster operator \(T\): \[T\simeq T_{1}+T_{2}+\sum_{R}T_{\mathrm{int},3}(\mathfrak{g}^{(N)}(2_{R}))+\sum_{R}T_{\mathrm{int},4}(\mathfrak{g}^{(N)}(2_{R}))\;. \tag{67}\] This approach is further referred to as the self-consistent sub-algebra flow CC method (SCSAF-CC). In contrast to the class of the so-called active-space CC approaches (see Ref. [84] and references therein), this formulation includes classes of triple and quadruple excitations (in addition to all possible single and double excitations) whose amplitudes correspond to the active-space problems included in the flow. The performance of the SCSAF-CC methods was evaluated on examples involving single- and double-bond-breaking processes. For example, for the F\({}_{2}\) benchmark system, the non-parallel error (NPE) of the SCSAF-CC formalism given by Eq. (67) in describing the ground-state potential energy surface is comparable to the NPEs yielded by the 4-reference reduced multi-reference CCSD(T) approach (see Ref. [72] for more details). As demonstrated in Ref. [72], the SCSAF-CC formalism provided an efficient way to include perturbative corrections due to triple and quadruple excitations not included in the iterative SCSAF-CC formulation given by Eq. (67). It was shown that perturbative SCSAF-CC methods could bypass typical problems with the perturbative inclusion of higher-rank clusters in situations with strong correlation effects. Similarly to the ground-state case, the SCSAF-CC methods can be extended to their equation-of-motion CC (EOMCC) [26; 27; 85] formulations using a similar manifold of excitations as in the ground-state case. For example, the state-specific excitation operator for the \(K\)-th state, \(X(K)\), corresponding to the \(\mathfrak{g}^{(N)}(2_{R})\) flow as in Eq. (67) is given by the expansion: \[X(K)\simeq X(K)_{0}+X(K)_{1}+X(K)_{2}+\sum_{R}X(K)_{\mathrm{int},3}(\mathfrak{g}^{(N)}(2_{R}))+\sum_{R}X(K)_{\mathrm{int},4}(\mathfrak{g}^{(N)}(2_{R}))\;. \tag{68}\] These EOMCC-type extensions have been shown to properly capture excited-state correlation effects for singly excited states and for a more challenging class of excited states dominated by double excitations. ### Quantum Computing The Hermitian CC downfolding plays a vital role in realizing quantum computing applications in computational chemistry with the limited resources offered by Noisy Intermediate-Scale Quantum (NISQ) devices. In particular, a significant effort has been expended to provide frameworks that significantly reduce the size of the virtual space. Using these techniques, we could adequately reproduce total ground-state energies for systems described by 50-70 molecular basis functions employing small-size active spaces (5-15 molecular orbitals) and various types of solvers, including VQE and QPE quantum algorithms. One should stress that quantum simulations for systems described by 50-70 orbitals are currently beyond reach, which is a net effect of the required logical qubits, quantum errors, quantum circuit depth, and the large numbers of fermionic degrees of freedom (amplitudes) that must be included to achieve the necessary level of accuracy. The last factor translates into a massive number of quantum measurements for the VQE class of methods. 
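To make the preceding resource argument more tangible, the following back-of-the-envelope sketch compares qubit counts (assuming one qubit per spin orbital, as in a Jordan-Wigner-type encoding) and a naive count of doubles amplitudes for a full orbital space versus a small active space; the orbital numbers are example values chosen within the 50-70-orbital and 5-15-active-orbital ranges quoted above, not results from the cited studies.

```python
def resources(n_occ, n_virt):
    """Crude resource counts for n_occ occupied and n_virt virtual spatial
    orbitals: qubits at one qubit per spin orbital, and a naive count of
    doubles amplitudes t_{ij}^{ab} without symmetry reductions."""
    qubits = 2 * (n_occ + n_virt)
    doubles = (n_occ * n_virt) ** 2
    return qubits, doubles

# Example: 10 occupied + 50 virtual orbitals (60 basis functions) downfolded
# into an active space of 10 occupied + 5 active virtual orbitals.
for label, n_virt in (("full virtual space", 50), ("downfolded active space", 5)):
    q, d = resources(n_occ=10, n_virt=n_virt)
    print(f"{label:24s}: {q:4d} qubits, ~{d:7d} doubles amplitudes")
```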
Special consideration is required when constructing approximate downfolded Hamiltonians to enable downfolding methods in quantum computing. The key factors that need to be taken care of when approximating the infinite expansions for the \(H^{\mathrm{eff}}(\mathfrak{h})\) operator, Eq. (14), are as follows: * **Rank of the commutator expansion.** All approximations introduced in past years are based on finite-rank approximations. In the most accurate approximations, first- and second-rank commutators as well as selected classes of third-rank commutators are included. * **Source of the \(\sigma_{\mathrm{ext}}(\mathfrak{h})\) amplitudes.** As in Eq. (16), the \(\sigma_{\mathrm{ext}}(\mathfrak{h})\) amplitudes are approximated in a UCC manner. In the practical realizations considered so far, the \(T_{\mathrm{ext}}(\mathfrak{h})\) amplitudes are extracted from the converged CCSD amplitudes. * **Rank of many-body interactions in \(H^{\mathrm{eff}}(\mathfrak{h})\).** Currently, downfolded Hamiltonians are constructed to include one- and two-body effective interactions. * **Perturbative consistency.** Many-body perturbation theory (MBPT) allows one to better balance the correlation effects in the expansion in Eq. (14) (see the extensive discussion of MBPT expansions for UCC theories in Refs. [86; 87]). For perturbative "consistency", in some cases we include Fock-operator (\(F_{N}\)) dependent terms. Various approximations for downfolded Hamiltonians discussed in Ref. [71] are collected in Table 1. Obtaining Hermitian downfolded Hamiltonians, especially those including higher-rank commutators, is usually associated with the inclusion of hundreds or thousands of Hugenholtz-type diagrams, which requires developing and utilizing specialized symbolic tools to derive and efficiently implement the corresponding algebraic expressions. The Hermitian versions of the downfolded Hamiltonians have been integrated with various VQE and QPE solvers. The efficiency of these workflows has been illustrated on the example of bond-stretching processes for typical benchmark systems such as H\({}_{2}\), LiH, Li\({}_{2}\), N\({}_{2}\), H\({}_{2}\)O, and C\({}_{2}\)H\({}_{4}\). [66; 67; 71] In all cases, downfolded Hamiltonians significantly improve upon the diagonalization of the bare Hamiltonians in the same active spaces and provide results much closer to the exact (or nearly exact) results obtained when all orbitals are correlated. ## VII Conclusions The CC downfolding techniques are a relatively new tool to analyze and derive new properties of CC methods. One of the most appealing ones is the possibility of calculating CC energies through the diagonalization of effective/downfolded Hamiltonians in broad classes of active spaces corresponding to standard CC approximations. These observations have been extended to the time domain and to quantum flows, which provide an alternative way to rigorously encapsulate the inherent sparsity of quantum systems. Aside from revealing interesting new properties of the CC methodology, the downfolding techniques have been used as a design principle to select classes of amplitudes in the vein of the Equivalence Theorem. These methods proved efficient in treating strong correlation effects in ground and excited states. The Hermitian form of the downfolding formalism is a promising extension for quantum computing applications. 
Due to the possibility of reducing the dimensionality of the quantum problem, it enables simulations for effective representations of Hamiltonians in basis sets that would be beyond the reach of current simulators and hardware if direct approaches are used. Since the current quantum algorithm, such as the VQE methodologies, can effectively handle only a relatively small number of wave function parameters, usually corresponding to the so-called static correlation effects, the active-space-driven downfolding is a potential tool to extend the area of applications to larger systems and larger basis sets. As a next frontier, our group is intensively developing quantum algorithms based on the unitary CC flow equations. These methods can traverse larger sub-spaces of Hilbert spaces than currently possible using modest quantum computing resources associated with the size of the maximum active space involved in the flow. This is a consequence of the fact that in the global representation of the quantum problem, requiring a full qubit register to assure the antisymmetry of the wave function, is replaced/approximated by flows (or computable reduced dimensionality eigenvalue problems) where global antisymmetry problems no longer exist. New computational paradigms associated with the emergence and broad utilization of machine learning techniques offer an exciting avenue for utilizing Hermitian CC downfolding to extract the analytical form of effective inter-electron interactions. These "phenomenological" interactions are ideal candidates to be integrated with the low-rank formulations such as Hartree-Fock, Density Func tional Theory, and various types of multi-configurational self-consistent field methods as described in Ref. [70]. ## VIII Acknowledgement The main part of this work was supported by the Quantum Science Center (QSC), a National Quantum Information Science Research Center of the U.S. Department of Energy (DOE). NPB and BP acknowledge the support from "Embedding Quantum Computing into Many-body Frameworks for Strongly Correlated Molecular and Materials Systems" project, which is funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, the Division of Chemical Sciences, Geosciences, and Biosciences. All work was performed at Pacific Northwest National Laboratory (PNNL) operated for the U.S. Department of Energy by the Battelle Memorial Institute under Contract DE-AC06-76RLO-1830. One of the authors of this review (KK) would like to express his deep gratitude to all his colleagues, friends, and mentors he had the honor to work with and learn from during his early "CC days." Countless discussions with the quantum chemistry pioneers in Poland have inspired part of the presented material.
2309.16866
Stochastic Digital Twin for Copy Detection Patterns
Copy detection patterns (CDP) present an efficient technique for product protection against counterfeiting. However, the complexity of studying CDP production variability often results in time-consuming and costly procedures, limiting CDP scalability. Recent advancements in computer modelling, notably the concept of a "digital twin" for printing-imaging channels, allow for enhanced scalability and the optimization of authentication systems. Yet, the development of an accurate digital twin is far from trivial. This paper extends previous research which modelled a printing-imaging channel using a machine learning-based digital twin for CDP. This model, built upon an information-theoretic framework known as "Turbo", demonstrated superior performance over traditional generative models such as CycleGAN and pix2pix. However, the emerging field of Denoising Diffusion Probabilistic Models (DDPM) presents a potential advancement in generative models due to its ability to stochastically model the inherent randomness of the printing-imaging process, and its impressive performance in image-to-image translation tasks. This study aims at comparing the capabilities of the Turbo framework and DDPM on the same CDP datasets, with the goal of establishing the real-world benefits of DDPM models for digital twin applications in CDP security. Furthermore, the paper seeks to evaluate the generative potential of the studied models in the context of mobile phone data acquisition. Despite the increased complexity of DDPM methods when compared to traditional approaches, our study highlights their advantages and explores their potential for future applications.
Yury Belousov, Olga Taran, Vitaliy Kinakh, Slava Voloshynovskiy
2023-09-28T21:38:21Z
http://arxiv.org/abs/2309.16866v1
# Stochastic Digital Twin for Copy Detection Patterns ###### Abstract Copy detection patterns (CDP) present an efficient technique for product protection against counterfeiting. However, the complexity of studying CDP production variability often results in time-consuming and costly procedures, limiting CDP scalability. Recent advancements in computer modelling, notably the concept of a "digital twin" for printing-imaging channels, allow for enhanced scalability and the optimization of authentication systems. Yet, the development of an accurate digital twin is far from trivial. This paper extends previous research which modelled a printing-imaging channel using a machine learning-based digital twin for CDP. This model, built upon an information-theoretic framework known as "Turbo", demonstrated superior performance over traditional generative models such as CycleGAN and pix2pix. However, the emerging field of Denoising Diffusion Probabilistic Models (DDPM) presents a potential advancement in generative models due to its ability to stochastically model the inherent randomness of the printing-imaging process, and its impressive performance in image-to-image translation tasks. This study aims at comparing the capabilities of the Turbo framework and DDPM on the same CDP datasets, with the goal of establishing the real-world benefits of DDPM models for digital twin applications in CDP security. Furthermore, the paper seeks to evaluate the generative potential of the studied models in the context of mobile phone data acquisition. Despite the increased complexity of DDPM methods when compared to traditional approaches, our study highlights their advantages and explores their potential for future applications. Copy detection patterns, machine learning, digital twin, denoising diffusion model, TURBO, CycleGAN, pix2pix. ## I Introduction The recent upsurge in the utilization of Copy Detection Patterns (CDP), as described in [1, 2, 3, 4], has emerged as a viable method for safeguarding products against counterfeiting practices. However, the exploration of variability inherent in CDP production represents a process that is both time-intensive and financially demanding. This process necessitates the acquisition of vast volumes of data, a requirement that places a significant constraint on the scalability of the approach to incorporate new products, manufacturing technologies, and imaging devices. Consequently, the expansive adoption and continued research into CDP are impeded. To overcome these limitations and promote the ongoing advancement of CDP, an approach involving computational modeling of the entire production pipeline, incorporating the printing and imaging channels, has been proposed. This method is referred to as a _digital twin_[5]. Through this approach, a comprehensive and accurate simulation of the production process is generated, allowing for an efficient examination and optimization of CDP without the traditional restrictions associated with physical data acquisition. This approach offers considerable potential for increasing the efficiency and effectiveness of anti-counterfeiting measures based on CDP. The design of _digital twin_ for printing-imaging channels is not a trivial task but it is crucial for both the defender and attacker. 
If successful, it will enable the overall optimization of the whole authentication system and, in particular, the optimization of the estimation of digital templates from the physical samples and synthesis of CDP images from the corresponding digital templates. Moreover, it simplifies the modeling of the intra-class variabilities and the investigation of adversarial examples. The number of training pairs needed for _digital twin_ is small (in the order of hundreds), while the trained model can be applied to millions of unseen digital templates. The current work is a continuation of our previous work [5] that was dedicated to modeling a printing-imaging channel using a machine learning-based _digital twin_ for CDP. The model studied in [5] is based on an information-theoretic framework called _Turbo_. In our current work, we aim at comparing Turbo to the Denoising Diffusion Probabilistic Models (DDPM) [6], which present a popular family of modern generative models (Fig. 1). DDPM model the prior data distribution via a diffusion process. Recently the DDPM methods demonstrated remarkable performance in the image-to-image translation tasks and outperformed many state-of-the-art models based on GAN-like architectures [7]. In contrast to the GAN-based generators that are mostly deterministic in nature, DDPM allows stochastic outputs, i.e., the different Fig. 1: Schematic block-diagram of investigated DDPM generative model \(g_{\varphi}\). The DDPM generative model can generate \(K\) synthetic CDP \(\{\mathbf{x}_{k}^{T}\}_{k=1}^{K}\) from digital template \(\mathbf{z}\) and vice versa generate \(K\) synthetic templates \(\{\mathbf{z}_{k}^{T}\}_{k=1}^{K}\) for a given \(\mathbf{x}\). We show only the first case. The stochasticity of the generative process is ensured by different noise realizations \(\boldsymbol{\epsilon}\). outputs for the same input. Taking into account the natural randomness of the printing-imaging process, the stochasticity of the synthesised twins is a key factor for high-precision simulation of real CDP. Besides this valuable advantage, the DDPM methods have high complexity compared to the traditional approaches. That is why the study of real advantages of DDPM based CDP digital twins represents a great practical interest. In our previous work [5] we demonstrated the superiority of the Turbo framework over the state-of-the-art generative models. The main goal of this study is to compare Turbo with DDPM on the same CDP datasets and to establish the real advantages of DDPM models. Moreover, we aim at evaluating the generative capabilities of the models in the context of mobile phone data acquisition. ## II Related work ### _Turbo family_ The Turbo framework was derived based on the solid information theoretic foundations [5]. That framework consists of two paths, i.e., direct and reverse ones. In the general case, both paths are trained simultaneously and share common training blocks. However, in particular cases, the framework might be trained in one path only. At the inference stage, both paths or just one of them might be used depending on the targeted application. As it was investigated in [5], Turbo can be trained on paired or unpaired data, providing flexibility in its application. Turbo generalizes pix2pix (paired) [8] and CycleGAN (unpaired) [9] image-to-image translation systems. Turbo consists of several building blocks and losses that make the training procedure complex enough. Once trained, Turbo is quite efficient and very fast at the inference stage. 
The main drawback of Turbo is the deterministic nature of generated CDP, i.e., for the given input it provides only one output. In [5] we extensively investigated the impact of various factors such as the backbone architectures, the discriminator types, losses, etc., on the overall system's performance and found the optimal ones. In the current work, we use two found optimal configurations: * TURBO\({}_{\text{CNN-RESNET-CNN}}^{\text{paired (w \mathcal{D})}}\) \[\mathcal{L}_{\text{CNN-RESNET-CNN}}^{\text{paired (w \mathcal{D})}}(\phi,\theta) =\mathcal{L}_{\hat{z}}(\mathbf{z},\tilde{\mathbf{z}})+\mathcal{D}_{ \hat{z}}(\mathbf{z},\tilde{\mathbf{z}})\] \[+\lambda_{D}\mathcal{L}_{\hat{x}}(\mathbf{x},\hat{\mathbf{x}})+ \lambda_{D}\mathcal{D}_{\hat{x}}(\mathbf{x},\hat{\mathbf{x}})\] \[+\lambda_{T}\lambda_{R}\mathcal{L}_{\hat{z}}(\mathbf{z},\hat{ \mathbf{z}})+\lambda_{T}\lambda_{R}\mathcal{D}_{\hat{z}}(\mathbf{z},\hat{ \mathbf{z}}),\] * TURBO\({}_{\text{UNET}}^{\text{paired (w/o \mathcal{D})}}\) \[\mathcal{L}_{\text{UNET}}^{\text{paired (w/o \mathcal{D})}}(\phi,\theta) =\mathcal{L}_{\hat{z}}(\mathbf{z},\tilde{\mathbf{z}})+\lambda_{D} \mathcal{L}_{\hat{x}}(\mathbf{x},\hat{\mathbf{x}})\] \[+\lambda_{T}\mathcal{L}_{\hat{x}}(\mathbf{x},\tilde{\mathbf{x}}) +\lambda_{T}\lambda_{R}\mathcal{L}_{\hat{z}}(\mathbf{z},\hat{\mathbf{z}}),\] where \(\mathbf{x}\) denotes the image of CDP, \(\mathbf{z}\) denotes the digital template, \(\hat{\mathbf{x}}\) and \(\hat{\mathbf{z}}\) denote the reconstructions and \(\tilde{\mathbf{x}}\) and \(\tilde{\mathbf{z}}\) are the generated images. The terms \(\mathcal{L}_{\hat{z}}(\mathbf{z},\tilde{\mathbf{z}})\), \(\mathcal{L}_{\hat{x}}(\mathbf{x},\hat{\mathbf{x}})\), \(\mathcal{L}_{\hat{z}}(\mathbf{x},\tilde{\mathbf{x}})\) and \(\mathcal{L}_{\hat{z}}(\mathbf{z},\hat{\mathbf{z}})\) are the conditional cross-entropy terms that are implemented as \(\ell_{1}\)-norm pair-wise losses between the corresponding entities. \(\mathcal{D}_{\hat{z}}(\mathbf{z},\tilde{\mathbf{z}})\), \(\mathcal{D}_{\hat{z}}(\mathbf{x},\hat{\mathbf{x}})\), \(\mathcal{D}_{\hat{z}}(\mathbf{x},\hat{\mathbf{x}})\) and \(\mathcal{D}_{\hat{z}}(\mathbf{z},\hat{\mathbf{z}})\) impose Kullback-Leibler (KL)-divergence constraints, i.e., the distribution matching losses, a.k.a. adversarial losses, between the corresponding distributions and the parameters \(\lambda_{T}\), \(\lambda_{D}\) and \(\lambda_{R}\) trade-off the losses. The detailed development of Turbo's losses and the schematic representation of the direct and reverse paths are given in [5] and the corresponding code. ### _Ddpm_ The DDPM [6] are based on the minimization of Fisher divergence, which is also closely linked with the KL-divergence, between the data distribution and the energy-based model approximating the data distribution. The core concept of DDPM is to use a score function representing the gradient of the logarithm of the energy-based model with respect to the data sample to suppress the dependence on the normalization constant, which is infeasible to compute in practice [10]. Similar to Turbo, DDPM consists of forward and reverse paths. However, in contrast to Turbo, the DDPM forward path is not trainable and is based on the addition of noise to network input. The addition of noise with variable variance Fig. 2: The stochasticity in the DDPM Model. The first column displays the original digital template \(\mathbf{z}\) at the top and its counterpart physical sample \(\mathbf{x}\) at the bottom. 
The subsequent columns show the stochastic estimations of the CDP images \(\{\tilde{\mathbf{x}}^{k}\}_{k=1}^{5}\) and the digital templates \(\{\tilde{\mathbf{z}}^{k}\}_{k=1}^{5}\), generated by the DDPM based on the Palette framework. To enhance visual comprehension, only an enlarged \(11\times 11\) central crop is displayed. aims at "interpolation" of data distribution represented by sparse training data samples. The variable variance of noise should address the different regions of data distribution in a function of the estimated probability density function [10]. From the point of view of the CDP nature, the trained Turbo model can produce the printing simulations and the digital template estimations simultaneously, while DDPM requires training of two separate models. With respect to the number of optimised losses, the DDPM training is simpler and includes only one loss. We use the Palette model [11] to implement conditional DDPM. For the \(\mathbf{z}\rightarrow\tilde{\mathbf{x}}\) case, the model's loss is: \[\mathcal{L}^{DDPM}(\varphi)=\mathbb{E}_{t,\mathbf{z},\mathbf{x},\boldsymbol{ \epsilon}}\left[\left\|\boldsymbol{\epsilon}-g_{\varphi}\left(\sqrt{\bar{ \alpha}_{t}}\mathbf{x}+\sqrt{1-\bar{\alpha}_{t}}\boldsymbol{\epsilon},\mathbf{ z},t\right)\right\|^{2}\right],\] where \(\mathbf{x}\) denotes the target image, \(\mathbf{z}\) denotes the digital template used as a conditioning, \(\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) denotes the noise added at step \(t\), \(g_{\varphi}\) stands for the parametrized denoiser model, \(\bar{\alpha}_{t}\) denotes the noise scale parameter [6]. For the \(\mathbf{x}\rightarrow\tilde{\mathbf{z}}\) channel modeling, a similar loss is used, but the digital template \(\mathbf{z}\) is used as the target image, and the model is conditioned by \(\mathbf{x}\). In contrast to Turbo, the DDPM loss does not allow the training on unpaired data. Training and inference stages are iterative and require adapting to many noise levels. Contrary to Turbo, which generates data in a single step, DDPM might need hundreds of steps to produce the final result. Despite this, a notable distinction between DDPM and Turbo lies in the stochastic nature of DDPM, enabling it to generate multiple outputs from a single input, thereby accommodating the intrinsic randomness associated with the printing process. The schematic block diagram of DDPM is shown in Fig. 1. ## III Dataset and training details ### _Dataset_ For empirical evaluation of the models under investigation we used the data acquired by two modern mobile phones and by a high-resolution scanner. The experiments on the scanner data are an extension of our previous work [5]. In this respect, the same Indigo \(1\times 1\) symbol dataset [12]1 was used. This dataset consists of 720 digital templates of size \(228\times 228\) with \(1\times 1\) pixel symbol size. The digital templates have been printed at HP Indigo 7600 industrial printer at a resolution of 812.8 dpi and enrolled by Epson Perfection V850 Pro scanner at a resolution of 2400 dpi. Considering the ratio between the printing and acquisition resolutions, the obtained CDP are of size \(684\times 684\), i.e., \(1\times 1\) pixel in the digital template corresponds to a \(3\times 3\) block in the acquired CDP. The final codes are 16-bit grayscale images. Fig. 3: An example of 2D variability for a randomly selected CDP. We used pixel-wise standard deviation \(\sigma\) to estimate the variability among the generated images. 
For better visual comprehension, we display a central crop that is equal to half the dimensions of the full image. The experiments on the mobile phone data were performed on the recently created Indigo 1x1 variability dataset [13]2 that consists of 1440 digital templates of size \(228\times 228\) with \(1\times 1\) pixel symbol size. The templates have been printed at HP Indigo 5500 industrial printer at a resolution of 812.8 dpi and enrolled by iPhone 12 Pro and Samsung Galaxy Note 20 Ultra cell phones. The obtained CDP images are of size \(228\times 228\) and encoded as 8-bit RGB images. However, for the sake of simplicity, we convert them into grayscale images. Footnote 2: [http://sip.unige.ch/projects/snf-it-dis/datasets/indigo-variability](http://sip.unige.ch/projects/snf-it-dis/datasets/indigo-variability) Both mentioned datasets contain original and fake CDP. For our experiments, we used only the original codes. However, it should be noted that in both cases the fakes were produced on the same printing and acquisition equipment as the original codes. In this respect, the model trained on the original codes can be effectively applied to generate fake codes. ### _Training details_ The Turbo framework architectures' details and training conditions are the same as in [5]. As a DDPM-based framework, we used the Palette model [11] with the UNET architecture inspired by [7]. We modify UNET in the following way: it takes two channels as input, where the first channel is a noise and the second one is used for conditioning; the model incorporates attention with resolutions of 16 and includes two residual blocks per downsampling step. We initialized the model using Kaiming initialization. Additionally, we set the dropout rate to 0.2 to prevent overfitting due to the high similarity between the CDP. We train our models on a single A100 GPU with 80GB of memory with a mini-batch of size 36 for 15000 training epochs. We use a standard Adam optimizer with the \(5e^{-5}\) learning rate and without a learning rate warmup schedule. Similarly to [11], we use 0.9999 EMA but, during the inference, we do not perform the hyper-parameter tuning over noise schedules and refinement steps. During training, we employ the same linear noise schedule of (\(1e^{-6}\), 0.01) with 2000 time-steps and 1000 refinement steps with a linear schedule of (\(1e^{-4}\), 0.09) during inference as in [11]3. Footnote 3: The code and configuration files are publicly available at [https://gitlab.unige.ch/sip-group/stochastic-digital-twin](https://gitlab.unige.ch/sip-group/stochastic-digital-twin) ## IV Results and discussion ### _Palette model stochasticity_ Fig. 2 demonstrates several examples of the diverse outputs of the Palette model produced for the same randomly selected input from the iPhone subset. The top row corresponds to the modeling of printing channel \(\mathbf{z}\rightarrow\tilde{\mathbf{x}}\) and the bottom one shows the digital template estimation channel \(\mathbf{x}\rightarrow\tilde{\mathbf{z}}\). To illustrate the variability in the generated data, we picked the same template and stacked the produced outputs as a 3D tensor, then we calculated the standard deviation in the image dimension, i.e., for each pixel of generated images. The obtained results are visualized in Fig. 3. The left part of the figure shows the results for the \(\mathbf{x}\rightarrow\tilde{\mathbf{z}}\) for the iPhone, Samsung, and scanner, respectively. 
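The per-pixel variability maps of Fig. 3 amount to a single reduction over the stack of generated realizations. A minimal sketch is given below; the array shapes follow the phone-acquired CDP dimensions, while the random stand-in data and variable names are assumptions rather than part of the released code.

```python
import numpy as np

# K stochastic DDPM realizations of one code, stacked along the first axis.
# Shape (K, H, W); a random stand-in replaces the actual generated images.
K, H, W = 21, 228, 228
realizations = np.random.rand(K, H, W).astype(np.float32)

# Pixel-wise standard deviation across the realizations: a 2D variability map.
sigma_map = realizations.std(axis=0)

# Central crop of half the dimensions, as used for the visualization in Fig. 3.
h0, w0 = H // 4, W // 4
sigma_crop = sigma_map[h0:h0 + H // 2, w0:w0 + W // 2]
print(sigma_map.shape, sigma_crop.shape, float(sigma_map.mean()))
```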
It is straightforward to observe numerous regions characterized by diminutive standard deviation, as indicated by the dark blue hue. These regions symbolize the model's degree of confidence in the generated outcomes, which correlates to the conglomerations of white and black pixels. Conversely, the yellow hue denotes areas of heightened standard deviation, reflective of the model's uncertainty. These areas typically align with the transitional regions, manifesting the boundary conditions between different pixel clusters. It is important to note that these features are best seen in the case of the scanner due to the higher acquisition resolution. The right part shows the results for the \(\mathbf{z}\rightarrow\tilde{\mathbf{x}}\) channel. In contrast to the \(\mathbf{x}\rightarrow\tilde{\mathbf{z}}\) channel, the general dynamic range for the obtained standard deviation is about \(3\)-\(5\) times smaller. This can be explained by the fact that the printed images are more continuous, i.e., have a more uniformly distributed histogram, which makes the image synthesis more reliable. For Samsung, the dynamic range of the obtained deviation is about Fig. 4: The x-axis represents the 512 different possible patterns \(\omega\) ordered by their flattened binary representations. The y-axis represents the standard deviation of the central pixel of each pattern for the iPhone. Fig. 5: The x-axis represents the 512 different possible patterns \(\omega\) ordered by their flattened binary representations. The y-axis represents the probability of bit-flipping for the central pixel of each pattern computed from iPhone data. 1.5 times smaller than for iPhone. For the scanner results one also observes a smaller amount of unreliable regions with the edge-transition regions being very well pronounced. It was shown in [14] that the printed pixel's variability depends on the surrounding neighborhood that we refer to as _pattern_\(\omega\), where \(\omega\) denotes a \(3\times 3\) configuration of each pattern. To investigate if the same effect is present in the synthetically generated codes we define \(2^{(3\times 3)}=512\) possible patterns and calculate the standard deviation of the central pixel for each of them through all generated codes. The results obtained for the iPhone dataset are shown in Fig. 4. For Samsung, we observed quite a similar picture. In Fig. 4 we can see the same tendency as in Fig. 3, namely, the general dynamic range of the standard deviation for the \(\mathbf{x}\rightarrow\tilde{\mathbf{z}}\) channel is higher than for the \(\mathbf{z}\rightarrow\tilde{\mathbf{x}}\), i.e., \(0\)-\(0.45\) versus \(0.06\)-\(0.09\). Also, we observe the pattern dependence for both channels but this dependence differs between the channels that is natural. In particular, in the \(\mathbf{x}\rightarrow\tilde{\mathbf{z}}\) channel there are more patterns with the less variable central pixel, i.e., \(\sigma(\omega)\) close to 0. To study the similarity between the real CDP \(\mathbf{x}\) and the synthetic counterparts \(\tilde{\mathbf{x}}\), we compute the probability of bit-flipping for the central pixel for each pattern after Otsu binarization, as suggested in [14]. The results obtained for the iPhone are shown in Fig. 54. We can see that some patterns almost certainly flip with \(P_{b}(\omega)\) close to 1, whereas others produce reliable results with \(P_{b}(\omega)\) close to 0. 
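The pattern statistics of Figs. 4 and 5 can be reproduced with a short script: binarize the acquired (or synthesized) CDP with Otsu's threshold, map every \(3\times 3\) neighborhood of the digital template to one of the \(2^{9}=512\) pattern indices \(\omega\), and accumulate how often the central bit flips. The sketch below uses random stand-in arrays, and the assumed one-to-one alignment between template and image pixels is an illustration choice, not the authors' exact pipeline.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)

# Stand-ins: a binary digital template z and a grayscale CDP x (real or
# synthetic) assumed to be registered one-to-one with the template grid.
z = rng.integers(0, 2, size=(228, 228))
x = np.clip(z + 0.35 * rng.normal(size=z.shape), 0.0, 1.0)

x_bin = (x > threshold_otsu(x)).astype(int)      # Otsu binarization of the CDP

weights = (2 ** np.arange(9)).reshape(3, 3)      # flattened-binary pattern index
flips = np.zeros(512)
counts = np.zeros(512)

for i in range(1, z.shape[0] - 1):
    for j in range(1, z.shape[1] - 1):
        omega = int((z[i - 1:i + 2, j - 1:j + 2] * weights).sum())
        counts[omega] += 1
        flips[omega] += int(x_bin[i, j] != z[i, j])

p_flip = np.divide(flips, counts, out=np.zeros(512), where=counts > 0)
print("patterns seen:", int((counts > 0).sum()), " max P_b(omega):", p_flip.max())
```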
But the most important thing is that the bit-flipping probability for the real \(\mathbf{x}\) perfectly correlates with one for the synthetic \(\tilde{\mathbf{x}}\). Footnote 4: For Samsung we observed the same tendency. ### _Aggregation techniques_ For the Palette model, we study the impact of the number of realizations and different aggregation techniques. The iPhone results are shown in Fig. 65, where Palette\({}_{\text{mean}}\) denotes that the final output is obtained as a mean of predictions, while in Palette\({}_{\text{median}}\) we take a median of predictions, and then the score is calculated for the aggregated prediction. In the case Palette\({}_{\text{mean}}\) of scores, the reference metric is calculated for each prediction and then the mean value of obtained scores is taken. Footnote 5: The results for the Samsung data are similar to iPhone. One can observe that Palette\({}_{\text{mean}}\) of scores error is almost constant, while the errors for Palette\({}_{\text{mean}}\) and Palette\({}_{\text{median}}\) are close to each other and decrease as the number of realizations increases. This can be explained by the fact that from one side the global error is the same at each realization but the local errors appear in different positions. The aggregation allows us to reduce them but, at the same time, it leads to the loss of stochasticity. In the same plot, one can see the results for pix2pix and TURBO\({}_{\text{UNET}}^{\text{paired (w/o $\mathcal{D}$)}}\) models, but these models are deterministic, and their results are not impacted by the number of realizations. Palette easily outperforms pix2pix on all metrics after 5-7 realizations. In terms of Hamming distance and SSIM Palette is not capable of outperforming Fig. 6: Impact of the number of realizations on different metrics for the iPhone dataset. \(\text{TURBO}_{\text{UNET}}^{\text{paired (w/o $\mathcal{D}$)}}\). For MSE, Palette needs at least \(10\)-\(20\) realizations to surpass Turbo. ### _Models general performance_ For further performance evaluation, we use \(\text{Palette}_{\text{mean}}\). The inference time for a single realization of the Palette model applied to 280 test images is approximately 30 minutes. In contrast, the Turbo model achieves an inference time of just 15 seconds on the same GPU for the same dataset. Considering the inference time complexity, we found that using 21 realizations strikes a good balance between reasonable inference execution time, stochasticity preservation, and the accuracy of the final result. We compare the performance of the Palette in this configuration with the state-of-the-art pip2pix [8], CycleGAN [9], \(\text{TURBO}_{\text{CNN-RESE-CNN}}^{\text{paired (w/o $\mathcal{D}$)}}\) on the same set of metrics as in [5]. _W/O processing_ setup is used to estimate the baseline performance where we assume \(\tilde{\mathbf{z}}=\mathbf{x}\) and \(\tilde{\mathbf{x}}=\mathbf{z}\), i.e., an ideal printing-imaging channel without any distortions. The results obtained for the data enrolled by the iPhone and Samsung mobile phones are given in Tables I and II respectively. It should be noted that the results obtained for the FID metric for the \(\mathbf{x}\rightarrow\tilde{\mathbf{z}}\) and \(\mathbf{z}\rightarrow\tilde{\mathbf{x}}\) channels are quite unstable and differ a lot between the models. This can be explained by the fact that FID was developed for natural images while, in our case, all CDP significantly differ from them. The other metrics demonstrate more coherent results. 
For both mobile phones, the MSE results are almost identical for Palette and both Turbo configurations. In terms of SSIM, the results are also very close, with a slight \(\text{TURBO}_{\text{UNET}}^{\text{paired (w/o $\mathcal{D}$)}}\) superiority. \(\text{TURBO}_{\text{UNET}}^{\text{paired (w/o $\mathcal{D}$)}}\) also outperforms the other models in terms of Hamming distance. The results for the remaining models exhibit marginal inferiority. The results obtained for the data enrolled by the scanner are given in Table III, and the general tendency is the same, namely: the FID behavior is very unstable; CycleGAN demonstrates the worst results; the results for Palette and Turbo are quite close, but Palette is slightly superior to Turbo on the Hamming distance, albeit at the cost of a significantly higher inference complexity. ## V Conclusion The current work is a continuation of our previous study [5] related to the modeling of complex physical printing-imaging processes using a machine learning-based model known as a _digital twin_ for anti-counterfeiting applications based on CDP. The current work is dedicated to the investigation of the applicability of DDPM to such modeling. Our main interest was to explore the stochasticity of DDPM. The obtained results show that the synthetic digital templates and CDP images are close enough to the real ones in terms of the considered metrics. Moreover, the synthetic CDP images produced by DDPM fully reflect the natural randomness of the printing process. This makes the DDPM-based model a suitable candidate for the role of a synthetic generator. The general performance of the studied Palette model is comparable to that of the Turbo framework. The main drawback of the DDPM is the computational complexity of the inference stage. The investigation of more advanced sampling techniques at the training and inference stages to reduce the DDPM complexity is the main direction of our future work.
2301.00198
An Integrated Visual System for Unmanned Aerial Vehicles Tracking and Landing on the Ground Vehicles
The vision of unmanned aerial vehicles is very significant for UAV-related applications such as search and rescue, landing on a moving platform, etc. In this work, we have developed an integrated system for the UAV landing on the moving platform, and the UAV object detection with tracking in the complicated environment. Firstly, we have proposed a robust LoG-based deep neural network for object detection and tracking, which has great advantages in robustness to object scale and illuminations compared with typical deep network-based approaches. Then, we have also improved based on the original Kalman filter and designed an iterative multi-model-based filter to tackle the problem of unknown dynamics in real circumstances of motion estimations. Next, we implemented the whole system and do ROS Gazebo-based testing in two complicated circumstances to verify the effectiveness of our design. Finally, we have deployed the proposed detection, tracking, and motion estimation strategies into real applications to do UAV tracking of a pillar and obstacle avoidance. It is demonstrated that our system shows great accuracy and robustness in real applications.
Kangcheng Liu
2022-12-31T13:41:00Z
http://arxiv.org/abs/2301.00198v1
An Integrated Visual System for Unmanned Aerial Vehicles Tracking and Landing on the Ground Vehicles ###### Abstract The vision of unmanned aerial vehicles is very significant for UAV-related applications such as search and rescue, landing on a moving platform, etc. In this work, we have developed an integrated system for the UAV landing on the moving platform, and the UAV object detection with tracking in the complicated environment. Firstly, we have proposed a robust LoG-based deep neural network for object detection and tracking, which has great advantages in robustness to object scale and illuminations compared with typical deep network-based approaches. Then, we have also improved based on the original Kalman filter and designed an iterative multi-model-based filter to tackle the problem of unknown dynamics in real circumstances of motion estimations. Next, we implemented the whole system and do ROS Gazebo-based testing in two complicated circumstances to verify the effectiveness of our design. Finally, we have deployed the proposed detection, tracking, and motion estimation strategies into real applications to do UAV tracking of a pillar and obstacle avoidance. It is demonstrated that our system shows great accuracy and robustness in real applications. ## I Introduction and Related Work Visual tracking and detection play a key role in all kinds of UAV navigation applications [1]. It has a wide range of applications such as UAV search and rescue, UAV detection and tracking, UAV surveillance and environmental monitoring [2], security surveillance, geographical mapping, power-line, and pipeline inspection, an autonomous inspection of large-scale bridges, warehouse management, and logistic delivery. However, the visual tracking and detection of the target objects are of great significance to improve the autonomy of unmanned aerial systems. However, some great challenges remain. First, the detection of the target object needs to be realized in real-time for the UAV. Currently, most current deep neural network-based approaches merely focus on the development of sophisticated network architecture, and pre-training algorithms to solve the detection in diverse modalities. However, the efficient algorithms which can be deployed on the UAV platform have not been sufficiently explored. Also, the robustness is poor when faced with low illuminations and rapid rotations. In order to track the UAV effectively, we need to do the motion estimation and tracking of the target object to perform the landing task. The Kalman Filter [3]-[5] has been proven extensively to be an effective approach to achieving estimation of some dynamic variables given the sensor measurements observed over a period of time. However, the traditional Kalman Filter can not tackle the problem of sensor noise as well as the nonlinear motion patterns of the target. Yang et al. use a fuzzy logic complementary Kalman Filter (KF) based on visual and IMU data for the landing of the UAV [6]. Yuan Wei et al. [7] use the radars installed on vehicles or the UAVs for tracking applications. Ashraf Qadir et al. use the onboard visual tracking system to implement a Kalman Filter-based visual tracking system, and the system is capable of continuously detecting the object if the tracking failure occurs [8]. But the real flight test of them remains the future work. Zhao [9] et al. proposes a visual ground target tracking strategy for the rotorcraft UAV. Oh, et al. 
propose an autonomous visual tracking algorithm with Extended Kalman Filter (EKF) for micro aerial vehicles [10]. They have proposed an efficient object-tracking algorithm for UAVs, and effective ground object tracking can be achieved. Recently, various learning-based methods have been proposed for object segmentation, detection and tracking [11]-[18], but the deep learning-based methods suffer greatly from poor generalization capacity and large computational and memory cost [19]-[21]. To tackle the problems above in vision-based object detection and tracking, we have proposed to use a Laplacian of Gaussian filter to construct a convolutional network, which makes it more appropriate for real-time object detection. It is demonstrated that our method can achieve real-time performance in an unknown environment. The recognition rate reaches 45 frames per second, which fulfills the real-time requirements. UAV vision is fundamental to all related applications [15], [22], [23]. However, great challenges remain. The first is that the typical visual detection system cannot handle rotations of the targeted object and low illumination, which makes the subsequent tracking and landing difficult. The second is that the previous Kalman filter-based motion estimation suffers from low accuracy and will greatly decrease the success rate in fulfilling the task of tracking and landing. Fig. 1: The Detailed System Framework of the Computer Vision System for ROS-based UAV Target Tracking and Landing. The color blue indicates our proposed modules and the color green indicates other modules to fulfill the tracking and landing task. As shown in Fig. 2, to tackle the challenges mentioned above, in this paper, we have proposed an integrated system for UAV tracking and landing applications. Taking the RGB-D images as input, we utilize our proposed Laplacian of Gaussian (LoG) filter to construct the deep neural networks to perform the object detection, which achieves robustness and accuracy under low illumination. We have proposed to use iterative multi-model methods based on the original Kalman Filter to improve the accuracy in tracking and motion estimation. Also, we have integrated our proposed approach with other robotics modules such as SLAM and motion/task planning as a whole system to perform UAV-based tracking and landing of the target objects in real applications. The deep learning-based methods have been demonstrated to be very effective in object recognition and tracking [11, 13, 14, 22, 24, 25, 26, 27]. In summary, we have the following prominent contributions: 1. We have proposed a general network for object detection and integrated it with ROS for real robotics search and rescue applications. Moreover, we have integrated the Laplacian of Gaussian (LoG) filter into the deep neural networks, and it is demonstrated that the LoG-based method has a great advantage in robustness to object scale and illuminations. 2. We have done real experiments to demonstrate the effectiveness of our proposed approach. It turns out that the IMM-based filter in motion estimation shows satisfactory accuracy under various circumstances. We have also done real UAV experiments to demonstrate the effectiveness of our design. 3.
We have also integrated our method with the point clouds segmentation methods for dynamic objects removal, and also we have integrated the proposed approach with motion planning approaches, which realize the real demos of the UAV landing on the UGV moving platform, as well as UAV based motion estimation of the moving/boxes pillars. ## II Proposed Methodology In this work, we have two prominent contributions. The first is that we proposed Laplacian of Gaussian (LoG) filter to construct the deep neural networks to perform the object detection. The second is that we have proposed to use iterative multi-model methods based on the original Kalman Filter to improve the accuracy in tracking and motion estimation. The details of these two contributions will be illustrated in detail in the remaining of this Section. ### _The Iterative Multi-model Filter for dynamic object tracking_ #### Ii-A1 Kalman Filter Kalman filter is an algorithm that uses linear system state equations to optimally estimate system state through system input and output observation data. Since the observation data includes the influence of noise and interference in the system, the optimal estimation can also be regarded as a filtering process. The core meaning is that the Kalman filter can estimate the state from the noisy data process very well. Moreover, the Kalman filter is also one of the breakthrough technologies used in Apollo's moon landing. Kalman filtering is also a recursive filtering algorithm based on state space in the time domain, which is easy to be realized in real time on computer, and has a small amount of computation and storage. This method can deal with the filtering problem of multi-variable non-stationary random processes and the filtering problem of time-varying systems. For example, in the course of the flight, the disturbance encountered by an aircraft is usually time-varying non-stationary noise. At this time, the Kalman filter can be used to effectively remove the interference and obtain more real state estimation data. To improve the detection and tracking performance, and improve the precision of positional prediction, in our application, we also develop the real-time object tracking algorithm based on the basic idea of the Kalman filter. The Kalman filter is utilized in this experiment to remove the noise in the observer and controller of the control system and minimize the number of squared errors. The advantage of the Kalman filter is that it can make the optimal state estimation of the system by taking advantage of measurement data. Kalman filter is essential because the noise and disturbance in the system influence the true data in the measurement. For autonomous Unmanned aerial vehicles in our applications, denote the initial state matrix of the UAV as \(X\), the initial process covariance matrix as \(P\), which denotes the error in the state estimation. Next, the initial state becomes previous. Utilize subscript \(K\) to represent each state in the iteration cycle. In the next time step, the current state becomes the previous one. Then the new state can be predicted based on the physical model and previous state, which can be formulated as: \[X_{K}=AX_{K-1}+BU_{K-1}+W_{K-1} \tag{1}\] \[Z_{k}^{{}^{\prime}}=AP_{K-1}A^{T}+Q_{k} \tag{2}\] The matrix \(A\) is the \(n\times n\) system matrix. The matrix \(B\) represents the function of the input to state, which is called the input matrix or control matrix. The matrix \(W_{K-1}\) denotes the predicted state noise matrix. 
Where the matrix \(U\) denotes the control variable matrix, \(Q\) denotes the process noise covariance matrix, which keeps the state covariance matrix from becoming too small or going to zero. Denote \(P_{k}^{{}^{\prime}}\) as the prior estimation of the state, it can be represented as: \[P_{k}^{{}^{\prime}}=AP_{K-1}A^{T}+Q_{k} \tag{3}\] Let \(A,B\) and \(C\) denote the adaptation matrices, which convert the input state to the process state. And \(Y\) denotes the measurement of state, \(Z\) denotes the measurement noise. Then the measurement from sensors can be denoted as: \[Y_{k}=CX_{K}^{\star}+Z_{k} \tag{4}\] According to the, we can calculate the **Kalman Gain**\(K\) as: \[K=\frac{P_{k}^{{}^{\prime}}H}{HP_{k}^{{}^{\prime}}H^{T}+R} \tag{5}\] Where \(K\) is the **Kalman Gain**, \(R\) is the sensor noise or the measurement covariance matrix. And \(H\) is the conversion matrix to make the size consistent. Then the state update can be formulated as: \[X_{k}=X_{k}^{{}^{\prime}}+K(Y_{k}-HX_{k}^{{}^{\prime}}) \tag{6}\] And the covariance update can be formulated as: \[P_{k}=(I-KH)P_{k}^{{}^{\prime}} \tag{7}\] Where \(I\) is the identity matrix. Then for the \(k_{th}\) time step, the state matrix \(X_{k}\) and the process covariance matrix which represents an error in the estimate can be obtained. Note that the adjustment of Kalman Gain is essentially the adjustment of the noise value of \(Q\) and \(R\). Note that: 1. The smaller the \(K\) is, we can trust more on the estimation of the model. 2. The bigger the \(K\) is, we can trust more on the estimation of the sensors. 3. The value of \(K\) is related to the accuracy of the sensors and the error within the environment. It can be seen from the Eq. 6 that, we directly use the difference between the prediction value and the estimation value. We use the parameter \(K\) to determine we trust more on the observation value \(Z\) or the prediction value \(X\). We apply the Kalman filter to the horizontal position and velocity of the UAV on the world coordinate. Because the x-direction and the y-direction of the horizontal coordinate are independent of each other, we merely need to set a directional state in the Kalman Filter. The state \(x\) can be the position in the UAV-based visual tracking of moving object application using RGB-D camera. The position and velocity of the ground vehicle to be tracked can be set as: \[x=\left[\begin{array}{c}x\\ vx\end{array}\right] \tag{8}\] The next state estimation of the vehicle can be represented as: \[x_{k+1}^{-}=\left[\begin{array}{c}x_{k+1}^{-}\\ vx_{k+1}^{-}\end{array}\right]=\left[\begin{array}{c}x_{k}+vx_{k}\Delta t+w _{k}\\ vx_{k}+w_{v,k}\end{array}\right] \tag{9}\] Then we can also obtain the system matrix \(A\) from the partial derivative of \(f\) with respect to \(x_{k}\) as follows: \[A=\left[\begin{array}{cc}1&\delta t\\ 0&1\end{array}\right] \tag{10}\] Then we can also obtain the \(P_{k}^{{}^{\prime}}\), which is the prior estimation of the state, it can be represented as: \[P_{k}^{{}^{\prime}}=AP_{K-1}A^{T}+Q_{k} \tag{11}\] Also, the \(K,X_{k},P_{k}\) can be obtained which are the **Kalman Gain**, the state, and the covariance update. Then the motion estimation of the target UGV can be achieved. #### Iii-B2 Proposed Iteractive Multi-model (IMM) Methods However, in real systems, the object may have great mobility, and it may take sudden turning and acceleration. Merely utilizing the original Kalman Filter may not realize the most ideal results, and adaptive methods must be taken. 
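As a concrete reference for the filtering pipeline described above, the following sketch (a generic textbook implementation under a constant-velocity model, not the code used on the UAV) runs the predict/update recursion for the state \([x, v_{x}]\) and shows how an IMM-style output is obtained as a probability-weighted average of per-model estimates; the noise settings, timestep, and measurements are illustrative assumptions.

```python
import numpy as np

def kf_predict(x, P, A, Q):
    """Prediction step: propagate state and covariance with the motion model."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update step: correct the prediction with the position measurement z."""
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain (textbook form)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

dt = 0.05                                         # assumed camera frame period [s]
A = np.array([[1.0, dt], [0.0, 1.0]])             # constant-velocity model for [x, vx]
H = np.array([[1.0, 0.0]])                        # only the position is measured
Q = np.diag([1e-4, 1e-2])                         # process noise (tuning assumption)
R = np.array([[1e-2]])                            # measurement noise (tuning assumption)

x, P = np.zeros(2), np.eye(2)
for z in [0.10, 0.12, 0.15, 0.21]:                # toy position measurements [m]
    x, P = kf_predict(x, P, A, Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
print("estimated position/velocity:", x)

# IMM flavour: fuse two motion models by their model probabilities mu
x_cv, x_ca = x, x + np.array([0.0, 0.05])         # placeholder per-model estimates
mu = np.array([0.7, 0.3])                         # model probabilities (illustrative)
x_imm = mu[0] * x_cv + mu[1] * x_ca               # probability-weighted fused output
```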
The iterative multi-model (IMM) methods [28] overcome the limitations mentioned above. The output of the filter is the weighted average of the estimations from multiple filters, where the weight is the probability that the corresponding model correctly describes the motion at the current moment. Fig. 2: Detailed illustration of the proposed object detection system. The green parts are the user configuration and input configuration of the detailed model. The blue part is the deep learning-based model. The orange part is for the visualization and publishing of our detection results. And the red part is for the image or video stream input. #### Ii-A3 Proposed Laplacian of Gaussian-based Object Tracking Methods We have proposed a Laplacian of Gaussian-based object tracking method by fusing the Laplacian of Gaussian (LoG) operator into modern deep neural networks such as ResNeXt. The ResNeXt-based network has acceptable efficiency, and we have designed methods to integrate the LoG operator seamlessly into the modern ResNeXt architecture. We have utilized some of our previous network designs mentioned in our FG-Net [11]-[14], [29]-[31]. According to our experiments, utilizing our proposed Laplacian of Gaussian operator, the performance of object detection and tracking can be significantly boosted, while the real-time performance and inference speed are maintained in fulfilling the tasks of aerial UAV-based object detection and tracking. ## III Experimental Results The co-moving of the UAV and UGV can be achieved with great robustness. Also, as shown in Fig. 5 and Fig. 6, the UAV can land very precisely, thanks to the accurate motion estimations of position and velocity. The demos of the UAV tracking of the UGVs through various obstacles in road circumstances are illustrated in Fig. 5 and Fig. 6. It is demonstrated that the UAV can take off and approach the target autonomously, and then track the target UGV continuously among the obstacles. The UAV and the UGV can navigate intelligently through the obstacles, and finally, the UAV can find the way out together with the UGV. The real experiments of UAV landing on the moving platform are also shown in Fig. 5 and Fig. 6; it can be seen that the UAV achieves accurate and robust landing with our proposed approaches. We have compared the tracking trajectory of the UAV with the ground-truth trajectory of the target, and the UAV realizes very precise tracking of the target. This further demonstrates that our proposed approach can successfully fulfill the UAV tracking and landing task with satisfactory accuracy and robustness in motion estimation. ## IV Conclusions In this work, we have proposed an integrated visual system for unmanned aerial vehicles landing on the targeted moving UGV platform. We have proposed a systematical design of the vision systems for the UAV tracking and landing applications. Firstly, we have integrated the Laplacian of Gaussian filter into deep neural networks. It has shown great merits in robustness to rotation and low illumination. Secondly, we have designed an iterative multi-model Kalman filter adapted based on the original Kalman Filter for object tracking, which achieves great accuracy.
Finally, we have integrated our proposed approach with other robotics modules such as SLAM and motion/task planning as a whole system to perform UAV tracking and landing in real applications. UAV vision is important for a broad range of aerial robotic tasks [34, 35], and our integrated visual system is therefore highly relevant for future UAV-based detection and tracking applications such as autonomous landing on moving vehicles.
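To make the LoG-based design element tangible, the sketch below (our own illustration, not the authors' network) builds a discrete Laplacian of Gaussian kernel and installs it as a frozen depthwise convolution in PyTorch, the way a hand-crafted operator can be placed in front of a learned detection backbone; the kernel size and \(\sigma\) are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian of Gaussian (LoG) kernel of shape (size, size)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return torch.tensor(k - k.mean(), dtype=torch.float32)   # zero-mean response

class FixedLoG(nn.Module):
    """Applies the same LoG filter to every input channel (depthwise, frozen)."""
    def __init__(self, channels, size=9, sigma=1.4):
        super().__init__()
        w = log_kernel(size, sigma).repeat(channels, 1, 1, 1)  # weight shape (C,1,k,k)
        self.conv = nn.Conv2d(channels, channels, size, padding=size // 2,
                              groups=channels, bias=False)
        self.conv.weight.data.copy_(w)
        self.conv.weight.requires_grad_(False)                 # hand-crafted, not learned

    def forward(self, x):
        return self.conv(x)

# Example: LoG response of an RGB frame before it enters a learned backbone
frame = torch.rand(1, 3, 224, 224)
edges = FixedLoG(channels=3)(frame)
print(edges.shape)  # torch.Size([1, 3, 224, 224])
```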
2301.01286
Pseudo-Inverted Bottleneck Convolution for DARTS Search Space
Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based neural architecture search method. Since the introduction of DARTS, there has been little work done on adapting the action space based on state-of-art architecture design principles for CNNs. In this work, we aim to address this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between accuracy, evaluation layer count, and computational cost. We introduce the Pseudo-Inverted Bottleneck Conv (PIBConv) block intending to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is much less sensitive to evaluation layer count and outperforms a DARTS network with similar size significantly, at layer counts as small as 2. Furthermore, with less layers, not only does it achieve higher accuracy with lower computational footprint (measured in GMACs) and parameter count, GradCAM comparisons show that our network can better detect distinctive features of target objects compared to DARTS. Code is available from https://github.com/mahdihosseini/PIBConv.
Arash Ahmadian, Louis S. P. Liu, Yue Fei, Konstantinos N. Plataniotis, Mahdi S. Hosseini
2022-12-31T22:56:04Z
http://arxiv.org/abs/2301.01286v3
# Pseudo-inverted Bottleneck Convolution for Darts Search Space ###### Abstract Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based neural architecture search method. Since the introduction of DARTS, there has been little work done on adapting the action space based on state-of-art architecture design principles for CNNs. In this work, we aim to address this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between accuracy, evaluation layer count, and computational cost. We introduce the Pseudo-inverted Bottleneck Conv (PIBConv) block intending to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is much less sensitive to evaluation layer count and outperforms a DARTS network with similar size significantly, at layer counts as small as \(2\). Furthermore, with less layers, not only does it achieve higher accuracy with lower computational footprint (measured in GMACs) and parameter count, GradCAM comparisons show that our network can better detect distinctive features of target objects compared to DARTS. Code is available from [https://github.com/mahdihosseini/PIBConv](https://github.com/mahdihosseini/PIBConv). Arash Ahmadian1, 1 Louis S.P. Liu1, 1 Yue Fei1, Konstantinos N. Plataniotis1, Mahdi S. Hosseini2\({}^{1}\)The Edward S. Rogers Sr. Department of Electrical & Computer Engineering, University of Toronto, Canada 2Computer Science and Software Engineering (CSSE), Concordia University, Canada Footnote 1: Equal Contribution ## 1 Introduction Since the introduction of Vision Transformers (ViTs) by _Dosovitskiy et al._[1], a new class of research has emerged, pushing the boundaries of Transformer-based architectures on a variety of computer vision tasks [2, 3, 4, 5]. These advances make it seem inevitable that ViTs would overtake conventional Convolutional Neural Networks (CNNs). Recently, _Liu et al._'s ConvNeXt [6] has sparked a resurgence in further exploring the architectural designs of CNNs in image recognition. Specifically, they argued that by adapting components from Transformers into the standard ResNet backbone [7], the trained models can match or outperform state-of-the-art ViTs in image classification, objection detection, and segmentation. If CNNs can still be improved by design elements that were previously overlooked, this begs the question: _Can we apply the same Transformer principles to a Neural Architecture Search (NAS) framework to improve its performance?_ NAS has historically seen immense success on large-scale image classification prior to ViTs [8, 9, 10] as it alleviates the task of manually designing for the optimal neural network architecture. Early works of NAS employed Reinforcement Learning [11], Evolutionary Search [12], and Bayesian Optimization [13] while more recent works have shifted to the One-Shot NAS paradigm [14]. One popular branch stream of NAS is Differentiable Architecture Search (DARTS) [15]. It relaxes the search space from discrete to continuous by attributing weights to each operation in the set using a _Softmax_ function and choosing the best candidate. In DARTS, a _\(n\)-layer_ network is constructed by replicating a _normal cell_, \(n\) times and adding _reduction cells_ at the \(1/3\) and \(2/3\) of the total depth with. We refer the reader to [15] for more details. 
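To make the continuous relaxation used by DARTS concrete, the sketch below (illustrative only, with a reduced operation set rather than the full DARTS primitives) shows how a mixed operation weights every candidate primitive on an edge with a softmax over learnable architecture parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A reduced candidate set for illustration; DARTS uses 8 primitives per edge.
def make_ops(C):
    return nn.ModuleList([
        nn.Identity(),                                  # skip_connect
        nn.Conv2d(C, C, 3, padding=1, bias=False),      # stand-in conv operation
        nn.MaxPool2d(3, stride=1, padding=1),           # max_pool_3x3
        nn.AvgPool2d(3, stride=1, padding=1),           # avg_pool_3x3
    ])

class MixedOp(nn.Module):
    """One DARTS edge: output = sum_k softmax(alpha)_k * op_k(x)."""
    def __init__(self, C):
        super().__init__()
        self.ops = make_ops(C)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))  # architecture params

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)                # continuous relaxation of the choice
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def best_op(self):
        # After search, the edge is discretized to the highest-weight operation.
        return int(self.alpha.argmax())

x = torch.rand(2, 16, 32, 32)                           # CIFAR-10-like feature map, C=16
edge = MixedOp(C=16)
print(edge(x).shape, "chosen op index:", edge.best_op())
```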
Several works investigate improving the NAS operation space using methods such as increasing the granularity of operations by breaking down search units across input channels [16], grouping similar operations to combat the effects of multi-collinearity [17], creating more expressive operations by replacing the DFT matrices in convolution's diagonalization with K-matrices [18], and reducing the operation set [19]. Here, we investigate optimizations to the search space through a different set of lens by drawing inspiration from ConvNeXt. We start with the second-order DARTSV2 cell (vanilla) structure and incrementally augment the search operations by adapting design elements from ConvNeXt. For each stage, we conduct search and evaluation phases on CIFAR-10 [20] using the same training setup and hyper-parameters as DARTS [15]. In our experiments, we encountered a large increase in parameter count when directly adopting the ConvNeXt convolution block with hindering performances. To combat this, we propose Pseudo-Inverted Bottleneck Convolution (PIBConv) structure to incorporate an inverted bottleneck while minimizing model size. Our proposed architecture is much less sensitive to evaluation layer count and achieves better test error than the original DARTSV2 with comparable parameter count and computations. We further demonstrate its effectiveness by performing a GradCAM [21] analysis, showing that it is able to capture prominent image features at 10 layers vs. a 20-layer DARTSV2. Our contributions are: [**C1.**] We present an incremental experiment procedure to evaluate how design components from ConvNeXt impact the performance of DARTS by redesigning its search space. [**C2.**] We introduce PIBConv block to implement an inverted bottleneck structure while minimizing model footprint and computations. This outperforms vanilla DARTSV2 with lower layer count, parameter count, and GMACs. ## 2 Methodology Our approach to modernizing the DARTS operation set involves incrementally making micro-changes to the design of the separable conv block used within DARTS. However, not all changes proposed in ConvNeXt can be transferred to DARTS. (1) _Changing the stage compute ratio_ to match that of the Swin Transformer [3] is not applicable as it would require major restructuring of the DARTS framework (_i.e._ changing the placement of reduction cells) which is beyond our scope of updating the operation set. (2) _Modifying the stem cell_ to mimic the "patchify" operation in Swin is not applicable since a \(4\times\) downsampling is too aggressive for the \(32\times 32\) images in CIFAR-10. With every change, we search for a cell structure (or _genotype_), under hyper-parameter settings described in Section 4 and evaluate on different layer counts (1). We compare the highest achieved accuracies and corresponding GMACs. Below we present this exploration step by step. Note that incremental changes are accumulated from step-to-step unless otherwise stated explicitly. **Replacing ReLU with GELU** We replace the widely used ReLu [22] activation with GELU [23] which provides an approximation of the former with the key distinction that a small portion of negative signals are let through to the next layer. _This boosts the accuracy by \(0.12\%\) and from now on we use GELU instead of ReLU_. **Replacing BatchNorm with LayerNorm** There have been multiple attempts to develop an alternative to normalization however it remains a key ingredient in modern NN design [24]. 
In ConvNeXt, replacing BN with LN slightly improves the accuracy of the network. We replace BatchNorm [25] with LayerNorm [26] in our separable convolution operation. Initially, this results in minor degradation in accuracy. We also experiment with retaining LN and adding the various micro-changes proposed in this section. We did not achieve a performance boost from LN in any setting. _We will use BN instead of LN_. **Adapting the ConvNeXt Block** Vanilla DARTS uses depthwise separable convolution as popularized by Xception [27]. The stacked topology used in DARTS is depicted in Fig. 2a. However, the inverted bottleneck popularized by MobileNetV2 [28] has made its way to multiple modern networks [8, 29] and thus warrants exploration in the DARTS framework. We implement the ConvNeXt block structure in Fig. 2b (refer to [6] for further details on the reasoning behind the architectural design choices). It consists of three key changes: (1) Reducing the number of activation and normalization functions, (2) Adapting to an inverted bottleneck structure, and (3) Moving up the depthwise separable conv layer to facilitate training with large kernel sizes. However, directly adapting the ConvNeXt block significantly increases the number of parameters and GMACs while sharply decreasing accuracy. To manage the number of learnable parameters, we introduce PIBConv (Pseudo-Inverted Bottleneck Conv block) as depicted in Fig. 2. We add a depthwise convolution after the intermediate pointwise conv layer which reduces the number of channels. We keep the positions of the activation and normalization the same relative to the next layer based on the ConvNeXt block. This structure also inhibits the stacked architecture which has been shown to increase accuracy by \(1-2\%\) when introduced to separable convolution-based operations in the DARTS framework [15] (which the vanilla inverted bottleneck does not have), as well as an inverted bottleneck structure. We compare the number of weights per block to estimate the parameter size and computational complexity of both networks. Define \(C\) to be the input and output channel size, \(C_{inv}\) to be the inverted bottleneck channel size, and \(K\) to be the kernel size of the depthwise convolution. Similarly, define \(F=C_{inv}/C\) to be the inverted bottleneck ratio for the first pointwise convolution. The total number of weights between the ConvNeXt block (1) and our PIBConv block (2) are com Figure 1: Roadmap of the incremental augmentations described in Section 3, along with their corresponding accuracies and methodologies. Figure 2: Convolution Blocks : **(a)** DARTS Separable Convolution Block; **(b)** Inverted Bottleneck ConvNeXt Convolution Block (\(C_{inv}=C\times 4\)); **(c)** Pseudo-Inverted Bottleneck Cell (\(C_{inv}=C\times 2\)) pared below: \[2FC^{2}+K^{2}C \tag{1}\] \[(F+1)C^{2}+2K^{2}C \tag{2}\] In practice, the dominant variable in both equations is the channel size \(C\), which is initialized to \(16\) and doubled at each reduction cell. Additionally, the conv operation dominates both DARTSV2 and our searched genotypes. Thus, comparing the coefficients of the quadratic term \(C^{2}\) provides an estimate for the difference in parameter size and computational complexity of these networks. Our PIBConv block has approximately \(0.63\) times the number of weights as the ConvNeXt block. We further choose \(F=2\) in the final block topology after experimentation with various values in \(\{1.5,4.5\}\) since it achieved the best accuracy-GMAC trade-off. 
_The use of the Pseudo-Inverted Bottleneck block boosts the accuracy by \(0.4\%\)_. ## 3 Experiments **Experimental Setup** We present our hyperparameter settings and experimental setup next. Following the DARTSV framework, we search with an initial channel size of \(16\), \(4\) nodes, \(8\) layers, \(50\) epochs, and a batch size of \(64\). We use the SGD optimizer coupled with a cosine-annealing learning rate scheduler (no restarts) [30], \(0.0025\) initial learning rate, \(3e-4\) weight decay, and \(0.9\) momentum. As for the evaluation phase, we train for \(600\) epochs with a batch size of \(96\), cutout augmentation [31], path dropout with probability \(0.2\) and auxiliary towers with \(0.4\) weight. Other hyper-parameter settings remain the same as the search phase. Both our search and evaluation phases are performed on CIFAR-10. **Search Phase** Our final operation set after the incremental changes described previously is comprised of the following \(10\) operations: _none_, _skip_connect_, _pib_conv_3x3_, _pib_conv_5x5_, _pib_conv7x7_, _diated_conv3x3_, _dialated_conv5x5_, _conv7x1_1x7_, _max_pool3x3_, _avg_pool3x3_. We argue that our genotype is trained to convergence with 50 epochs and avoids a common pitfall of falling back on skip-connections in later stages of training [32]. As depicted by Fig. 3, the decision boundary between the favored operation (in this case, pib_conv_5x5) and skip-connection, is not crossed even very late into training. After searching with the mentioned hyperparameters and final operation set, we arrive at the genotype in Fig. 4. **Evaluation Phase** We evaluate our final genotype at multiple evaluation layers to observe the effect of layer count on test accuracy and report the results in Table 1. We observe that the evaluation accuracy of our proposed genotype is significantly less affected by the evaluation layer count compared to DARTSV2. Specifically, at _10 layers_, we achieve a higher test accuracy compared to a \(20\) layer DARTSV2 network. Furthermore, at \(2\) layers, our architectures exceed the DARTSV2 genotype at \(3\) layers by over \(20\%\), while at the same time maintaining similar GMACs. At \(4\) layers, we outperform the DARTSV2 genotype at \(7\) layers (to match the model size for a fair comparison) by \(0.24\%\), while still maintaining lower GFLOPs. Fig. 5 presents a comparison between the GradCAM [21] visualizations produced from the last cell of each network for DARTSV2 at \(20\) layer, Our genotype at \(10\) and \(20\) layers. Our proposed genotype, in a \(10\) cell network, can effectively capture the prominent features of the classification. The increase in the number of cascaded cells leads to the gradual collapse of the heat-map boundaries, onto the outline of the object outperforming DARTS. We argue that this supports our claim that the proposed genotype, is inherently superior to that of DARTS. ## 4 Conclusion In this work, we attempt to revise the DARTS search space. We incrementally augment the convolution operation with micro-changes inspired by ConvNeXt and propose the Pseudo-Inverted Bottleneck block to reduce the number of parameters used in the vanilla Inverted Bottleneck. Our proposed genotype's performance is much less sensitive to evaluation layer count compared to that of DARTSV2. It achieves a higher accuracy at a lower GMAC/ parameter count with \(10\) evaluation layers compared to DARTSV2 evaluated at \(20\) layers. 
Furthermore, we perform a GradCAM visualization on our genotype and compare it with that of DARTSV2. Our network's high performance at lower layer counts, together with its low GMACs and parameter count, makes it an attractive choice for (a) image processing applications such as sharpening and blurring, as shallow networks suit these applications best; and (b) designing lightweight network frameworks for efficient representation learning on edge devices.

| Genotype | Eval. Layers | Test Acc. (%) | Params (M) | GMAC |
|---|---|---|---|---|
| DARTSV2 | 20 | 97.24 | 3.30 | 0.547 |
| | 15 | 96.93 | 2.28 | 0.408 |
| | 10 | 96.72 | 1.6 | 0.265 |
| | 8 | 96.32 | 1.15 | 0.207 |
| | 7 | 96.05 | 1.05 | 0.180 |
| | 6 | 95.73 | 0.635 | 0.153 |
| | 5 | 94.56 | 0.605 | 0.121 |
| | 4 | 93.74 | 0.487 | 0.090 |
| | 3 | 71.68 | 0.116 | 0.067 |
| | 2 | 54.52 | 0.082 | 0.035 |
| PIBConv | 20 | 97.76 | 6.06 | 0.969 |
| | 15 | 97.40 | 4.21 | 0.724 |
| | **10** | **97.29** | **3.02** | **0.470** |
| | 8 | 97.15 | 2.26 | 0.369 |
| | 7 | 97.03 | 2.07 | 0.320 |
| | 6 | 96.86 | 1.36 | 0.275 |
| | **5** | **96.65** | **1.30** | **0.218** |
| | 4 | 96.24 | 1.10 | 0.166 |
| | 3 | 94.63 | 0.443 | 0.123 |
| | 2 | 92.15 | 0.385 | 0.067 |

Table 1: Performance comparison of different genotypes on the CIFAR-10 dataset. Our genotype evaluated at 10 and 5 layers is highlighted for comparison with the DARTSV2 genotype evaluated with 20 layers.

Consequently, a potential avenue for future work would be to explore the applications of our genotype and the Pseudo-Inverted Bottleneck block in both low-level and high-level vision processing tasks. It is worth noting that our aim in this paper was not to compete with the SOTA methods related to DARTS (which we deem to be the limitation of our work here), but to shed light on the granularity of the search space, which is commonly shared across many DARTS variants in the literature. We hope our work initiates new ideas to investigate optimum search space designs in the DARTS framework to build more robust and generalized models for representational learning problems.
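GradCAM maps such as those compared in Fig. 5 can be reproduced with a pair of hooks; the sketch below is a generic illustration in which the backbone, target layer, and input are placeholders rather than the searched PIBConv network.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Gradient-weighted class activation map for one image of shape (1, C, H, W)."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

    logits = model(image)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    weights = grads["g"].mean(dim=(2, 3), keepdim=True)      # global-average-pooled gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Example with a torchvision backbone as a stand-in for the searched network
from torchvision.models import resnet18
net = resnet18(weights=None).eval()
heatmap = grad_cam(net, torch.rand(1, 3, 224, 224), net.layer4[-1])
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])
```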
2309.10581
Gateway Station Geographical Planning for Emerging Non-Geostationary Satellites Constellations
Among the recent advances and innovations in satellite communications, Non-Geostationary Orbit (NGSO) satellite constellations are gaining popularity as a viable option for providing widespread broadband internet access and backhauling services. However, a more complex ground segment with multiple ground stations is necessary due to these satellites' high speeds and low altitudes. The complete dimensioning of the ground segment, including gateway optimal placement and the number of ground access points, remains a relevant open challenge. In this article, we provide an overview of the key factors that shall be considered for NGSO gateway station geographical planning. Subsequently, we propose a ground segment dimensioning approach that combines several criteria, such as rain attenuation, elevation angle, visibility, geographical constraints, and user traffic demands. The operational concept is first discussed, followed by a methodology that combines all these constraints into a single map-grid to select the best position for each gateway. Furthermore, a case study is presented, which demonstrates the performance of the proposed methodology, for one example constellation. Finally, we highlight relevant open challenges and key research directions in this area.
Victor Monzon Baeza, Flor Ortiz, Eva Lagunas, Tedros Salih Abdu, Symeon Chatzinotas
2023-09-19T12:43:54Z
http://arxiv.org/abs/2309.10581v2
# Gateway Station Geographical Planning for Emerging Non-Geostationary Satellites Constellations ###### Abstract Among the recent advances and innovations in satellite communications, Non-Geostationary Orbit (NGSO) satellite constellations are gaining popularity as a viable option for providing widespread broadband internet access and backhauling services. However, a more complex ground segment with multiple ground stations is necessary due to these satellites' high speeds and low altitudes. The complete dimensioning of the ground segment, including gateway optimal placement and the number of ground access points, remains a relevant open challenge. In this article, we provide an overview of the key factors that shall be considered for NGSO gateway station geographical planning. Subsequently, we propose a ground segment dimensioning approach that combines several criteria, such as rain attenuation, elevation angle, visibility, geographical constraints, and user traffic demands. The operational concept is first discussed, followed by a methodology that combines all these constraints into a single map-grid to select the best position for each gateway. Furthermore, a case study is presented, which demonstrates the performance of the proposed methodology, for one example constellation. Finally, we highlight relevant open challenges and key research directions in this area. Ground Segment, Gateway Dimensioning, NGSO, Weather Model ## I Introduction Satellite communications (SatComs) have regained attention due to recent advancements in technology and private investments, such as the Non-Geostationary Orbit (NGSO) satellite mega-constellations created for broadband communication services [1]. A new wave of lower-orbit SatCom systems is in the making, such as the IRIS2 constellation, embracing the benefits of lower radiation exposure and reduced latency to provide Internet access to under-served regions, which require a global, scalable, flexible, and resilient solution [2]. Achieving worldwide internet coverage using low-altitude and fast-moving satellites presents certain technical obstacles and challenges on the NGSO SatComs system's ground segment. Several ground stations need to be distributed throughout the Earth's surface to guarantee global connectivity with the components of the mega-constellations. These stations, called Gateways (GW), connect the satellite to the ground via feeder links. The GW, in the absense of Inter-Satellite Links (ISL), must always maintain visibility with some element of the constellation to offer the service continuously, which is challenging in the case of NGSO since the satellites are moving, displaying a dynamic situation. Therefore, geographical planning of the GW elements is quite challenging for NGSO. A conservative approach in which more GWs than necessary are distributed geographically can lead to oversizing, resulting in high operator costs and generally becoming economically impractical. This makes it impossible for small operators to enter the market. To reduce the necessary number of GW, ISLs are currently being introduced, which allows a communication link between each satellite and the neighboring one, thus reducing the need for all satellites to be visible by (at least) one GW. However, this is a hot-line of research even at an early stage [2]. Weather conditions are another limiting factor when it comes to locating a GW since feeder links usually operate on high spectral bands, which are very sensitive to weather impairments like rain fading. 
Other factors influence channel modeling compared to its geostationary (GEO) counterpart, such as the Doppler effect, which would complicate the control and visibility of Low Earth Orbit (LEO) satellites due to movement. Several channel models are presented in [3] that reflect the variability and lack of consensus in a channel model that allows us to design a unique or standard ground segment. A preliminary analysis of ground segment requirements to face the new space age is presented in [4, 5] for the South American region. However, the GW locations are assumed to be given in [4]. Considering the available works in the literature, we found the need to identify the needs, gaps, and issues in dimensioning the ground segment of emerging NGSO systems. We start this paper by providing an overview of the key factors that shall be considered for NGSO gateway station positioning. Next, we discuss a novel methodology to combine different criteria such as rain attenuation, elevation angle, visibility, geographical constraints, and user traffic demands. The operational concept can be adapted to any input NGSO constellation and criteria. Finally, we present an overview of future research lines and open technical challenges. ## II Ground Segment Design: Criteria One of the fundamental challenges of the NGSO-ground segment (GS) with global coverage is the number and availability of GW locations. Table I provides a list of aspects that make the GS design challenging. The need for wider bandwidths and the increasing demand for capacity are forcing the feeder bands, i.e., Q/V-W bands. Even feeder links in optical bands that are at low technology readiness levels. As discussed in Table I, the Q/V band GS design is not a trivial task. New architectures for the ground segment are required beyond the proposals for band Ka in [8] to support higher bandwidth and capacity. Instead, these frequencies are impaired by higher atmospheric attenuation, such as rain, which in turn causes outages in the services. For this reason, choosing the correct position of the GW is vital to avoid areas with high rain precipitation, among other mitigating factors for the feeder link power. To overcome service interruptions, the primary strategy used so far is site diversity, consisting of redundant GW with backup stations while switching the service in the event of an interruption. An overview of the site diversity concept and different strategies are described in [9] in two representative climatic groups: a temperate region and in a tropical climatic area. To decide the position of a GW with diversity using the rain attenuation criterion, rain prediction methods such as the one proposed in [10] have been considered. The inconvenience of site diversity strategies would increase the development cost of the NGSO ground segment, which, as mentioned, is aggravated for current and upcoming mega-constellations, regardless of the criteria used to create the redundancy GW network (rain, traffic, access, delay). A study shown in [11] examines the optimal selection of GW to reduce the overall installation cost while ensuring an acceptable level of outage probability based on the assumption that weather conditions at each site are independent. The interest is to reduce the number of GW that compose the GS network. The works [8], and [12] already consider minimizing the number of GW but exclusively under a single criterion, the first for atmospheric phenomena, while the second for traffic distribution. 
Therefore, they do not constitute an optimal solution able to provide guarantees on the total availability of the service. Also, the character of the time-varying topology of the NGSO satellites, which determines the real-time satellite visibility, has not been considered. On the other hand, the GW placement further affects the service coverage and the access performance of the network with respect to service demands [12, 13]. To avoid loss of service, one can also balance the traffic between the GWs, considering a service data demand distribution. The authors in [12] propose a GW placement method for NGSO networks that identifies the best GW locations that can balance traffic loads based on constraints such as link interference, satellite bandwidth, and the number of satellite antennas. However, atmospheric attenuation is not considered. ## III Multi-Criteria Approach This section provides a methodology to determine the best geographical positions to locate GW stations by considering multiple criteria. For this purpose, we have defined a system model based on layers, where each layer is called a _grid_ and represents a choice criterion, as shown in Figure 1. Latitude and longitude define the fundamental grid, in steps of 0.1 degrees for latitude and 0.1 degrees for longitude. The dimension of the coordinate grid determines the dimension of the rest of the grids, which has to be the same for coordinate-to-coordinate mapping. The initialization of coordinates depends on the internal operator restrictions in terms of regions for deployment. The selection of the step or subdivision strongly affects the time and computational complexity required to generate the different grid levels and the overall procedure. Therefore, a balance must be found between accuracy, speed, and complexity. Each coordinate pair that is selected as a candidate position for a GW will be one that simultaneously meets the decision thresholds marked in each independent grid. The red box in Fig. 1 represents the selected coordinates that simultaneously meet all the conditions imposed to place a GW. A few of these grids have been mathematically described in our previous work [14], which did not consider the worldwide NGSO coverage area. Weather modeling is statistically represented using the ITU-R (International Telecommunication Union Radiocommunication Sector) model for quantifying rain attenuation in SatCom. Rain attenuation is calculated as a function of satellite frequency, rain rate, and geographical location. A matrix grid is defined to represent the weather model, where each entry corresponds to a geographical position and its value represents the rain attenuation in dB [14]. A population density-based traffic model estimates the data demand on SatCom systems to generate the traffic demand grids. The model considers four key variables influencing data demand: the throughput per user, the population density, the penetration rate1, and the concurrency rate2. The product of these four variables gives the throughput density per square kilometer, representing the total amount of data being transmitted or received in a given area. Footnote 1: Refers to the proportion of the population using SatComs services and is usually measured in users per inhabitant. Footnote 2: Refers to the proportion of users simultaneously using SatComs services.
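As a minimal illustration of how two of these grids can be generated on the same latitude/longitude lattice, the sketch below builds a rain-attenuation grid from the specific-attenuation relation \(\gamma = kR^{\alpha}\) and a traffic-density grid from the product of the four variables above; the coefficients, rain-rate field, slant-path length, and traffic figures are placeholder assumptions, not values used in the paper.

```python
import numpy as np

# Common lat/lon lattice with 0.1-degree steps (coarse extent for the example)
lats = np.arange(-60.0, 60.0, 0.1)
lons = np.arange(-180.0, 180.0, 0.1)

# --- Weather grid: specific rain attenuation gamma = k * R^alpha [dB/km] ---
# k and alpha are frequency/polarization-dependent ITU-R coefficients;
# the numbers below and the 5 km effective slant path are placeholders.
k, alpha, path_km = 0.075, 1.10, 5.0
rain_rate = np.random.gamma(shape=2.0, scale=10.0, size=(lats.size, lons.size))  # mm/h (synthetic)
rain_att_db = k * rain_rate**alpha * path_km

# --- Traffic grid: throughput density = user_rate * density * penetration * concurrency ---
user_rate_mbps = 10.0              # assumed throughput per user
population_density = np.random.lognormal(3.0, 1.5, size=(lats.size, lons.size))  # users/km^2 (synthetic)
penetration, concurrency = 0.02, 0.2
traffic_mbps_km2 = user_rate_mbps * population_density * penetration * concurrency

print(rain_att_db.shape, traffic_mbps_km2.shape)   # both match the coordinate grid
```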
Concerning visibility, two sub-grids are calculated: one for the average number of satellites that can be seen from the GW location and one for the visibility over time of a satellite from the GW location. This means that the NGSO constellation is predefined and offered as input. The model also includes a grid to indicate the altitude concerning sea level and another to indicate for each coordinate whether or not it is allowed to place a GW for geo-political reasons. The advantage of this multi-criteria approach is that we can include new grids according to new conditions required by each operator independently for each design. This customisation also includes the grid combination, e.g. the base grid of coordinates can be combined to present it as a triplet in which the third parameter is the station's height. To carry out the design and decide where to place the GW stations, we follow the procedure shown in Fig. 2. First, the grids participating in the multi-criteria decision must be designed and defined. A pair of coordinates (latitude and longitude) is selected, and each grid is traversed, checking if the value for said pair meets the threshold established for the corresponding criterion. No information is exchanged between grids. However, multiple simultaneous grids can be considered if you wish to combine criteria. This depends on the definition of the grid. If one of the grids returns a negative response in the check, that pair is invalid and we must select another pair of coordinates. If all the conditions are met for a pair of coordinates, we will have a candidate position to place a GW. Once all coordinates have been analyzed, we will have a list of candidate positions to place the GWs. Both the list of positions obtained and the definition of the grids may be subject to optimization at a later stage. ## IV Numerical Evaluation To evaluate the proposed multi-criteria approach, we use the criteria established in Fig. 1 (rainfall attenuation, traffic demand, visibility, geo-political constraints) for an NGSO constellation at an altitude of 800 km. The frequencies used are 19.7 GHz, 30 GHz, 40.5 GHz, and 47.2 GHz, representing both traditional and emerging spectral bands for the feeder link. In all cases, the minimum elevation angle is fixed at 10 degrees. The thresholds for the algorithm are selected to exemplify the methodology and process described here. The following thresholds per grid are defined: * Rain threshold (weather grid): all the rain attenuation values available in the grid have been analyzed, the maximum value selected, and based on this number, we establish 25% attenuation as a threshold with respect to the maximum allowed attenuation. * Geo-political threshold: represents a binary threshold between a geographic position with conflict or not. To exemplify the proposed model in this work, conflicting positions have been distributed randomly on the map. Fig. 1: Grid-model for multi-criteria approach. * Visibility threshold: the threshold is established considering at least 3 visible satellites in each position. * Traffic threshold: the threshold is established between 5 types of traffic densities, those with a high traffic density of 33 Mbps/km\({}^{2}\). 
* Terrain threshold: in this example, we have considered the threshold that determines that there is land, excluding all aquatic areas (seas, oceans, lakes, etc) We have calculated how many positions with respect to the total pairs of coordinates that make up the grid exceed the thresholds established for each criterion. The percentage of positions selected for each criterion is represented in Fig. 3. We follow two cases to select the criteria: on the one hand, we are going to carry out a comparison using two by two grids; that is, we compare by pairs, where only two criteria are analyzed simultaneously. On the other hand, we analyze the general case in which all the selected criteria are considered simultaneously. We can note some criteria have a greater influence on others. The main influence is due to the chosen threshold, which is not optimized. Due to the length of the step between coordinates, we do not obtain exact and isolated positions in each region, but an area with multiple adjacent positions has similar characteristics, and therefore multiple GW in the same region are possible. Among all the GW of a region, we select the one that is in the position that coincides with the geographic mean of that area. Finally, in Fig. 4, the areas resulting from applying the proposed approach and the position for each GW selected in each of them are represented. ## V Discussion and Future Trends ### _Additional criteria_ As mentioned in Section III, additional criteria can be added as new grids to the proposed model, making it a very flexible tool. Below we provide a list of potential criteria that could be considered to enhance the current approach: #### V-A1 **Elevation Angle (EA)** EA is an important variable to consider when placing GW, as it can significantly affect Fig. 3: Percentage of candidate positions to place GW. Fig. 2: Methodology and Procedure Flow. signal propagation and link budget3. This parameter refers to the angle at which GW points to the satellite, which in turn is related to the height of the GW location above sea level. In this work, we have considered a fixed elevation for all coordinates. The higher the EA of the GW, the higher the line-of-sight and link distance available for the signal, which can improve constellation performance. However, EA can also increase signal attenuation due to rain and other weather phenomena, which can negatively affect the quality of service. Therefore, choosing the appropriate GW elevation is a trade-off between performance, service quality, and attenuation due to rain. The methodology for placement should carefully consider the elevation and use advanced modeling and simulation techniques to determine the optimal height at each location. In addition, the methodology should be flexible enough to adapt to different meteorological and geographical conditions in different regions of the world. Footnote 3: Procedure for determining the received power, which ensures that the information is received intelligibly with an adequate signal-to-noise ratio. #### Iii-B2 **Spectrum constraints** Countries have different spectrum allocations for SatCom, which can limit the number of satellites that can be deployed and the frequencies they can use. Environmental and safety regulations must be considered, as well as restrictions on the location of the ground stations used to control and communicate with the satellites. 
#### Iii-B3 **Regulatory constraints** Deploying NGSO constellations requires a thorough understanding of the policies and regulations in each country and region. It must be done in consultation with the relevant regulators and authorities to ensure compliance with applicable regulations and policies. In addition, the geo-political policy may also impact the economics and cost of constellation deployment, as there may be taxes or tariffs associated with using space and frequencies in certain countries or regions. #### Iii-B4 **GW-Core distance** Refers to the location of the GW in relation to the core of the terrestrial network. The network core is the central part of the communications network that processes and routes data through the network. The distance between the GW and the network core can affect system performance, especially end-to-end latency and network throughput. To ensure optimal system performance, the methodology must consider the location of the GW relative to the network core. If the ground station or GW is too far from the network core, there may be excessive latency in data transmission, affecting service quality. On the other hand, if the GW is too close to the network core, depending on the network hierarchy4, there may be congestion in data traffic that negatively affects performance. Footnote 4: The network architecture has several layers organized hierarchically. This applies in space to multi-orbital constellations of different types. #### Iii-B5 **Existing infrastructures integration** A need exists to ensure that the mega-constellation integrates effectively and efficiently with existing infrastructure, such as other ground stations (teleports, hubs, VSAT terminals) and backhaul connections, which have to be prioritized if some GW is re-used in the planning. Proper integration is crucial to ensure continuity of service and signal quality. Interference between mega-constellation and terrestrial infrastructure, which could lead to the degradation of service quality and negatively affect the user experience, must be avoided. In addition, the availability of infrastructure resources for the mega-constellation must be ensured, which may require optimizing the use of resources and implementing new connectivity solutions. Fig. 4: World GW Positioning for NGSO system at 800 Km. ### _GW equipment_ The GW antenna is the main hardware element that can influence the number of GW needed and their positioning. Different types of antennas are used for gateways: symmetric or prime focus antennas, offset antennas, array antennas, and lens antennas. Among the symmetric and offset antennas, there are some variations using sub-reflectors to obtain antennas with our blockage of the feeder, for instance, Cassegrain or Gregorian, and even multi-frequency operation without defocusing the beams using dichroic sub-reflectors. Suppose the requirements of the antenna involve linking with multiple satellites at the same time with the same antenna infrastructure. In that case, multiple feeders aligned according to the constellation orbit can be cost-efficient. On the other hand, antenna arrays used by GW for communication with the satellite display some drawbacks. For instance, grating lobes in an array of horn antennas can be a problem, with high production costs and high losses in the case of planar antennas. In addition, the authors in [15] discuss antenna array integration concepts for combined receiver and transmitter terminals that can be used for ground-based Ka-band SATCOM. 
For Q/V/W bands and optical GW have to be extended. Lens antennas offer high bandwidth and, depending on the size, can generate highly focused beams. However, the biggest drawback of these antennas is the losses caused by the dielectric at high frequencies and its size. The design of the gateway antenna depends on the gain needed and the available transmission power, which can be calculated in the link budget calculation based on the satellite altitude, component losses, miss-match losses, antenna pointing loss, and propagation attenuation, just to mention the most relevant factors. For example, the reflector-based antenna can be used. ### _Accurate weather modeling_ At the Q/V- bands, atmospheric attenuation can cause decades of dB magnitude losses, which can be even higher depending on the EA. Regarding the rain attenuation model, [8] investigated the cumulative statistics of total attenuation induced in LEO mega-constellations operating at Q/V bands. The work in [16], as many others in this area, considers the ITU recommendation P.618-13 for the calculation of the exceeding probability of total attenuation for a given EA. These models have to be extended and improved to include the new spectral bands, such as Q/V and W. ### _Multi-criteria optimization_ A multi-criteria optimization approach is required to optimize the placement of GW stations for NGSO satellites. This involves considering various technical and operational factors and constraints, such as traffic demand, rain attenuation, visibility time, geographic location, regulatory requirements, transmission power requirements, gateway processing and storage capacity, and integration with existing infrastructure. The weighting of all factors leads to a complex multi-objective optimization. Advanced mathematical models and optimization techniques, such as mathematical programming, genetic algorithms, and artificial neural networks, are the key to addressing these factors' complex and often conflicting requirements. Candidate methods for solving these optimization problems are classified into analytical optimization, metaheuristic optimization, and machine learning. Analytical optimization provides an optimal or near-optimal solution but may be time-consuming due to the large number of optimization parameters, making it unsuitable for real-time systems. Meta-heuristic optimization, on the other hand, may not guarantee optimality but is well-suited for nonlinear, multi-objective, and hard problems. Machine learning algorithms interact with the environment or data to predict or decide on possible solutions, requiring less computational time than metaheuristic methods. However, they may not guarantee optimal solutions. A trade-off often exists between performance and computational complexity. An example is when formulating the optimization results in a non-convex problem depending on the parameters chosen (capacity, traffic demand, latency, among others). This is often the case due to binary assignment variables, nonlinear expressions, and conflicting optimization variables. Therefore, optimizing the placement of GW stations for NGSO satellites is challenging and requires advanced modeling techniques, interdisciplinary expertise, and optimization methods that balance performance and computational complexity. In addition, the thresholds for each criteria can be optimized to consider different weightings depending on the design needs per operator. 
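The simplest instance of such a weighting, shown here only to fix ideas, is a weighted sum of normalised criterion grids that ranks candidate cells instead of applying hard thresholds; the weights and grids below are placeholders.

```python
# Toy weighted-sum aggregation of criterion grids (a sketch, not a full
# multi-objective solver): normalise each grid to [0, 1], weight it, and rank cells.
import numpy as np

def weighted_score(criteria, weights):
    score = np.zeros(next(iter(criteria.values())).shape, dtype=float)
    for name, grid in criteria.items():
        span = grid.max() - grid.min()
        score += weights[name] * (grid - grid.min()) / (span if span else 1.0)
    return score

# np.unravel_index(score.argmax(), score.shape) gives the best-ranked candidate cell.
```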
### _Inter-Satellite Links and space-routing_ The joint optimization or definition of the GS planning together with the Inter-Satellite Links (ISL) network can further reduce the number of elements needed both on the ground (number of GWs) and the number of hops between satellites in an ISL network. Such a design can potentially reduce deployment costs (estimated 70% [1]), improve the security of services, reduce latency in communications, and facilitate the integration of future 6G. In addition, low-cost GS is a big opportunity for small operators to enter the new space age with NGSO constellations. Associated with the joint optimization GS-ISL, it is necessary to develop routing5 algorithms for multi-layer NGSO constellations: to address routing strategies, including multi-layer and multi-orbit NGSO/GEO architectures for the proliferation of large NGSO constellations equivalent to large graphs. Footnote 5: Paths of information between the different spatial elements that make up the constellation. From the perspective of the regulators, they demand to know the physical path that the information will follow between two GWs when it passes through ISL to ensure control, security, and management of the information. Extending the GW positioning network for worldwide coverage using ISL requires that a GW not necessarily have to be in the owner's country. Therefore, the regulation of this conjunction is a major concern. ### _Trade-off of different criteria_ The trade-off among different criteria to select the number of GW is crucial and evident to obtain an optimal ground segment for NGSO. Add to this the antenna size, the ISL network, the spectral restrictions, or new additional criteria, all interconnected as represented in Fig. 5 to offer the best positions for the GWs. As an example, there is a clear trade-off between the number of GW (and total feeder link capacity) and the sizing antenna (which determines the single feeder link capacity). In general, assuming a full coverage of the constellation, smaller (antenna) GWs can be replaced by bigger (antenna) GW. A huge dish antenna GW can multiply the capacity of a small dish antenna GW. Therefore, multiple co-located GW can be replaced by a single 'big' one. Feeder links of future systems will most probably operate in Ka and Q/V. Therefore, those are the bands we propose as a priority. Within these frequency bands, Ka-band is at the moment more mature in terms of hardware (HW) components and requires less need for GW diversity. In addition, the deployment cost can limit the maximum number of GW. The EA is associated with the need to include or not diversity or overcome the attenuation imposed by temporal models, rain, or even cloud if we have optical GW. On the other hand, assuming a high capacity and fully-connected ISL network may allow us to reduce in a significant manner the number of GWs and help in establishing them in secure locations. Existing works mainly formulate the GW positioning problem as an integer linear programming, with the objectives typically being the number of GW minimization or maximization of network capacity. Adding the implementation costs to the problem formulation as well as the specific hardware equipment, may render too many degrees of freedom into the problem, which are most of the times intertwined. 
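To make the path-accountability requirement concrete, the toy sketch below reports the hop-by-hop route between two gateways over an assumed, static ISL connectivity graph; operational constellations would of course involve time-varying topologies and richer routing metrics.

```python
# Minimal breadth-first route finder over a hypothetical GW/satellite ISL graph;
# it returns the fewest-hop path GW -> satellites -> GW, i.e. the physical path
# that a regulator would ask an operator to document.
from collections import deque

def isl_route(graph, src_gw, dst_gw):
    prev, queue = {src_gw: None}, deque([src_gw])
    while queue:
        node = queue.popleft()
        if node == dst_gw:                      # rebuild the path by backtracking
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph.get(node, ()):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None                                 # the gateways are not ISL-connected

# Hypothetical topology: two gateways bridged by a three-satellite ISL chain.
isl = {"GW-A": ["S1"], "S1": ["GW-A", "S2"], "S2": ["S1", "S3"],
       "S3": ["S2", "GW-B"], "GW-B": ["S3"]}
print(isl_route(isl, "GW-A", "GW-B"))   # ['GW-A', 'S1', 'S2', 'S3', 'GW-B']
```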
## VI Conclusions In this work, we presented an overview of the key factors to consider for NGSO gateway station positioning, proposed a ground segment dimensioning approach that combines several criteria, and discussed a case study demonstrating the performance of the proposed methodology for one sample constellation. The approach is presented from an operational perspective. The paper concludes with a discussion of relevant open research challenges and potential research directions. ## Acknowledgements This work has been supported by the project TRANTOR, which has received funding from the European Union's Horizon Europe research and innovation program under grant agreement No. 101081983.
2309.12396
A Phenomenon Resembling Early Superhumps in a New SU UMa-Type Dwarf Nova with a 2-Hour Orbital Period
We investigate K2BS5, an optical transient that we identified in Campaign 13 of the Kepler/K2 archives by the "K2 Background Survey", and classify it as a new SU UMa-type dwarf nova. Using the light curve generated from Kepler's long-cadence observation mode, we analyze the dwarf nova during quiescence and superoutburst. Following 20 days of quiescence at the start of the observation, the system entered a superoutburst lasting 12 days, after which it experienced at least one rebrightening. K2BS5 clearly meets the criteria for an SU UMa star, but at the peak of the superoutburst, it also shows double-wave oscillations consistent with the spectroscopic orbital period, a phenomenon that closely resembles early superhumps in WZ Sge stars. While we do not classify K2BS5 as a WZ Sge system, we discuss how this phenomenon could complicate efforts to use the suspected detection of early superhumps to distinguish SU UMa-type dwarf novae from the recently recognized class of long-orbital-period WZ Sge systems.
Rebecca Boyle, Colin Littlefield, Peter Garnavich, Ryan Ridden-Harper, Paula Szkody, Patricia Boyd, Krista Lynne Smith
2023-09-21T18:00:06Z
http://arxiv.org/abs/2309.12396v1
A Phenomenon Resembling Early Superhumps in a New SU UMa-Type Dwarf Nova with a 2-Hour Orbital Period ###### Abstract We investigate K2BS5, an optical transient that we identified in Campaign 13 of the _Kepler_/_K2_ archives by the _K2_ Background Survey, and classify it as a new SU UMa-type dwarf nova. Using the light curve generated from _Kepler_'s long-cadence observation mode, we analyze the dwarf nova during quiescence and superoutburst. Following 20 days of quiescence at the start of the observation, the system entered a superoutburst lasting 12 days, after which it experienced at least one rebrightening. K2BS5 clearly meets the criteria for an SU UMa star, but at the peak of the superoutburst, it also shows double-wave oscillations consistent with the spectroscopic orbital period, a phenomenon that closely resembles early superhumps in WZ Sge stars. While we do not classify K2BS5 as a WZ Sge system, we discuss how this phenomenon could complicate efforts to use the suspected detection of early superhumps to distinguish SU UMa-type dwarf novae from the recently recognized class of long-orbital-period WZ Sge systems. cataclysmic variable stars; dwarf novae; stellar accretion disks; SU Ursae Majoris stars; WZ Sagittae stars + Footnote †: journal: Accepted for Publication in the Astronomical Journal ## 1 Introduction Cataclysmic variables (CVs) are a classification of binary star systems consisting of a white dwarf (WD) primary paired most commonly with a red dwarf (RD) secondary. Mass transfer between the two stars occurs when the secondary overflows its Roche lobe, resulting in the formation of an accretion disk around the primary if the WD is not strongly magnetized (for reviews, see Warner, 1995; Hellier, 2001). Often the accretion disk is thermally unstable and subject to recurring outbursts on timescales ranging from days to many years, depending on the mass-transfer rate (Osaki, 1996, and references therein). These systems are known as dwarf novae (DN; for a review, see Osaki, 1996). SU UMa systems are a subcategory of DN with typical orbital periods \(\lesssim\) 2 h that are distinguished by the occasional "superoutbursts" they experience, which are outbursts of longer duration and greater amplitude in comparison to ordinary outbursts. During superoutbursts, SU UMa systems show the presence of superhumps, which are periodic oscillations slightly below the orbital frequency. Superhumps result from tidal instability in the accretion disk, excited when the outer disk expands to the 3:1 resonance with the orbital period of the binary (Whitehurst, 1988; Osaki, 1989). As seen in Kato et al. (2009) and Kato (2022), the superoutburst displays three distinct phases of period evolution, designated as stages A, B, and C. Stage A appears first in the evolutionary progression, characterized by the longest superhump period and no discernible period derivative. Stage B is the middle segment with a positive period derivative, followed by stage C exhibiting a shorter and more stable period. WZ Sge-type systems (reviewed by Kato, 2015) are a subcategory of the SU UMa-type that generally show only superoutbursts whose recurrence times are significantly longer than those of SU UMa stars. Like the SU UMa systems, they exhibit long-duration superoutbursts, but they can often be distinguished by the presence of subsequent rebrightening events and low-amplitude oscillations in the earliest stages of the superoutburst. The rebrightenings are also referred to as echo outbursts. 
The double-wave feature is referred to in the literature as early superhumps (ESH), which have approximately the same period as the binary orbital period. ESH are thought to be the result of expansion of the outer accretion disk to a 2:1 resonant frequency with the orbital period, which is possible only for very small mass ratios. The subsequent transition from ESH into Stage A takes place as the outer disk at the 3:1 resonance radius becomes eccentric and undergoes apsidal precession (Kato and Osaki, 2013). Observing these short-lived phenomena during superoutbursts has proven to be one of the many accomplishments of the _Kepler_ spacecraft during its original and _K2_ missions, both of which provided continuous light curves of predetermined targets for an extended period of time (often months or longer). However, while previous analyses of _Kepler_ data have focused on known CVs, the _Kepler_ archives contain numerous background pixels that were not studied directly at the time of observation, providing an opportunity to search for previously unknown, interesting objects. Within those background pixels, the _K2_ Background Survey (K2BS; Ridden-Harper et al., 2020) has uncovered transients by systematically identifying potential transients that are then reviewed manually. One of the early successes of the K2BS project was the discovery of the only superoutburst of a WZ Sge system observed by _Kepler_/_K2_ mission (Ridden-Harper et al., 2019). Here we present a photometric and spectral analysis of K2BS5, a new SU UMa-type DN. ## 2 Data Unlike the serendipitous background sources that the K2BS project normally seeks to unearth, K2BS5 was targeted in _K2_ Campaign 13 as part of a program (G013086, P.I. Patricia Boyd) to observe candidate active galactic nuclei identified from a search of archival X-ray sources. In addition to the Chandra detection that led to its inclusion in that _K2_ program, it is listed in the 2XSS Catalog of Swift X-ray Telescope Point Sources and Data Release 8 of the XMM-Newton Serendipitous Source Catalog; its identifiers in these three catalogs are CXOJ043436.4+180243, 2SXPS J043436.5+180243, and 3XMM J043436.6+180245, respectively. We refer to the object here as K2BS5 because it was the fifth transient discovered by the K2BS project, but it has also been detected by the All Sky Automated Survey for SuperNovae (ASAS-SN; Shappee et al., 2014; Kochanek et al., 2017) under the designation ASASSN-19xs. It has Figure 1: The ASAS-SN, Gaia, CRTS, and ATLAS extended lightcurves from February 2005 to April 2022. The more sparse sampling shows evidence of few notable events apart from one superoutburst occurring in September 2019. There are no normal outbursts beyond superoutbursts, a hallmark of WZ Sge systems. The superoutburst observed by _Kepler_ whose observation time is highlighted in red, is not visible because the system was near solar conjunction at the time. two additional identifiers (Gaia19emm and AT 2019sgc) as a result of the detection by Gaia of an outburst in 2019 (Hodgkin et al., 2019). K2BS5 is an under-studied system. The only significant attention that it has received was a spectroscopic measurement of its orbital period (\(123.55\pm 0.09\) min) Thorstensen (2020), who used the identifier Gaia19emm. Its Gaia EDR3 (Gaia Collaboration et al., 2020) position is \(\alpha_{2000}=04\)h34m36.6053s, \(\delta_{2000}=+18^{\circ}02^{\prime}45.025"\). Bailer-Jones et al. (2021) determined the system's distance to be 472 \(\pm\) 31 pc based on the Gaia EDR3 parallax. 
### K2 light curve During _K2_, _Kepler_ suffered from a 6 hr periodic drift, causing targets to shift across the detector and degrading the photometric precision. For well-isolated targets, one way of mitigating this problem is to select a sufficiently large photometric aperture that all of the target's flux is captured, regardless of the drift motion. Using the interactive inspection tool in lightkurve (Lightkurve Collaboration et al., 2018), we selected a custom extraction aperture from the target pixel file data to encompass the system's full range of motion over the course of the observation. Additionally, we used lightkurve to visually inspect the images of the source in order to confirm that its brightness variations were attributable to variations in the target (and not from spacecraft systematics or the passage of an asteroid through the photometric aperture). The data were obtained during the _Kepler_/_K2_ Campaign 13, running from 2017 March 8 until 2017 May 27, a duration of \(\approx\) 2.5 months, at a 30 minute cadence. ### LBT Spectrum We obtained spectra of K2BS5 with the Multi-Object Dual Spectrograph (MODS; Pogge et al., 2012) on the Large Binocular Telescope (LBT). Nine individual spectra were obtained on 2020 February 29 (UT) under cloudy conditions. The final three spectra had the strongest signal, and these were averaged to create the final spectrum with a total exposure time of 900s. The Figure 2: The full _Kepler_/_K2_ light curve shown in magnitudes relative to the quiescent magnitude (top panel) and time-resolved power spectrum of K2BS5 (bottom panel). The light curve shows a long-period of quiescence before the steep rise at the beginning of the superoutburst. A decrease in magnitude near the end of the superoutburst precedes an **rebrightening event** before the system resumes its original state of quiescence. The time-resolved power spectrum shows that periodic variability is present only during the superoutburst. dual grating mode for MODS was combined with a 0.8 arcsec slit to provide a spectral resolution of \(R\) = 1860 at H\({}_{\beta}\). Seeing during the exposures varied between 1.1 and 1.5 arcsec. The spectra were extracted and wavelength calibrated using argon and neon emission line arcs. The spectra were flux calibrated using the spectrophotometric standard star Feige 34 obtained on a clear night earlier in the run. ### Krizmanich Photometry We conducted additional ground-based observations of K2BS5 on four nights during the first week of March 2021 using the University of Notre Dame's 0.8m Sarah L. Krizmanich Telescope (SLKT). The time was corrected to Barycentric Julian Date (Eastman et al., 2010) with astropy(Astropy Collaboration et al., 2013). During each of the 2 hour long observations, we obtained unfiltered images using 30 second exposures. The typical signal to noise ratio per exposure was approximately 14.0. ## 3 Analysis ### Survey photometry Figure 1 plots survey photometry of K2BS5 obtained by the Catalina Real-Time Transient Survey (CRTS; Drake et al., 2009), the All-Sky Automated Survey for Supernovae (ASAS-SN; Shappee et al., 2014; Kochanek et al., 2017), Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al., 2018), and Gaia. Perhaps the most striking property of these data is the absence of outbursts; only a single undisputable outburst is present (in 2019), although there is a possible second outburst near the beginning of 2006. 
Owing to K2BS5's proximity to the ecliptic, there are significant seasonal gaps that could conceal additional superoutbursts. Indeed, the _K2_ light curve, the baseline of which is indicated in Fig. 1, recorded a superoutburst in 2017 during one of those gaps. The 2019 outburst lasted for \(\sim 2\) weeks and has the shape of a superoutburst. The ATLAS data reveal that when K2BS5 emerged from solar conjunction in 2019, it was \(\sim 0.3\) mag fainter than usual and remained so for the next two months, after which it entered a superoutburst. ### K2 Light Curve and Power Spectrum In Figure 2, we show the _K2_ light curve of K2BS5, the most notable features of which are the superoutburst and a subsequent rebrightening event. The superoutburst occurs after at least three weeks of quiescence and begins with a steep rise of over 3 mag near BKJD = 3022, where BKJD is the Barycentric Kepler Julian Date, defined as BJD-2454833). After reaching its peak brightness near BKJD = 3024, the light curve begins to experience a slow fade, and large-amplitude superhumps appear. Near BKJD = 3033, the rate of fading increases dramatically. Notably, in the weeks following the superoutburst, K2BS5 never fully faded to its pre-superoutburst quiescent level, remaining \(\sim 0.3\) mag brighter after the superoutburst than it was before. As we noted in Sec. 3.1, the ATLAS observations of the suspected superoutburst in 2019 also show a \(\sim 0.3\) mag discontinuity in the quiescent brightness level before and after the superoutburst. The first rebrightening event was observed at BKJD=3040.3 for \(\sim 2\) d and showed a prominent superhump signal, while the second rebrightening occurred 20 d later. Unfortunately, the _K2_ campaign terminated while the second rebrightening was underway. Unlike the initial rebrightening, this second event is not accompanied by superhumps. Although the nature of the second event is somewhat ambiguous, both events appear to be causally related to the superoutburst. We base this inference on (1) the conspicuous absence of other outbursts of comparable amplitude in the long-term light curve (Fig. 1) and (2) the fact that K2BS5 was still slightly brighter than its pre-quiescent brightness. To better understand the changes in the light curve, we used astropy(Astropy Collaboration et al., 2013) to create a two-dimensional Lomb-Scargle power spectrum (Lomb, 1976; Scargle, 1982) with a sliding 0.5 d window. The lower panel of Fig. 2 presents the 2D power spectrum for the full dataset, while Fig. 4 shows an enlarged Figure 3: Low-amplitude oscillations appear during the post-superoutburst phase just before the **rebrightening**. These \(\sim\)0.4 mag oscillations are reminiscent of the “mini-rebrightenings” recorded in the Kepler data of V585 Lyr (Kato and Osaki, 2013) and the TESS data of V844 Her (Kato, 2022). version of the 2D power spectrum during the superoutburst. The steep rise of the superoutburst at BKJD = 3024 coincides with the appearance of steady oscillations with a period of \(\sim 2\) h, nearly identical to the orbital period. This behavior is consistent with the expected behavior of ESH in WZ Sge systems--but, for reasons we set forth in Sec. 4, we do not classify them as such. After \(\sim\)1 day of these oscillations, the dominant frequency in the power spectrum quickly drops to a much lower frequency of 10.9 cycles/day, which we identify as the onset of Stage A superhumps. This signal increases in frequency over the two days before leveling off around 11.2 cycles/day. 
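For readers wishing to reproduce this type of analysis, a minimal sketch of the aperture extraction (Sec. 2.1) and of the sliding-window Lomb-Scargle periodogram is given below; the coordinate string, mask threshold, window step, and frequency grid are illustrative choices of ours and not necessarily the exact parameters used here.

```python
# Sketch of the K2 extraction and time-resolved periodogram (assumed parameters).
import numpy as np
import lightkurve as lk
from astropy.timeseries import LombScargle

# 1) Download the Campaign-13 target pixel file and extract a light curve with a
#    generous aperture covering the 6-hr drift motion (tpf.interact() opens the
#    interactive inspection tool used to vet the pixels).
tpf = lk.search_targetpixelfile("04 34 36.61 +18 02 45.0",
                                mission="K2", campaign=13).download()
lc = tpf.to_lightcurve(aperture_mask=tpf.create_threshold_mask(threshold=1)).remove_nans()

# 2) Lomb-Scargle power in a 0.5-d window slid along the light curve.
time, flux = lc.time.value, lc.flux.value
freqs = np.linspace(5, 30, 500)                        # cycles per day
centres, power = [], []
for t0 in np.arange(time.min(), time.max() - 0.5, 0.1):
    sel = (time >= t0) & (time < t0 + 0.5)
    if sel.sum() > 5:                                  # enough 30-min cadences
        centres.append(t0 + 0.25)
        power.append(LombScargle(time[sel], flux[sel]).power(freqs))
dynamic_spectrum = np.vstack(power)                    # rows: windows, cols: freqs
```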
The superhump power fades significantly near BKJD 3034 and reemerges 1.5 d before the rebrightening. The oscillations that redevelop just before the rebrightening are seen most strongly in the second harmonic of the superhump frequency. Fig. 5 plots the detrended light curve near the peak of the superoutburst, and it reveals that there are \(\sim\)15 cycles of the double-wave oscillations before the appearance of Stage A superhumps. This stage transition is rapid, occurring in just several superhump cycles. ### Extinction, Absolute Magnitude, and Superoutburst Amplitude To estimate the absolute magnitude of K2BS5, we use DECam observations taken on January 30, 2020 (Honscheid & DePoy, 2008). K2BS5 was imaged in 3 filters: g, r, and i. In IRAF, aperture photometry was performed on K2BS5 and multiple surrounding stars. We find the apparent magnitude of K2BS5 in quiescence to be \(g=18.01\pm 0.04\), \(r=17.54\pm 0.03\), and \(i=17.19\pm 0.03\). Using the Gaia EDR3 parallax of K2BS5 in Bailer-Jones et al. (2021) and an reddening estimate of \(E(g-r)=0.29\pm 0.02\) based on a 3D dust map modeled by Green et al. (2019), we find the absolute magnitude of K2BS5 to be \(M_{g}=9.36\pm 0.15\), \(M_{r}=8.88\pm 0.15\), and \(M_{i}=8.54\pm 0.15\), calibrated in Figure 4: The shared-axis, detrended lightcurve (top) and time-resolved power spectrum (bottom) of K2BS5 starting just before the maximum of the superoutburst. Frequency units are cycles d\({}^{-1}\). Double-wave oscillations near the orbital period begin slightly before BKJD 3024 and transition rapidly into stage A superhumps, marked by the frequency drop on the power spectrum mid day 3024. Stage A is also relatively short lived, rising rapidly to higher frequency stage B. While the superhumps are not present during the **rebrightening event**, they do reappear just before it, between BKJD 3037 and BKJD 3039. Both the fundamental and second harmonic are visible on the power spectrum. Note: different intensity cuts were used in the second and third panels of the figure to improve signal visibility of the second harmonic. the Gaia-SDSS-PS1 Proper Motion Catalog (Tian et al., 2017). After conversion to Gaia magnitudes 1, we find a luminosity of \(G=8.92\pm 0.13\), slightly brighter than the average CV with an orbital period of 2 hours (Abrahams et al., 2020). Footnote 1: [https://gea.esac.esa.int](https://gea.esac.esa.int) As seen in Figure 2, the superoutburst begins with a steep rise in flux, corresponding to a magnitude increase of \(\sim\)3.3 mag as estimated directly from the _K2_ data. However, given the large aperture needed to mitigate the _K2_ drift and the blending of K2BS5 with nearby stars, it is likely that the _K2_ photometry is overestimating the quiescent brightness of K2BS5. To refine our estimate of the outburst amplitude, we first converted the flux measurement at the peak of the superoutburst to an r-band magnitude. Because the effects of contamination are minimal when K2BS5 is brightest, we can assume that this inferred maximum r-magnitude is accurate. However, during quiescence, the DECam images offer a more accurate measurement of K2BS5's brightness. Using these two measurements, we find the true amplitude of the outburst to be 3.8\(\pm\)0.05 mag. ### LBT Spectrum & Ground-Based Photometry Obtained approximately 150 days after the peak of the 2019 superoutburst, the LBT spectrum of K2BS5 (Figure 6) shows broad, double-peaked hydrogen and helium emission features typical of a quiescent dwarf nova. 
The continuum is relatively flat except for a significant rise at the Balmer jump. After correcting for dust extinction as described in Sec. 3.3, the continuum rises slightly toward the blue, consistent with a disk dominated CV in quiescence. The full-width at half maximum (FWHM) of the H\({}_{\beta}\) emission line is 2020\(\pm\)20 km s\({}^{-1}\). A weak He II emission feature is seen that was not present in the Thorstensen (2020) spectrum. The presence of He II \(\lambda\)4686A in a CV spectrum can be attributed to high-temperature plasma or photoionization, conditions generally not present in quiescent DN systems. Strong He II emission can be a sign of accretion onto a magnetic white dwarf, although here, the He II line is not especially strong. The presence of a fast-spinning WD can be tested with high cadence optical photometry, which could detect the rotational period of the WD. Because _Kepler_'s 30 min cadence is too slow to search for plausible spin periods, we analyzed the power spectrum of our comparatively fast-cadence SLKT observations for evidence of a short-period periodicity. We found no evidence of any such signal up to a frequency of 1300 cycles d\({}^{-1}\). We conclude that there is no persuasive evidence that the WD is magnetized. ## 4 Discussion ### Mass Ratio Theory and observation show that the stellar mass ratio in SU UMa and WZ Sge binaries is related to the ratio between the binary orbital period and the superhump period. Following the approach of Kato and Osaki (2013), we derive the superhump period from Stage A, and then infer the value of \(\epsilon\)* from the following equation: \[\epsilon^{*}=\frac{\omega_{pr}}{\omega_{orb}}=1-\frac{P_{orb}}{P_{SH}} \tag{1}\] in which \(\omega_{pr}\) represents the apsidal precession rate of the accretion disk, and \(\omega_{orb}\) represents the orbital frequency (Kato and Osaki, 2013). The orbital frequency is not discernible in the power spectrum, but it is already known from the spectroscopic study by Thorstensen (2020). Meanwhile, we measure a Stage A frequency of 10.936\(\pm\)0.045 cycles d\({}^{-1}\) from the power spectrum.2 Combining the Thorstensen (2020) orbital period with our measurement of the Stage A period and following Kato and Osaki (2013),3 we determine the mass ratio of the system to be \(q=0.173\pm 0.035\). This ratio is typical of an SU UMa-type system and would be significantly higher than most known WZ Sge-type systems (Figure 7). In WZ Sge stars, Osaki and Meyer (2002) estimate the upper limit to be \(q\leq 0.08\), and according to Kato (2015) most fall at or below \(q=0.06\). As the calculated mass ratio (q) falls well below the typical evolutionary track, we also calculated the value of q independently with the Stage B frequency to verify our Stage A measurements were not contaminated by Stage B. From the Stage B frequency of 11.160\(\pm\)0.089 cycles d\({}^{-1}\), and with the method provided by Kato (2022), we calculated the mass ratio of the system to be \(q=0.175\pm 0.025\). Thus, the independent calculations of \(q\) are in excellent agreement. The system's modest divergence from the typical evolutionary track in Figure 7 suggests the presence of a heavy white dwarf, which might also account for the presence of He II in Figure 6. 
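As a quick numerical check of Eq. (1), the quoted periods give the following value of \(\epsilon^{*}\); the conversion from \(\epsilon^{*}\) to \(q\) relies on the Kato and Osaki (2013) Stage A relation and is not reproduced in this sketch.

```python
# Worked numbers for Eq. (1) using the values quoted in the text (sketch only).
P_orb = 123.55 / 1440.0        # spectroscopic orbital period in days (Thorstensen 2020)
f_A = 10.936                   # measured Stage A superhump frequency, cycles per day
eps_star = 1.0 - P_orb * f_A   # = 1 - P_orb / P_SH, with P_SH = 1 / f_A
print(round(eps_star, 3))      # ~0.062, corresponding to the quoted q = 0.173 +/- 0.035
```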
Footnote 2: We estimated the uncertainties by injecting into the light curve synthetic sinusoids with the same amplitudes as the superhumps; we then measured the resulting frequency in the power spectrum, calculated the error in frequency, and repeated the procedure using a different frequency. The standard deviations of the resulting distributions are the uncertainties for our superhump period measurements. Figure 5: Top: Transition from double-wave oscillations to Stage A superhumps. The expected times of superhump maxima are indicated with black arrows (for the double-wave oscillations) and blue arrows (for Stage A superhumps). Both the double-wave oscillations and Stage A superhumps show stable, periodic maxima. Double-wave oscillations persisted for \(\sim\)15 cycles before transitioning into Stage A superhumps in just several superhump cycles. Bottom left: Power spectra of the double-wave oscillations and Stage A superhumps. Bottom right: Phase-averaged profile of double-wave oscillations. Figure 6: LBT quiescent spectrum of K2BS5 with no reddening correction (black line) and after correction for a reddening of \(E(g-r)=0.29\) mag (blue line). Wavelength units are given in angstroms and flux density units in erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). ### The First Rebrightening The power spectrum of leading up to the first rebrightening shows significant power at the superhump frequency and its second harmonic (Fig. 2), which suggests that the disk remained tidally deformed even after the superoutburst faded. One possible explanation for the enhancement of the second superhump harmonic comes from simulations and observations by Wood et al. (2011). They showed that at the conclusion of a superoutburst, the interaction between the accretion stream and the outer disk can boost power at the second harmonic of the superhump frequency because the relative depth of the stream-disk hotspot in the WD's gravitational potential varies across the superhump cycle when the rim of the outer disk is eccentric (Wood et al., 2011). We also see several "mini-rebrightenings," each lasting for \(\sim\)0.3 d with an amplitude of \(\sim\)0.3-0.4 mag, in the trough immediately before the first rebrightening. These mini-rebrightenings seen in Fig. 3 might be identical to the similarly named phenomenon observed in V585 Lyr by Kato and Osaki (2013c). The V585 Lyr mini-rebrightenings were also observed during a dip in the light curve immediately preceding a rebrightening, and their amplitudes, recurrence intervals, and durations were all comparable to what we see in K2BS5. The major difference is that the mini-rebrightenings in K2SB5 are comparatively ill-defined, with only two or three visible. This is far fewer than the nine very obvious mini-rebrightenings in Fig. 7 of Kato and Osaki (2013c). A similar phenomenon was also observed by Kato (2022a) in V844 Her. ### An SU UMa system with some properties of WZ Sge stars The pre-Stage-A oscillations near the superoutburst maximum are the most intriguing feature in the light curve, as they resemble ESH, the presence of which is often considered a defining quality of WZ Sge systems. As summarized in Kato (2015), ESH are low-amplitude, double-peaked modulations that occur within \(\sim 0.1\%\) of the binary orbital period; they appear near the superoutburst maximum and always precede ordinary superhumps. 
On one hand, the pre-Stage-A oscillations in K2BS5 appear when ESH would be expected, are consistent with the known orbital period, and have a photometric profile compatible with the compilation in Figure 11 of Kato (2015); moreover, their peak-to-peak amplitude of \(\sim 0.04\) mag is in excellent agreement with the histogram of ESH amplitudes in Figure 15 of Kato (2015). However, the period of the oscillations is too uncertain to establish that it matches the Thorstensen (2020) orbital period to within \(\sim 0.1\%\), as is required of ESH (Kato, 2015). As a result, we do not claim these oscillations to be ESH.4 Footnote 4: The large uncertainty of the photometric period is the result of two factors: the short duration of the pre-Stage-A oscillations (\(\sim\)15 cycles) and the low cadence of _Kepler_ (which precludes us from using an O\(-\)C analysis of the maxima to more precisely measure their period). In WZ Sge systems, ESH often last for significantly longer, which facilitates the determination of a highly precise orbital period. Although it might be tempting to dismiss the pre-Stage-A oscillations as a transient peculiarity of just one dwarf nova, Kato (2022a) reported the presence of an apparently similar phenomenon in TESS observations of the SU UMa-type dwarf nova V844 Her. Kato (2022a) noted that it is unclear as to whether double-waved oscillations are a general feature of SU UMa systems and cautioned that this phenomenon should not be confused with ESH; however, that study did not explain how to distinguish the two using photometry alone. Considering the recent recognition of long-period WZ Sge stars (Wakamatsu et al., 2017, and references therein), this point would benefit from elaboration, as the similarities between the two phenomena are sufficiently close that it can complicate classifications of WZ Sge systems based solely on time-series photometry. A full consideration of the criteria of WZ Sge systems provides considerable evidence against K2BS5 being a WZ Sge system, despite several similarities. In addition to ESH at the beginning of the superoutburst, WZ Sge stars are generally characterized by several additional observational properties (Kato, 2015): * one or more rebrightening events at the end of the superoutburst. * absence of a distinct precursor outburst 5. Footnote 5: The absence of a precursor outburst is sometimes observed in SU UMa systems and is therefore not an exclusive property of WZ Sge stars, as seen in Case B of Osaki and Meyer (2003) * extremely long supercycles, defined as the average time between superoutbursts, with a minimum measured duration of 4 years (Kato, 2015). * large outburst amplitudes that typically exceed 7 mag. * orbital periods that are less than 0.065 d. * a mass ratio, \(q\), generally less than 0.1. With K2BS5, it is evident from Fig. 2 that there is no distinct precursor to the superoutburst, which is a property of WZ Sge systems. The interval between supercycles, however, is a bit more ambiguous; there are large seasonal gaps in the ground-based survey photometry, and the K2 superoutburst occured during one of them. The interval between the only observed superoutbursts (in April 2017 and September 2019) suggests a supercycle of roughly 2.5 years (Figures 1 and 2). This interval would be rather short for a typical WZ Sge star, the supercycles of which typically range from 4 years to upwards of 30 years, with a majority of these systems having recurrence times shorter than \(\sim\)20 years. 
Nonetheless, Table 1 in Wakamatsu et al. (2017) lists two candidate long-period WZ Sge systems, BC UMa 6 and V1251 Cyg, whose supercycles can be as short as 2 or 3 years, respectively. Footnote 6: As Maehara et al. (2007) and Wakamatsu et al. (2017) discuss, BC UMa might be an intermediate object between typical WZ Sge systems and SU UMa systems, so it is not clear whether BC UMa’s properties can be generalized to long-period WZ Sge objects. Indeed, when Wakamatsu et al. (2017) compared the supercycles of candidate long-period WZ Sge stars, they excluded BC UMa for this reason. Another basic characteristic of WZ Sge-type systems is the rarity of normal outbursts during the periods between superoutbursts. The long-term light curve in Fig. 1 is compatible with this criterion, as K2BS5 appears to experience superoutbursts almost exclusively. The argument for a long-period WZ Sge interpretation of K2BS5 begins to fall apart on other grounds. In particular, long-period WZ Sge systems are hypothesized to have unusually low mass-transfer rates at a given orbital period, which enables the outer disk to expand unusually far (Wakamatsu et al., 2017). Several different lines of evidence suggest that K2BS5's mass-transfer rate is too high to be in this regime. First, as we noted earlier, its absolute magnitude is slightly brighter than CVs of comparable orbital period Abrahams et al. (2020). Furthermore, the presence of He II \(\lambda\)4686A in the LBT spectrum underscores that the mass-transfer rate is not especially low. The Wakamatsu et al. (2017) mechanism for producing long-period WZ Sge stars is therefore inapplicable to K2BS5. Another argument against interpreting K2BS5 as a long-period WZ Sge system is that ESH are detectable only above binary inclinations of \(i\gtrsim 40^{\circ}\)(Kato, 2015, 2022c). Thus, if the pre-Stage-A oscillations were ESH, we would expect to detect the binary orbital period in the quiescent light curve. The absence of the orbital frequency in the quiescent power spectrum is consistent with a low orbital inclination. The superoutburst amplitude (3.8\(\pm\)0.05 mag; see Sec. 3.3) is probably the most blatant observational dissimilarity with long-period WZ Sge systems. In their Section 3.3, Kato (2015) reported that 75% of known WZ Sge systems exhibit an outburst of at least 6.9 mag, with a median value of 7.7 mag. Moreover their Figure 3, which presents a histogram of the superoutburst amplitudes of WZ Sge stars, only extends down to 5 magni Figure 7: A distribution of the estimated mass ratio (\(q\)) versus binary orbital period of known WZ Sge stars. The dashed blue line shows the standard CV evolutionary track from Knigge et al. (2011), while the solid blue line represents their optimal binary track. The dashed red line marks the short-period edge of the orbital period gap (Knigge, 2006). Confirmed WZ Sge stars from Kato (2015) and Kato (2022b) are labeled in red alongside short period CV stars from Kato (2022c) in grey. K2BS5 is shown in gold, while the long-period WZ Sge systems RZ Leo and ASASSN-16eg (Wakamatsu et al., 2017) are plotted as blue diamonds. tudes, which only underscores how extraordinarily low K2BS5's amplitude is when compared to typical WZ Sge systems. On balance, K2BS5 is best characterized as an SU UMa system that shows some deceptive observational similarities with WZ Sge systems. 
The nature of the pre-Stage-A oscillations is unclear, but given the presence of a similar phenomenon of unknown origin in V844 Her (Kato, 2022), future _Kepler_- and TESS-based studies of SU UMa systems should search for this phenomenon to ascertain both its prevalence and physical origin. ## 5 Conclusion K2BS5 is an SU UMa-type dwarf nova with infrequent superoutbursts, no observed normal outbursts, and a mass ratio of \(q=0.173\pm 0.035\). Its most notable property is the short-lived appearance of double-peaked oscillations near the peak of the superoutburst, prior to the emergence of ordinary superhumps. The period of these oscillations agrees (within the errors) with the spectroscopic orbital period from Thorstensen (2020) and is significantly shorter than the periods of the subsequent ordinary superhumps. Observationally, this phenomenon could easily mimic the early superhumps observed in WZ Sge systems, but the period of the oscillations cannot be measured with sufficient precision to test whether it agrees with the orbital period to within 0.1% (a prerequisite for classifying them as early superhumps). We thank the referee, Taichi Kato, for a swift and detailed report that reshaped our interpretation of the data and significantly improved this paper. Software: astropy (Astropy Collaboration et al., 2013), lightkurve (Lightkurve Collaboration et al., 2018).
2305.19670
A converse Lyapunov-type theorem for control systems with regulated cost
Given a nonlinear control system, a target set, a nonnegative integral cost, and a continuous function $W$, we say that the system is globally asymptotically controllable to the target with W-regulated cost, whenever, starting from any point z, among the strategies that achieve classical asymptotic controllability we can select one that also keeps the cost less than W(z). In this paper, assuming mild regularity hypotheses on the data, we prove that a necessary and sufficient condition for global asymptotic controllability with regulated cost is the existence of a special, continuous Control Lyapunov function, called a Minimum Restraint function. The main novelty is the necessity implication, obtained here for the first time. Nevertheless, the sufficiency condition extends previous results based on semiconcavity of the Minimum Restraint function, while we require mere continuity.
Anna Chiara Lai, Monica Motta
2023-05-31T09:08:59Z
http://arxiv.org/abs/2305.19670v1
# A converse Lyapunov-type theorem for control systems with regulated cost

###### Abstract.

Given a nonlinear control system, a target set, a nonnegative integral cost, and a continuous function \(W\), we say that the system is _globally asymptotically controllable to the target with \(W\)-regulated cost_, whenever, starting from any point \(z\), among the strategies that achieve classical asymptotic controllability we can select one that also keeps the cost less than \(W(z)\). In this paper, assuming mild regularity hypotheses on the data, we prove that a necessary and sufficient condition for global asymptotic controllability with regulated cost is the existence of a special, continuous Control Lyapunov function, called a _Minimum Restraint function_. The main novelty is the necessity implication, obtained here for the first time. Nevertheless, the sufficiency condition extends previous results based on semiconcavity of the Minimum Restraint function, while we require mere continuity.

Key words and phrases: Converse Lyapunov-type theorem, Asymptotic controllability with regulated cost, Optimal control, Nonlinear theory, Viscosity solutions.

2020 Mathematics Subject Classification: 93D30, 93B05, 49J15, 93C10, 49L25.

*Corresponding author. This research is partially supported by the INdAM-GNAMPA Project 2023 CUP E53C22001930001.

## 1. Introduction
positive definite and proper function \(V:\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\to\mathbb{R}\) solves the decrease condition if and only if it is a viscosity supersolution of the Hamilton-Jacobi-Bellman equation \[\max_{u\in U}\Big{\{}-Dv(z)\,,\,f(z,u)\rangle-[p_{0}(v(z))\,l(z,u)+\gamma(V(z))] \Big{\}}=0\quad\text{ for all }z\in\mathbb{R}^{n}\setminus\mathcal{C},\] we derive both the facts that the value function built in the proof of the necessity implication satisfies the decrease condition and the sufficiency implication, from a viscosity super-optimality principle (Proposition 4.1 below). However, this principle is not included in the known theory, since \(p_{0}\) is merely continuous and we do not assume the usual linear growth hypothesis on the dynamics function \(f\), but \(x\)-local Lipschitz continuity only (see [1, Thm. 2.40] or [10, Thm. 3.3]). We prove it by introducing a slight generalization of a classical comparison principle for infinite horizon problems (Lemma 4.1 below), interesting in itself. We leave for future investigation the issue of the existence of a semiconcave Minimum Restraint function, which, as is well known, plays a key role in the feedback stabilizability of nonlinear systems, both with cost (see [16, 14, 13], and [9, 8] for a notion of degree-\(k\) Minimum Restraint function) and without (see e.g. [22, 23, 12, 15], the survey paper [3] and references therein, and [20, 7] for a notion of degree-\(k\) Control Lyapunov function). The paper is organized as follows. In the remaining part of this section we give some notations. In Section 2 we introduce precisely assumptions and definitions and state our Converse Lyapunov-type theorem. Section 3 is devoted to prove that global asymptotic controllability with \(W\)-regulated cost for some \(W\), implies the existence of a continuous Minimum Restraint function, while the converse implication is obtained in Section 4. ### Notation For \(a,b\in\mathbb{R}\), we set \(a\lor b:=\max\{a,b\}\), \(a\wedge b:=\min\{a,b\}\). Let \(\Omega\subseteq\mathbb{R}^{N}\) for some integer \(N\geq 1\) be a nonempty set. For every \(r\geq 0\), we set \(B_{r}(\Omega):=\{x\in\mathbb{R}^{n}_{\text{ }}|\ \ d(x,\Omega)\leq r\}\), where \(d\) is the usual Euclidean distance. We use \(\overline{\Omega}\), \(\partial\Omega\), and \(\dot{\Omega}\) to denote the closure, the boundary, and the interior of \(\Omega\), respectively. For any interval \(I\subseteq\mathbb{R}\), \(\mathcal{M}(I,\Omega)\), \(AC(I,\Omega)\) are the sets of functions \(x:I\to\Omega\), which are Lebesgue measurable or absolutely continuous, respectively, on \(I\). When no confusion may arise, we simply write \(\mathcal{M}(I)\), \(AC(I)\). As customary, we use \(\mathcal{KL}\) to denote the set of all continuous functions \(\beta:[0,+\infty)\times[0,+\infty)\to[0,+\infty)\) such that: (1) \(\beta(0,t)=0\) and \(\beta(\cdot,t)\) is strictly increasing and unbounded for each \(t\geq 0\); (2) \(\beta(r,\cdot)\) is strictly decreasing for each \(r\geq 0\); (3) \(\beta(r,t)\to 0\) as \(t\to+\infty\) for each \(r\geq 0\). Given an open set \(\Omega\subseteq\mathbb{R}^{N}\), a continuous function \(W:\overline{\Omega}\to[0,+\infty)\) is said _positive definite_ if \(W(x)>0\ \,\forall x\in\Omega\) and \(W(x)=0\ \,\forall x\in\partial\Omega\). It is called _proper_ if the pre-image \(W^{-1}(K)\) of any compact set \(K\subset[0,+\infty)\) is compact. Let \(x\in\Omega\). 
The set \[D^{-}W(x):=\left\{p\in\mathbb{R}^{n}\ |\ \liminf_{y\to x}\frac{W(y)-W(x)-p(y-x)}{|y-x|} \geq 0\right\},\] is the (possibly empty) _viscosity subdifferential of \(W\) at \(x\)_. We recall that \(p\in D^{-}W(x)\) if and only if there exists \(\varphi\in C^{1}(\Omega)\) such that \(D\varphi(x)=p\) and \(W-\varphi\) has a local minimum at \(x\) (see e.g. [1]). We use \(\partial_{P}W(x)\) to denote the _proximal _subdifferential of \(W\) at \(x\)_ (which may very well be empty). As it is known, \(p\) belongs to \(\partial_{P}W(x)\) if and only if there exist \(\sigma\) and \(\eta>0\) such that \[W(y)-W(x)+\sigma|y-x|^{2}\geq\langle p\,,\,y-x\rangle\qquad\text{ for all }y\in B_{\eta}(\{x\}).\] The _limiting subdifferential \(\partial_{L}W(x)\) of \(W\) at \(x\in\Omega\),_ is defined as \[\partial_{L}W(x):=\Big{\{}\lim_{i\to+\infty}p_{i}\mid\ p_{i}\in\partial_{P}W(x _{i}),\ \lim_{i\to+\infty}x_{i}=x\Big{\}}.\] The set \(\partial_{L}W(x)\) is always closed. If \(W\) is locally Lipschitz continuous on \(\Omega\), \(\partial_{L}W(x)\) is compact, nonempty at every point, the set-valued map \(x\rightsquigarrow\partial_{L}W(x)\) is upper semicontinuous, and the Clarke generalized gradient at \(x\) coincides with \(\operatorname{co}\partial_{L}W(x)\). As sources for nonsmooth analysis we refer e.g. to [2, 6, 27]. Let \(W:\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\to[0,+\infty)\) be a continuous, proper, and positive definite function. We can relate the level sets of \(W\) with the ones of the distance function \(\mathbf{d}\), by introducing the functions \(d_{W^{+}}\), \(d_{W^{-}}:(0,+\infty)\to(0,+\infty)\), given by \[d_{W^{-}}(r) :=\sup\left\{\alpha>0\mid\quad\{\tilde{z}\mid\ W(\tilde{z})\leq \alpha\}\subseteq\{\tilde{z}\mid\ \mathbf{d}(\tilde{z})\leq r\}\right\}, \tag{5}\] \[d_{W^{+}}(r) :=\inf\left\{\alpha>0\mid\quad\{\tilde{z}\mid\ W(\tilde{z})\leq \alpha\}\supseteq\{\tilde{z}\mid\ \mathbf{d}(\tilde{z})\leq r\}\right\}. \tag{4}\] By [13, Lemma 3.6], these functions are well-defined, increasing, and \[\lim_{r\to 0^{+}}d_{W^{+}}(r)=\lim_{r\to 0^{+}}d_{W^{-}}(r)=0,\quad\lim_{r \to+\infty}d_{W^{+}}(r)=\lim_{r\to+\infty}d_{W^{-}}(r)=+\infty. \tag{6}\] Moreover, one has \[d_{W^{-}}(\mathbf{d}(x))\leq W(x)\leq d_{W^{+}}(\mathbf{d}(x))\quad\text{ for all }x\in\mathbb{R}^{n}\setminus\mathcal{C}. \tag{7}\] Approximating \(d_{W^{-}}\) from below and \(d_{W^{+}}\) from above if necessary, we can thus assume the existence of continuous, strictly increasing functions, still denoted \(d_{W^{-}}\) and \(d_{W^{+}}\), satisfying (6) and (7). ## 2. A Converse Theorem for Minimum Restraint functions Throughout the whole paper we assume that: * \(U\subset\mathbb{R}^{m}\) _is a nonempty compact set,_ \(\mathcal{C}\subset\mathbb{R}^{n}\) _is a nonempty, closed subset with compact boundary;_ * _the functions_ \(f:\mathbb{R}^{n}\times U\to\mathbb{R}^{n}\)_,_ \(l:\mathbb{R}^{n}\times U\to[0,+\infty)\) _are continuous on_ \(\mathbb{R}^{n}\times U\)_,_ \(x\mapsto f(x,u)\) _and_ \(x\mapsto l(x,u)\) _are locally Lipschitz continuous, uniformly with respect to_ \(u\in U\)_._ Under these assumptions, given \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\) and \(u\in\mathcal{M}([0,+\infty),U)\), there exist a maximal time \(T^{max}\leq+\infty\) and a unique solution \(x\in AC([0,T^{\max}),\mathbb{R}^{n})\) such that \(x(0)=z\) and \[\dot{x}(t)=f(x(t),u(t)),\qquad\text{a.e. }t\in[0,T^{\max}). \tag{8}\] This solution (or trajectory) will be denoted \(x(\cdot\,,u,z)\). 
If, in addition, some \(c\geq 0\) is given, we define the corresponding cost \(x^{0}(\cdot\,,u,c,z)\) as \[x^{0}(t,u,c,z):=c+\int_{0}^{t}l(x(t,u,z),u(t))\,dt\qquad\text{ for all }t\in[0,T^{\max}). \tag{9}\] Let us preliminarily introduce some definitions. **Definition 2.1** (Admissible controls, trajectories, and costs).: Given an initial condition \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\), a control \(u\in\mathcal{M}([0,+\infty),U)\) is called _admissible from \(z\)_ if there exists \(0<T_{z}(u)\leq T^{\max}\leq+\infty\) such that \[x(t):=x(t,u,z)\notin\mathcal{C}\quad\text{ for all }t\in[0,T_{z}(u));\quad \lim_{t\to T_{z}^{-}(u)}\mathbf{d}(x(t))=0\quad\text{if }T_{z}(u)<+\infty.\] The set of admissible controls from \(z\) will be denoted \(\mathcal{U}(z)\). Given \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\), \(c\geq 0\), and a control \(u\in\mathcal{U}(z)\), we will extend the corresponding trajectory \(x=x(\cdot\,,u,z)\) and cost \(x^{0}:=x^{0}(\cdot\,,u,c,z)\) to \([0+\infty)\), by setting2 Footnote 2: The limit always exists, as \(\partial\mathcal{C}\) is compact and \((x^{0},x)\) is Lipschitz continuous in any compact set \(\overline{B_{R}(\mathcal{C})\setminus\mathcal{C}}\), \(R>0\). \[(x^{0},x)(t):=\lim_{t\to T_{z}^{-}(u)}(x^{0},x)(t)\qquad\text{for any }t\geq T_{z}(u).\] We will call \((x,u)\) and \((x^{0},x,u)\) (both defined on \([0,+\infty)\)) an _admissible pair from \(z\)_ and an _admissible triple from \((c,z)\),_ respectively. We recall the notion of global asymptotic controllability with regulated cost, first introduced in [19]. In the following, we will refer to any function \(\beta\in\mathcal{KL}\) as a _descent rate_. **Definition 2.2** (GAC with regulated cost).: The control system (8) is _globally asymptotically controllable - in short, GAC - to \(\mathcal{C}\)_ if there exists a descent rate \(\beta\) such that, for any initial point \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\), there is an admissible pair \((x,u)\) from \(z\), satisfying \[\mathbf{d}(x(t))\leq\beta(\mathbf{d}(z),t)\qquad\text{ for all }t\geq 0. \tag{10}\] If, in addition, there exists a continuous, proper, and positive definite function \(W:\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\to[0,+\infty)\), such that there is an admissible triple \((x^{0},x,u)\) from \((0,z)\) associated with a pair \((x,u)\) for which (10) is valid, also satisfies \[x^{0}(t)=\int_{0}^{t}l(x(s),u(s))\,ds\leq W(x)\qquad\text{ for all }t\geq 0, \tag{11}\] we say that (8) with the cost (9) is _globally asymptotically controllable to \(\mathcal{C}\) with \(W\)-regulated cost_. In this case, we will often simply say that (8)-(9) is GAC (to \(\mathcal{C}\)) with regulated cost. Given a descent rate \(\beta\) and a continuous, proper, and positive definite function \(W:\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\to[0,+\infty)\), for any \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\) such that \(\mathcal{U}(z)\neq\emptyset\), we set \[\mathcal{U}_{\beta,W}(z):=\left\{\begin{array}{c}u\in\mathcal{U}(z)\mid\text {the admissible triple }(x^{0},x,u)\text{ from }(0,z)\text{ satisfies}\\ \mathbf{d}(x(t))<\beta(\mathbf{d}(z),t)\text{ and }x^{0}(t)\leq W(z)\text{ for all }t\geq 0 \end{array}\right\}.\] Since in the definition of GAC it is clearly equivalent to replace the "\(\leq\)" in (10) with "\(<\)", when (8)-(9) meet the properties in Def. 2.2 for some \(\beta\), \(W\), we can assume without loss of generality that \(\mathcal{U}_{\beta,W}(z)\neq\emptyset\) for all \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\). 
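A one-dimensional illustration of Definition 2.2, given here only as an example and not taken from the data above, is the following. Let \(n=m=1\), \(f(x,u)=u\), \(U=[-1,1]\), \(\mathcal{C}=(-\infty,0]\), and \(l\equiv 1\). For every \(z>0\), the control \(u\equiv-1\) is admissible and yields \(x(t)=\max\{z-t,0\}\), so that
\[
\mathbf{d}(x(t))=\max\{z-t,0\}\leq z\,e^{-t/z}=:\beta(\mathbf{d}(z),t),\qquad x^{0}(t)=\min\{t,z\}\leq z=:W(z)\qquad\text{ for all }t\geq 0,
\]
where \(\beta(r,t):=r\,e^{-t/r}\) (extended by \(\beta(0,t):=0\)) belongs to \(\mathcal{KL}\), the first inequality following from \(1-s\leq e^{-s}\) for all \(s\geq 0\), and \(W(z)=z\) is continuous, positive definite, and proper. Hence this system, with this cost, is GAC to \(\mathcal{C}\) with \(W\)-regulated cost.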
Let us now give a slightly extended version of the notion of Minimum Restraint function for (8)-(9), first introduced in [19]. To this end, we consider the Hamiltonian \[H(x,p_{0},p):=\min_{u\in U}\big{\{}\langle p\,,\,f(x,u)\rangle+p_{0}\,l(x,u)\big{\}}. \tag{12}\]

**Definition 2.3** (Mrf).: Let \(W:\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\to[0,+\infty)\) be a continuous function, which is positive definite and proper. We say that \(W\) is a _Minimum Restraint function - in short, MRF_ - for (8)-(9) if it satisfies the _decrease condition_:3 Footnote 3: This means that \(H(x,p_{0}(W(x)),p)\leq-\gamma(W(x))\) for every \(p\in\partial_{P}W(x)\), where \(\partial_{P}W(x)\) is the proximal subdifferential of \(W\) at \(x\) (see Subsection 1.1). \[H(x,p_{0}(W(x)),\partial_{P}W(x))\leq-\gamma(W(x))\quad\text{ for all }x\in\mathbb{R}^{n}\setminus\mathcal{C}, \tag{13}\] for some continuous, increasing function \(p_{0}:(0,+\infty)\to[0,1]\) and some continuous, strictly increasing function \(\gamma:(0,+\infty)\to(0,+\infty)\).

As noted in the introduction, a MRF is a particular Control Lyapunov function, in which the classical decrease condition \[\min_{u\in U}\langle\partial_{P}W(x)\,,\,f(x,u)\rangle\leq-\gamma(W(x))\quad\text{ for all }x\in\mathbb{R}^{n}\setminus\mathcal{C},\] is replaced by the stronger condition (13), also involving the current cost \(l\). Finally, we consider the following integrability condition.

**Definition 2.4**.: Let \(p_{0}:(0,+\infty)\to[0,1]\) be an increasing, continuous function. We say that \(p_{0}\) satisfies the _integrability condition_ (IC), when \(1/p_{0}\) is integrable at \(0^{+}\), namely, we can define the \(C^{1}\), strictly increasing function \(P:[0,+\infty)\to[0,+\infty)\), given by \[P(v):=\int_{0}^{v}\frac{dw}{p_{0}(w)}\qquad\text{ for all }v\geq 0, \tag{14}\] and, moreover, \(P\) satisfies \(\lim_{v\to+\infty}P(v)=+\infty\).

Clearly, if \(p_{0}\) is a positive constant, as in the original definition of MRF, it trivially satisfies condition (IC). We are now ready to state our main result.

**Theorem 2.5** (Converse MRF Thm.).: _The following properties are equivalent:_

1. _system (_8_) with cost (_9_) is GAC to_ \(\mathcal{C}\) _with regulated cost;_
2. _there exists a continuous MRF for (_8_)-(_9_), for some_ \(p_{0}\) _and_ \(\gamma\) _such that_ \(p_{0}\) _satisfies the integrability condition_ (IC)_._

The rest of the paper is devoted to the proof of Theorem 2.5.

_Remark 2.6_.: Starting with the assumption that the system is GAC with regulated cost, in the proof below we will explicitly build a MRF with \(p_{0}\equiv 1\), as the unique continuous viscosity solution of the Hamilton-Jacobi-Bellman equation (in short, HJB) associated with an exit-time problem with vanishing lagrangian. As is well known, these HJB equations are highly degenerate and have in general multiple solutions, for which the continuity on the target does not propagate to the whole domain (see [21] and [26, 17, 18]). The proof technique thus consists in showing the continuity of the solution and establishing an ad hoc comparison principle. In addition to allowing us to extend the main result of [24] to the case with cost, this technique also provides an alternative approach to obtaining the classical result.
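By way of illustration (this simple example is given here only as an illustration and is not needed in what follows), consider again the one-dimensional data \(\mathcal{C}=(-\infty,0]\), \(U=[-1,1]\), \(f(x,u)=u\), \(l(x,u)=|u|\), and take \(W(x):=\mathbf{d}(x)=x\) on \([0,+\infty)\). For \(x>0\) one has \(\partial_{P}W(x)=\{1\}\), so that \[H(x,p_{0}(W(x)),\partial_{P}W(x))=\min_{u\in[-1,1]}\big{\{}u+p_{0}(x)|u|\big{\}}=p_{0}(x)-1.\] Hence \(W\) is a MRF in the sense of Definition 2.3, e.g. for the constant \(p_{0}\equiv 1/2\) (which trivially satisfies (IC)) and \(\gamma(v):=\frac{v}{2(1+v)}\), since \(p_{0}-1=-\frac{1}{2}\leq-\gamma(W(x))\) for all \(x>0\). This is consistent with Theorem 2.5, the system being GAC to \(\mathcal{C}\) with regulated cost.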
_Remark 2.7_.: In this paper we continue the study, begun with [19], aimed at constructing a unified theory, which has as extreme situations asymptotic controllability (with cost \(l\equiv 0\)) on the one hand, and the minimum time problem (with cost \(l\equiv 1\)) on the other, and for which the \(l\geq 0\) case represents, in a sense, the intermediate stage. In the original notion of MRF in [19], \(p_{0}\) was a positive constant. Extending the definition by considering \(p_{0}\) an increasing function, possibly vanishing at the origin but satisfying the integrability condition (IC), generalizes the cost bound obtained in [19], as it implies GAC with \(\bar{W}\)-regulated cost, where \(\bar{W}(x)=4\,P(W(x)/2)\) for \(P\) as in (14) (see estimate (43) below).4 Footnote 4: Actually, refining some estimates, we could likely get \(\bar{W}(x)=P(W(x))\), as a consequence of the fact that \(\bar{W}(x)\leq(1+2\varepsilon)(P(W(x)/(1+\varepsilon)))\) for every \(\varepsilon>0\). In the special case of the minimum time problem, this extension finally provides a result which is entirely consistent with the existing literature. Indeed, let \(l\equiv 1\) and assume that the distance function \(\mathbf{d}\) is a MRF for some functions \(p_{0}\) and \(\gamma\) such that (IC) holds true. Then, from the decrease condition (13), it follows that \(\mathbf{d}\) satisfies \[\min_{u\in U}\langle\partial_{P}\mathbf{d}(z)\,,\,f(z,u)\rangle\leq-\tilde{\gamma}(\mathbf{d}(z))\qquad\text{ for all }z\in\mathbb{R}^{n}\setminus\mathcal{C}, \tag{15}\] where \(\tilde{\gamma}(r):=p_{0}(r)+\gamma(r)\). As is well known, this condition combined with the integrability property (IC) for \(\tilde{\gamma}\) - sometimes called _weak Petrov condition_ - guarantees the small time local controllability of system (8) to \(\mathcal{C}\), and implies for the minimum time function \(T\) the estimate \(T(z)\leq\int_{0}^{\mathbf{d}(z)}(1/\tilde{\gamma}(r))\,dr\), in line with our result (43) (see e.g. [2]). By the expression "weak", we mean that \(\tilde{\gamma}\) can be \(0\) at \(0\), to distinguish it from the classical Petrov condition, in which \(\tilde{\gamma}\) is replaced by a positive constant. Notice that, considering only \(p_{0}\equiv\bar{p}_{0}>0\) constant, we would have \(\tilde{\gamma}\geq\bar{p}_{0}>0\), so our conditions would include just the ordinary, i.e. non-weak, Petrov condition, which implies local Lipschitz continuity of \(T\) (see e.g. [1]). Furthermore, we think that considering \(p_{0}\) not constant could play a role in regularizing a MRF, in the fashion of [22].
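To see the role played by (IC) in this minimum time setting, consider the following elementary example (given only as an illustration): \(n=1\), \(\mathcal{C}=(-\infty,0]\), \(U=[-1,1]\), \(f(x,u)=u\,x\), and \(l\equiv 1\). For \(z>0\) and \(p=1\in\partial_{P}\mathbf{d}(z)\) one computes \(\min_{u\in U}\langle p\,,\,f(z,u)\rangle=-z\), so (15) can hold only with \(\tilde{\gamma}(r)\leq r\); moreover, any \(p_{0}\), \(\gamma\) as in Definition 2.3 for \(W=\mathbf{d}\) must satisfy \(p_{0}(r)\leq r\), so that (IC) fails. This is coherent with Theorem 2.5: every trajectory satisfies \(x(t)\geq z\,e^{-t}>0\), hence the target is never reached in finite time and the system, while GAC to \(\mathcal{C}\), is not GAC with regulated cost for \(l\equiv 1\).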
As already observed, differently from [24], the present results are formulated in terms of (explicitly built) new descent rate and cost bound, and are obtained by mixing nonsmooth analysis and viscosity methods, which incidentally allow us to disregard relaxed controls. We begin by introducing some definitions. We consider a bilateral sequence \((r_{i})_{i\in\mathbb{Z}}\), given by 5 Footnote 5: As \(\beta^{-1}(r,0)\), we mean the inverse of the strictly increasing function \(r^{\prime}\mapsto r=\beta(r^{\prime},0)\). \[r_{0}:=1,\qquad r_{i}:=\min\left\{\beta^{-1}(r_{i-1},0),d_{W^{+}}^{-1}\left(\frac{1}{4}d_{W^{-}}(r_{i-1})\right)\right\}\quad\text{ for all }i\in\mathbb{Z}. \tag{16}\] Clearly, \((r_{i})_{i\in\mathbb{Z}}\) is positive, strictly decreasing, so that \(r_{1}<1\), and satisfies \[\lim_{i\rightarrow-\infty}r_{i}=+\infty,\qquad\lim_{i\rightarrow+\infty}r_{i}=0.\] Hence, we have \[\beta(r_{i},0)\leq r_{i-1},\quad d_{W^{+}}(r_{i})\leq\frac{1}{4}d_{W^{-}}(r_{i-1})\quad\text{ for all }i\in\mathbb{Z}\] and, consequently, recalling that \(d_{W^{-}}\leq d_{W^{+}}\) (see (4) and (5)), \[d_{W^{+}}(r_{i+N})\leq\frac{1}{4^{N}}d_{W^{-}}(r_{i})\leq\frac{1}{4^{N}}d_{W^{+}}(r_{i})\quad\text{ for all }i\in\mathbb{Z},\ \text{ for all }N\in\mathbb{N},\ N\geq 1. \tag{17}\] For any \(i\in\mathbb{Z}\), let \[\mathcal{B}_{i}:=\{z\in\mathbb{R}^{n}\setminus\mathcal{C}\ |\ \ \mathbf{d}(z)\in[r_{i},r_{i-1}]\}, \tag{18}\] so that \(\mathbb{R}^{n}\setminus\mathcal{C}=\cup_{i\in\mathbb{Z}}\mathcal{B}_{i}\). Finally, we define the _\(i\)-th \((\beta,W)\)-strip_ \(\mathcal{A}_{i}\), as \[\mathcal{A}_{i}:=\left\{\begin{array}{c}(x^{0},x,u,z)\mid(x^{0},x,u)\text{ admissible triple from }(0,z),\\ u\in\mathcal{U}_{\beta,W}(z),\ z\in\mathcal{B}_{i}\end{array}\right\}.\]

**Lemma 3.1**.: _There exist a \(\mathcal{KL}\) function \(\bar{\beta}\geq\beta\), a continuous, unbounded, strictly increasing map \(\Phi:[0,+\infty)\to[0,+\infty)\) with \(\Phi(0)=0\), and a function \(T:(0,+\infty)\to[0,+\infty)\) with \(T(R)=0\) for all \(R\leq r_{1}\),6 such that for any \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\) there exists an admissible triple \((\hat{x}^{0},\hat{x},\hat{u})\) from \((0,z)\) enjoying the following properties:_ Footnote 6: The value \(r_{1}\) is defined as in (16).

* \(\mathbf{d}(\hat{x}(t))\leq\bar{\beta}(\mathbf{d}(z),t)\) _for all_ \(t\geq 0\)_;_
* \(\hat{x}^{0}(t)\leq\bar{W}(z):=\Phi(\mathbf{d}(z))\) _for all_ \(t\geq 0\)_;_
* \(\mathbf{d}(\hat{x}(t))\leq\bar{\beta}(1,t-T(\mathbf{d}(z)))\) _for all_ \(t\geq T(\mathbf{d}(z))\)_._

Proof.: _Step 1 (Properties of the \((\beta,W)\)-strips)._ Fix \(i\in\mathbb{Z}\) and let \((x^{0},x,u,z)\in\mathcal{A}_{i}\). From the definitions of \((r_{i})_{i\in\mathbb{Z}}\), \(d_{W^{-}}\), and \(d_{W^{+}}\), it follows that \[\mathbf{d}(x(t))<\beta(\mathbf{d}(z),t)\leq\beta(r_{i-1},0)<r_{i-2}\qquad\text{for all }t\geq 0, \tag{19}\] and \[W(z)\leq d_{W^{+}}(r_{i-1})\leq\frac{1}{4}d_{W^{-}}(r_{i-2}). \tag{20}\] Define \[T_{i,z}:=\inf\left\{t\geq 0\mid\mathbf{d}(x(t))=\frac{r_{i}+r_{i+1}}{2}\right\}.\] Clearly, \(0<T_{i,z}<T_{z}(u)\), where \(T_{z}(u)\) is as in Def. 2.1. Set \[\tilde{\varepsilon}_{i,z}:=\inf\left\{\frac{1}{2}\big{(}\beta(\mathbf{d}(z),t)-\mathbf{d}(x(t))\big{)}\mid t\in[0,T_{i,z}]\right\}.\] Note that by the continuity of \(\beta\) and of \(x\), \(\tilde{\varepsilon}_{i,z}\) is actually a minimum and \(\tilde{\varepsilon}_{i,z}>0\).
Also define \(\hat{\varepsilon}_{i}:=\frac{1}{4}d_{W^{+}}(r_{i-1})\), \(\bar{\varepsilon}_{i}:=\frac{r_{i}-r_{i+1}}{4}\) and, finally, set \[\varepsilon_{i,z}:=\min\{\tilde{\varepsilon}_{i,z},\hat{\varepsilon}_{i},\bar{\varepsilon}_{i}\}.\] Since by assumption \(f(\cdot,u)\) and \(l(\cdot,u)\) are locally Lipschitz continuous, uniformly with respect to \(u\in U\), then there exists \(\delta_{i,z}>0\) such that, for all \(\bar{z}\in\mathbb{R}^{n}\setminus\mathcal{C}\) satisfying \(|\bar{z}-z|<\delta_{i,z}\), the cost \(x^{0}(\cdot\,,u,0,\bar{z})\) and the trajectory \(x(\cdot\,,u,\bar{z})\) are defined on \([0,T_{i,z}]\) and satisfy \[|x(t)-x(t,u,\bar{z})|\leq\varepsilon_{i,z},\quad|x^{0}(t)-x^{0}(t,u,0,\bar{z})|\leq\varepsilon_{i,z}\qquad\text{for all }t\in[0,T_{i,z}].\] Hence, for all \(t\in[0,T_{i,z}]\), using (20), we get \[x^{0}(t,u,0,\bar{z})\leq x^{0}(t)+\varepsilon_{i,z}\leq W(z)+\hat{\varepsilon}_{i}\leq W(z)+\frac{1}{4}d_{W^{+}}(r_{i-1})\] \[\leq\frac{1}{4}d_{W^{-}}(r_{i-2})+\frac{1}{4}d_{W^{+}}(r_{i-1})\leq\frac{1}{2}d_{W^{+}}(r_{i-2}).\] Moreover, in view of the definition of \(\varepsilon_{i,z}\), we also have \[\mathbf{d}(x(t,u,\bar{z}))\leq\mathbf{d}(x(t))+\frac{1}{2}\big{(}\beta(\mathbf{d}(z),t)-\mathbf{d}(x(t))\big{)}=\frac{1}{2}\beta(\mathbf{d}(z),t)+\frac{1}{2}\mathbf{d}(x(t))\] \[<\beta(\mathbf{d}(z),t)\leq\beta(r_{i-1},0)\leq r_{i-2},\] whereas the definition of \(T_{i,z}\) implies \[\mathbf{d}(x(t,u,\bar{z}))\geq\mathbf{d}(x(t))-\bar{\varepsilon}_{i}\geq\mathbf{d}(x(T_{i,z}))-\bar{\varepsilon}_{i}\] \[\geq\frac{r_{i}+r_{i+1}}{2}-\frac{r_{i}-r_{i+1}}{4}=\frac{r_{i}}{4}+\frac{3}{4}r_{i+1}>r_{i+1},\] and \[\mathbf{d}(x(T_{i,z},u,\bar{z}))\leq\mathbf{d}(x(T_{i,z}))+\bar{\varepsilon}_{i}=\frac{r_{i}+r_{i+1}}{2}+\frac{r_{i}-r_{i+1}}{4}=\frac{3}{4}r_{i}+\frac{1}{4}r_{i+1}<r_{i}.\] Summarizing the above results, we can conclude that, for every \(i\in\mathbb{Z}\) and every \(z\in\mathcal{B}_{i}\) (\(\mathcal{B}_{i}\) as in (18)), to which, by the axiom of choice, we associate an element \((x^{0},x,u,z)\in\mathcal{A}_{i}\), there exists \(\delta_{i,z}>0\) such that, for all \(\bar{z}\in\mathbb{R}^{n}\setminus\mathcal{C}\) with \(|z-\bar{z}|<\delta_{i,z}\), one has

1. \(\mathbf{d}(x(t,u,\bar{z}))\in(r_{i+1},r_{i-2})\) for all \(t\in[0,T_{i,z}]\);
2. \(\mathbf{d}(x(T_{i,z},u,\bar{z}))\in(r_{i+1},r_{i})\), i.e. \(x(T_{i,z},u,\bar{z})\in\mathring{\mathcal{B}}_{i+1}\);
3. \(x^{0}(T_{i,z},u,\bar{z})\leq\frac{1}{2}d_{W^{+}}(r_{i-2})\).

_Step 2 (Construction of a suitable admissible triple)_ Preliminarily, observe that, since \(\partial\mathcal{C}\) is compact, for every \(i\in\mathbb{Z}\) the set \(\mathcal{B}_{i}\) is compact. Therefore, the cover of \(\mathcal{B}_{i}\) given by the open balls \(\mathring{B}_{\delta_{i,z}}(\{z\})\), \(z\in\mathcal{B}_{i}\), admits a finite subcover corresponding to the points \(z\in Z_{i}\), for some finite subset \(Z_{i}\) of \(\mathcal{B}_{i}\). Fix a positive bilateral sequence \((T_{i})_{i\in\mathbb{Z}}\) such that \[T_{i}\geq\max\{T_{i,z}\mid z\in Z_{i}\},\qquad\quad\sum_{j=0}^{\infty}T_{i+j}=+\infty,\quad\text{for every }i\in\mathbb{Z}. \tag{21}\] Furthermore, thanks to the properties of the bilateral sequence \((r_{i})_{i\in\mathbb{Z}}\), we can define the map \(i:(0,+\infty)\to\mathbb{Z}\), given by \[i(r)=i\qquad\text{if }r\in(r_{i},r_{i-1}]. \tag{22}\] Fix now \(\bar{z}\in\mathbb{R}^{n}\setminus\mathcal{C}\) and let \(i:=i(\mathbf{d}(\bar{z}))\).
Then, \(\bar{z}\in\mathring{B}_{\delta_{i,z_{0}}}(\{z_{0}\})\) for some \(z_{0}\in Z_{i}\subset\mathcal{B}_{i}\). Let \((x_{0}^{0},x_{0},u_{0},z_{0})\in\mathcal{A}_{i}\) be the associated process from \((0,z_{0})\). Since \(|\bar{z}-z_{0}|<\delta_{i,z_{0}}\), from Step 1 it follows that \(\hat{x}^{0}:=x^{0}(\,\cdot\,,u_{0},0,\bar{z})\) and \(\hat{x}:=x(\,\cdot\,,u_{0},\bar{z})\) are defined on the interval \([0,\hat{t}_{0}]\), \(\hat{t}_{0}:=T_{i,z_{0}}\leq T_{i}\), and satisfy

1. \(\mathbf{d}(\hat{x}(t))\in(r_{i+1},r_{i-2})\) for all \(t\in[0,\hat{t}_{0}]\);
2. \(\hat{x}(\hat{t}_{0})\in\mathring{\mathcal{B}}_{i+1}\);
3. \(\hat{x}^{0}(\hat{t}_{0})\leq\frac{1}{2}d_{W^{+}}(r_{i-2})\).

Repeating the above procedure with the initial conditions \(\bar{z}_{1}:=\hat{x}(\hat{t}_{0})\in\mathcal{B}_{i+1}\) and \(\bar{c}_{1}:=\hat{x}^{0}(\hat{t}_{0})\), we get the existence of an admissible control \(u_{1}\in\mathcal{U}_{\beta,W}(\bar{z}_{1})\) and of a time \(\hat{t}_{1}\leq T_{i+1}\), such that extending \(\hat{x}^{0}\) and \(\hat{x}\) by setting \(\hat{x}^{0}(t)=x^{0}(t-\hat{t}_{0},u_{1},\bar{c}_{1},\bar{z}_{1})\) and \(\hat{x}(t)=x(t-\hat{t}_{0},u_{1},\bar{z}_{1})\) for \(t\in[\hat{t}_{0},\hat{t}_{0}+\hat{t}_{1}]\), respectively, one has

1. \(\mathbf{d}(\hat{x}(t))\in(r_{i+2},r_{i-1})\) for all \(t\in[\hat{t}_{0},\hat{t}_{0}+\hat{t}_{1}]\);
2. \(\hat{x}(\hat{t}_{0}+\hat{t}_{1})\in\mathring{\mathcal{B}}_{i+2}\);
3. \(\hat{x}^{0}(\hat{t}_{0}+\hat{t}_{1})\leq\frac{1}{2}d_{W^{+}}(r_{i-2})+\frac{1}{2}d_{W^{+}}(r_{i-1})\leq\frac{1}{2}d_{W^{+}}(r_{i-2})+\frac{1}{8}d_{W^{+}}(r_{i-2})\).

Of course, setting \(\hat{u}(t):=u_{0}(t)\chi_{[0,\hat{t}_{0}]}(t)+u_{1}(t-\hat{t}_{0})\chi_{(\hat{t}_{0},\hat{t}_{0}+\hat{t}_{1}]}(t)\), we have \(\hat{x}^{0}=x^{0}(\cdot\,,\hat{u},0,\bar{z})\) and \(\hat{x}=x(\cdot\,,\hat{u},\bar{z})\). In this way, we can recursively construct sequences of controls \((u_{N})_{N\in\mathbb{N}}\) and times \((\hat{t}_{N})_{N\in\mathbb{N}}\) with \(\hat{t}_{N}\leq T_{i+N}\) for all \(N\), such that, setting \[\hat{T}_{-1}:=0,\quad\hat{T}_{N}:=\sum_{j=0}^{N}\hat{t}_{j},\quad\hat{T}_{\infty}:=\sum_{j=0}^{+\infty}\hat{t}_{j}\] and \[\hat{u}(t):=u_{N}(t-\hat{T}_{N-1})\qquad\text{for all }t\in(\hat{T}_{N-1},\hat{T}_{N}]\] we have \(\hat{u}\in\mathcal{M}([0,\hat{T}_{\infty}),U)\) and the corresponding solution \((\hat{x}^{0},\hat{x})\) from \((0,\bar{z})\) is defined on the whole interval \([0,\hat{T}_{\infty})\) and satisfies

1. \(\mathbf{d}(\hat{x}(t))\in(r_{i+N+1},r_{i+N-2})\) for all \(t\in[\hat{T}_{N-1},\hat{T}_{N}]\);
2. \(\hat{x}(\hat{T}_{N})\in\mathring{\mathcal{B}}_{i+N+1}\);
3. \(\hat{x}^{0}(\hat{T}_{N})\leq\hat{x}^{0}(\hat{T}_{N-1})+\frac{1}{2}d_{W^{+}}(r_{i+N-2})\leq Cd_{W^{+}}(r_{i-2})\), for \(C:=\frac{1}{2}\sum_{j=0}^{\infty}\frac{1}{4^{j}}\). (The latter inequality can be easily proved by induction.)

From these relations, it follows immediately that \(\hat{T}_{\infty}=T_{\bar{z}}(\hat{u})\), \(\lim_{t\to\hat{T}_{\infty}^{-}}\mathbf{d}(\hat{x}(t))=0\), and \(\hat{x}^{0}(\hat{T}_{\infty})\leq Cd_{W^{+}}(r_{i-2})\) (actually, even if \(\hat{T}_{\infty}=+\infty\)).
Hence, if, in case \(\hat{T}_{\infty}<+\infty\), we extend the process \((\hat{x}^{0},\hat{x},\hat{u})\) to \([0,+\infty)\) by setting \(\hat{u}(t)=w\) (\(w\in U\) arbitrary) and \((\hat{x}^{0},\hat{x})(t)=\lim_{s\to\hat{T}_{\infty}^{-}}(\hat{x}^{0},\hat{x})(s)\) for any \(t\geq\hat{T}_{\infty}\), we can conclude that \((\hat{x}^{0},\hat{x},\hat{u})\) is an admissible triple from \((0,\bar{z})\).

_Step 3 (Construction of \(\bar{\beta}\), \(\Phi\), and \(T\))._ For all \(i\in\mathbb{Z}\) and \(N\in\mathbb{N}\), set \[\bar{T}_{i,-1}:=0,\qquad\bar{T}_{i,N}:=\sum_{j=0}^{N}T_{i+j},\] where the times \(T_{i}\) are as in (21), so that for any \(i\in\mathbb{Z}\), \(\bar{T}_{i,N}\to+\infty\) as \(N\to+\infty\). Then, we define the function \(T:(0,+\infty)\to[0,+\infty)\), as \[T(R):=\bar{T}_{i(R),-1\vee(-i(R)+1)}\qquad\text{for all }R>0,\] where \(i(\cdot)\) is as in (22). Note that if \(R\leq r_{1}<1\) then \(i(R)\geq 2\) and this implies \(T(R)=\bar{T}_{i(R),-1}=0\). Since \(d_{W^{+}}\) as well as \(R\mapsto r_{i(R)}\) are increasing functions, the composition \(R\mapsto d_{W^{+}}(r_{i(R)-2})\) is a positive, piecewise constant, increasing function, such that \(d_{W^{+}}(r_{i(R)-2})\to 0\) as \(R\to 0^{+}\) and \(d_{W^{+}}(r_{i(R)-2})\to+\infty\) as \(R\to+\infty\). There is thus a continuous, strictly increasing approximation from above \(\Phi:[0,+\infty)\to[0,+\infty)\) of this composition times \(C\) (\(C\) as in (c.N) above), vanishing at zero and unbounded, namely \[C\,d_{W^{+}}(r_{i(R)-2})\leq\Phi(R)\qquad\text{ for all }R>0.\] Finally, we introduce the function \(b:[0,+\infty)\times[0,+\infty)\to[0,+\infty)\), given by \[\begin{cases}b(R,t):=r_{i+N-2}&\text{if }(R,t)\in(r_{i},r_{i-1}]\times[\bar{T}_{i,N-1},\bar{T}_{i,N})\quad\text{ for all }i\in\mathbb{Z},\ N\in\mathbb{N}\\ b(0,t):=0&\text{for all }t\geq 0.\end{cases}\] Note that \(b(\cdot,t)\) is increasing and \(b(R,t)\to+\infty\) as \(R\to+\infty\), for all \(t\geq 0\). Similarly, \(b(R,\cdot)\) is positive, decreasing and \(b(R,t)\to 0\) as \(t\to+\infty\) for all \(R>0\). Using e.g. a linear interpolation procedure, it is not difficult to show that the discontinuous function \(b\) can be approximated from above by some \(\mathcal{KL}\) function \(\bar{\beta}\), which is \(\geq\beta\) by construction. Let \(\bar{z}\in\mathbb{R}^{n}\setminus\mathcal{C}\) and set \(i:=i(\mathbf{d}(\bar{z}))\). Then, since \(\hat{T}_{N}\leq\bar{T}_{i,N}\) for all \(N\geq-1\), the admissible triple \((\hat{x}^{0},\hat{x},\hat{u})\) from \((0,\bar{z})\) built in Step 2, satisfies \[\mathbf{d}(\hat{x}(t))<\sum_{N\in\mathbb{N}}r_{i+N-2}\,\chi_{[\bar{T}_{i,N-1},\bar{T}_{i,N})}(t)\leq b(\mathbf{d}(\bar{z}),t)\leq\bar{\beta}(\mathbf{d}(\bar{z}),t)\qquad\text{ for all }t\geq 0, \tag{23}\] and \[\hat{x}^{0}(t)\leq\bar{W}(\bar{z}):=\Phi(\mathbf{d}(\bar{z}))\qquad\text{ for all }t\geq 0,\] so that \(\hat{u}\in\mathcal{U}_{\bar{\beta},\bar{W}}(\bar{z})\). Notice that \(\bar{W}:\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\to[0,+\infty)\) is a continuous, proper and positive definite function, in view of the properties of \(\Phi\) and \(\mathbf{d}\). This concludes the proof of statements (i) and (ii). In order to prove (iii), let us first suppose \(\mathbf{d}(\bar{z})\leq r_{1}<r_{0}=1\). In this case, \(i(\mathbf{d}(\bar{z}))\geq 2\) and \(T(\mathbf{d}(\bar{z}))=\bar{T}_{i(\mathbf{d}(\bar{z})),-1}=0\).
Hence (iii) follows from (23), because \(\bar{\beta}(\cdot,t)\) is strictly increasing for every \(t\geq 0\), so that \[\mathbf{d}(\hat{x}(t))<\bar{\beta}(\mathbf{d}(\bar{z}),t)\leq\bar{\beta}(1,t)=\bar{\beta}(1,t-T(\mathbf{d}(\bar{z})))\quad\text{ for all }t\geq T(\mathbf{d}(\bar{z}))=0.\] If instead \(\mathbf{d}(\bar{z})>r_{1}\), we have \(i:=i(\mathbf{d}(\bar{z}))\leq 1\). Set \(N:=-i(\mathbf{d}(\bar{z}))+2\quad(\geq 1)\) and \(\bar{z}_{N}:=\hat{x}(\hat{T}_{N-1})\), so that, by property (b.N) of Step 2, \(r_{2}<\mathbf{d}(\bar{z}_{N})<r_{1}<r_{0}=1\). Note that, applying the above construction from the initial condition \((0,\bar{z}_{N})\), the obtained admissible triple, say \((\hat{x}^{0}_{N},\hat{x}_{N},\hat{u}_{N})\) from \((0,\bar{z}_{N})\), satisfying \[\mathbf{d}(\hat{x}_{N}(s))<\bar{\beta}(\mathbf{d}(\bar{z}_{N}),s),\quad\hat{x}^{0}_{N}(s)\leq\Phi(\mathbf{d}(\bar{z}_{N}))\quad\text{ for all }s\geq 0,\] is simply given by \[\hat{u}_{N}(\cdot)=\hat{u}(\cdot\,+\hat{T}_{N-1}),\quad\hat{x}^{0}_{N}=x^{0}(\cdot\,,\hat{u}_{N},0,\bar{z}_{N}),\quad\hat{x}_{N}=x(\cdot\,,\hat{u}_{N},\bar{z}_{N}).\] This fact is crucial for property (iii) to hold. Indeed, using the monotonicities of \(\bar{\beta}\) and the inequality \(\hat{T}_{N-1}\leq\bar{T}_{i,N-1}=T(\mathbf{d}(\bar{z}))\), it implies that \[\mathbf{d}(\hat{x}(t))=\mathbf{d}(\hat{x}_{N}(t-\hat{T}_{N-1}))<\bar{\beta}(\mathbf{d}(\bar{z}_{N}),t-\hat{T}_{N-1})\] \[<\bar{\beta}(1,t-\hat{T}_{N-1})\leq\bar{\beta}(1,t-T(\mathbf{d}(\bar{z})))\qquad\text{ for all }t\geq T(\mathbf{d}(\bar{z})).\]

**Lemma 3.2**.: _There exist two continuous, strictly increasing functions \(\ell\), \(\Psi:[0,+\infty)\to[0,+\infty)\), with \(\ell(0)=0\), \(\lim\limits_{R\to+\infty}\ell(R)=+\infty\), such that the functional \(J:(\mathbb{R}^{n}\setminus\mathcal{C})\times\mathcal{M}([0,+\infty),U)\to[0,+\infty)\cup\{+\infty\}\), defined as_ \[J(z,u):=\left\{\begin{array}{ll}\int_{0}^{T_{z}(u)}\left[\ell(\mathbf{d}(x(t,u,z)))+l(x(t,u,z),u(t))\right]dt&\text{if }u\in\mathcal{U}(z)\text{,}\\ +\infty&\text{if }u\in\mathcal{M}([0,+\infty),U)\setminus\mathcal{U}(z)\text{,}\end{array}\right.\] _enjoys the properties_ (i)_-_(v) _below._

_For every_ \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\)_, let_ \((\hat{x}^{0},\hat{x},\hat{u})\) _be the admissible triple from_ \((0,z)\) _built in Lemma_ 3.1_. Using the notations of Lemma_ 3.1_, we have_

1. \(J(z,\hat{u})<+\infty\)_;_
2. _if_ \(\mathbf{d}(z)<r_{1}\)_, then_ \(J(z,\hat{u})\leq\bar{\beta}(\mathbf{d}(z),0)+\Phi(\mathbf{d}(z))\)_;_
3. _for any_ \(u\in\mathcal{M}([0,+\infty),U)\) _such that_ \(J(z,u)\leq J(z,\hat{u})\)_, (_\(u\in\mathcal{U}(z)\) _and)_ \(\mathbf{d}(x(t,u,z))\leq\Psi(\mathbf{d}(z))\) _for all_ \(t\geq 0\)_;_
4. _for all_ \(\alpha>0\) _there exists_ \(\Theta>0\) _such that, if_ \(J(z,u)<\alpha\) _for some_ \(u\in\mathcal{M}([0,+\infty),U)\)_, then_ \(\mathbf{d}(z)\leq\Theta\)_;_
5. _for all_ \(\alpha>0\) _there exists_ \(\delta>0\) _such that, if_ \(\mathbf{d}(z)>\alpha\)_, then_ \(J(z,u)>\delta\) _for all_ \(u\in\mathcal{M}([0,+\infty),U)\)_._

Proof.: Let \(\bar{\beta}\) be a \(\mathcal{KL}\) function as in Lemma 3.1. Extend the continuous strictly decreasing function \([0,+\infty)\ni t\mapsto\bar{\beta}(1,t)\) to a continuous strictly decreasing function defined on \(\mathbb{R}\) and tending to \(+\infty\) as \(t\to-\infty\).
Let \(\tau:(0,+\infty)\to\mathbb{R}\) be the inverse of this map, so that, in particular, \(\tau\) is continuous, strictly decreasing, \(\tau(R)\to+\infty\) as \(R\to 0^{+}\) and \(\tau(R)\to-\infty\) as \(R\to+\infty\).

_Step 1. (Construction of a function \(\ell_{1}\))_ Define the continuous, strictly increasing, and unbounded function \(\ell_{1}:[0,+\infty)\to[0,+\infty)\), given by \[\ell_{1}(R):=Re^{-\tau(R)}\quad\text{for all }R>0,\qquad\ell_{1}(0):=0.\] For any \(0<b<c\), we claim that there exists a function \(\varkappa(b,c)\) such that, if \(z_{1},z_{2}\in\mathbb{R}^{n}\setminus\mathcal{C}\), \(u\in\mathcal{M}([0,+\infty),U)\), and \(T>0\) satisfy \(z_{2}=x(T,u,z_{1})\) and \(b\leq\mathbf{d}(x(t,u,z_{1}))\leq c\) for all \(t\in[0,T]\), then \[\int_{0}^{T}\ell_{1}(\mathbf{d}(x(t,u,z_{1})))dt\geq\varkappa(b,c)|z_{1}-z_{2}|\geq\varkappa(b,c)|\mathbf{d}(z_{1})-\mathbf{d}(z_{2})|. \tag{24}\] Indeed, if \(z_{1}=z_{2}\), (24) is trivially satisfied. If instead \(z_{1}\neq z_{2}\), then \(\bar{M}(b,c):=\max\{|f(x,u)|\mid b\leq\mathbf{d}(x)\leq c,\,u\in U\}>0\) and \(|z_{1}-z_{2}|\leq T\bar{M}(b,c)\). Setting \(\varkappa(b,c):=\ell_{1}(b)\bar{M}(b,c)^{-1}\), since \(\ell_{1}\) is increasing, we have \(\ell_{1}(\mathbf{d}(x(t,u,z_{1})))\geq\ell_{1}(b)\) for all \(t\in[0,T]\), and consequently \[\int_{0}^{T}\ell_{1}(\mathbf{d}(x(t,u,z_{1})))dt\geq T\ell_{1}(b)\geq\varkappa(b,c)|z_{1}-z_{2}|.\]

_Step 2 (Recursive definition of a sequence \(\ell_{j}\))_ Starting from \(\ell_{1}\) introduced in Step 1, for every \(j\geq 1\) we will recursively define an increasing sequence of functions \(\ell_{j}:[0,+\infty)\to\mathbb{R}\) with the following property: \[0\leq R\leq\bar{\beta}(j,0)\quad\Longrightarrow\quad\ell_{j+1}(R)=\ell_{j}(R). \tag{25}\] Then, we will prove that the pointwise limit \(\ell:=\lim\limits_{j\to+\infty}\ell_{j}\) (finite, because of (25)), together with the map \(\Psi\) defined below, satisfies (i)-(v). Further, all the functions \(\ell_{j}\) (and hence \(\ell\) itself) will be continuous. We begin by assuming that, for some fixed \(j\geq 2\), we already defined the functions \(\ell_{i}\) for all \(i\leq j\), satisfying (25) for all \(i=1,\ldots,j-1\). Given \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\) and any control \(u\in\mathcal{U}(z)\), let us set \[J_{j}(z,u):=\int_{0}^{T_{z}(u)}\left[\ell_{j}(\mathbf{d}(x(t,u,z)))+l(x(t,u,z),u(t))\right]dt.\] Observe that, if we consider the admissible triple \((x^{0},x,u)\) from \((0,z)\) defined on \([0,+\infty)\) (even in case \(T_{z}(u)<+\infty\), as in Def. 2.1), we can equivalently write \[J_{j}(z,u)=\int_{0}^{+\infty}\ell_{j}(\mathbf{d}(x(t)))\,dt+\lim\limits_{t\to T_{z}^{-}(u)}x^{0}(t),\] as \(\mathbf{d}(x(t))=0\) for all \(t\geq T_{z}(u)\), whenever \(T_{z}(u)<+\infty\). Assume \(\mathbf{d}(z)\leq j\). Let \(T(\cdot)\) and \((\hat{x}^{0},\hat{x},\hat{u})\) be the uniform time and the admissible triple from \((0,z)\) built in Lemma 3.1, respectively.
Then, we have \[\mathbf{d}(\hat{x}(t))\leq\bar{\beta}(1,t-T(j))\leq\bar{\beta}(1,0)\qquad\text{for all }t\geq T(j).\] In particular, (25) (together with the fact that \(\bar{\beta}(\cdot,0)\) is increasing) implies \[\ell_{j}(\mathbf{d}(\hat{x}(t)))=\ell_{1}(\mathbf{d}(\hat{x}(t)))\qquad\text{ for all }t\geq T(j).\] If otherwise \(t\leq T(j)\), we get \[\mathbf{d}(\hat{x}(t))<\bar{\beta}(\mathbf{d}(z),0)\leq\bar{\beta}(j,0).\] Hence, we have \[\int_{0}^{+\infty}\ell_{j}(\mathbf{d}(\hat{x}(t)))dt=\int_{0}^{T(j)}\ell_{j}(\mathbf{d}(\hat{x}(t)))dt+\int_{T(j)}^{+\infty}\ell_{j}(\mathbf{d}(\hat{x}(t)))dt\] \[\leq\ell_{j}(\bar{\beta}(j,0))T(j)+\int_{T(j)}^{+\infty}\ell_{1}(\mathbf{d}(\hat{x}(t)))dt\] \[=\ell_{j}(\bar{\beta}(j,0))T(j)+\int_{T(j)}^{+\infty}\mathbf{d}(\hat{x}(t))\,e^{-\tau(\mathbf{d}(\hat{x}(t)))}dt\] \[\leq\ell_{j}(\bar{\beta}(j,0))T(j)+\int_{T(j)}^{+\infty}\bar{\beta}(1,t-T(j))\,e^{-\tau(\bar{\beta}(1,t-T(j)))}dt\] \[\leq\ell_{j}(\bar{\beta}(j,0))T(j)+\bar{\beta}(1,0)\int_{T(j)}^{+\infty}e^{-t+T(j)}dt\] \[=\ell_{j}(\bar{\beta}(j,0))T(j)+\bar{\beta}(1,0)=:L_{j}.\] Therefore, from Lemma 3.1,(ii) it follows that \[J_{j}(z,\hat{u})\leq L_{j}+\Phi(j)\quad\text{for all }z\in\mathbb{R}^{n}\setminus\mathcal{C}\text{ such that }\mathbf{d}(z)\leq j. \tag{26}\] Now, set \(\varkappa_{j}:=\varkappa(\bar{\beta}(j,0)+1,\bar{\beta}(j,0)+2)\) and consider a continuous function \(\rho_{j}:[0,+\infty)\to[0,+\infty)\) such that \[\rho_{j}(R)=\begin{cases}0&\text{if }R\leq\bar{\beta}(j,0)\text{ or }R\geq\bar{\beta}(j,0)+3,\\ \dfrac{L_{j}+\Phi(j)}{\varkappa_{j}}&\text{if }\bar{\beta}(j,0)+1\leq R\leq\bar{\beta}(j,0)+2.\end{cases}\] Finally, define \[\ell_{j+1}(R):=(1+\rho_{j}(R))\ell_{j}(R)\qquad\text{for all }R\geq 0.\] Clearly, \(\ell_{j+1}\) satisfies (25).

_Step 3. (Definition of \(\ell\) and \(\Psi\))_ As anticipated above, we define \(\ell\) as the pointwise limit of the increasing sequence \((\ell_{j})\). Incidentally notice that, by construction, \(\ell\) is continuous and \(\ell(R)\geq\ell_{j}(R)\) for all \(R\in[0,+\infty)\) and all \(j\geq 1\). Furthermore, let \(\Psi:[0,+\infty)\to[0,+\infty)\) be any continuous strictly increasing function, satisfying \[R\in[j-1,j]\quad\Rightarrow\quad\Psi(R)\geq\bar{\beta}(j,0)+2\quad\text{ for all }j\geq 1.\] We show that \(\ell\) and \(\Psi\) have the required properties. Let \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\) be given. In order to prove (i), it suffices to observe that there exists an integer \(j\geq 1\) such that \(j-1<\mathbf{d}(z)\leq j\). Then, (26) implies that \[J(z,\hat{u})=J_{j}(z,\hat{u})\leq L_{j}+\Phi(j)<+\infty. \tag{27}\] Let us now prove (ii). Assume \(\mathbf{d}(z)\leq r_{1}<1\). Then, \(T(\mathbf{d}(z))=0\), \(\ell(\mathbf{d}(\hat{x}(t)))=\ell_{1}(\mathbf{d}(\hat{x}(t)))\) for all \(t\geq 0\), and, arguing as in Step 2, we obtain \[J(z,\hat{u})=\int_{0}^{+\infty}\left[\ell_{1}(\mathbf{d}(\hat{x}(t)))+l(\hat{x}(t),\hat{u}(t))\right]dt\leq\int_{0}^{+\infty}\bar{\beta}(\mathbf{d}(z),t)\,e^{-t}\,dt+\Phi(\mathbf{d}(z))\leq\bar{\beta}(\mathbf{d}(z),0)+\Phi(\mathbf{d}(z)).\] To prove (iii), consider \(u\in\mathcal{M}([0,+\infty),U)\) satisfying \(J(z,u)\leq J(z,\hat{u})\). Then necessarily \(u\in\mathcal{U}(z)\) (otherwise \(J(z,u)=+\infty\)), so let \((x^{0},x,u)\) be the corresponding admissible triple from \((0,z)\). As above, let \(j\) be the integer \(\geq 1\) such that \(j-1<\mathbf{d}(z)\leq j\). By the properties of \(\bar{\beta}\), we have \(\bar{\beta}(j,0)>j\), so that \(\mathbf{d}(z)<\bar{\beta}(j,0)\).
In contradiction to claim (iii), suppose that there exists \(t>0\) such that \[\mathbf{d}(x(t))>\Psi(\mathbf{d}(z))\geq\bar{\beta}(j,0)+2.\] Then, there exist \(0<t_{1}<t_{2}<T_{z}(u)\) such that \[\mathbf{d}(x(t_{1}))=\bar{\beta}(j,0)+1,\qquad\mathbf{d}(x(t_{2}))=\bar{\beta}(j,0)+2,\] \[\bar{\beta}(j,0)+1\leq\mathbf{d}(x(t))\leq\bar{\beta}(j,0)+2\qquad\text{ for all }t\in[t_{1},t_{2}]. \tag{28}\] Therefore, in view of (27), (24), and the definition of \((\ell_{j})_{j}\), we have \[J(z,u)=\int_{0}^{T_{z}(u)}\left[\ell(\mathbf{d}(x(t)))+l(x(t),u(t))\right]dt\geq\int_{0}^{T_{z}(u)}\ell(\mathbf{d}(x(t)))dt\] \[>\int_{t_{1}}^{t_{2}}\ell_{j+1}(\mathbf{d}(x(t)))dt\geq(1+\rho_{j}(\bar{\beta}(j,0)+1))\int_{t_{1}}^{t_{2}}\ell_{1}(\mathbf{d}(x(t)))dt\] \[\geq(1+\rho_{j}(\bar{\beta}(j,0)+1))\varkappa_{j}\geq L_{j}+\Phi(j)\geq J_{j}(z,\hat{u})=J(z,\hat{u}).\] This provides the required contradiction and the proof of (iii) is complete. As in claim (iv), let us now suppose that \(u\in\mathcal{U}(z)\) is a control satisfying \(J(z,u)<\alpha\), for some \(\alpha>0\). Let \((x^{0},x,u)\) be the corresponding admissible triple from \((0,z)\), and let \(j\geq 1\) be the smallest integer such that \(L_{j}+\Phi(j)>\alpha\). We want to show that \[\mathbf{d}(z)<\Theta:=\bar{\beta}(j,0)+2.\] Indeed, assume instead that \(\mathbf{d}(z)\geq\bar{\beta}(j,0)+2\). If there exists a time \(t\geq 0\) such that \(\mathbf{d}(x(t))<c:=\bar{\beta}(j,0)+1\), then there are \(0<t_{1}<t_{2}<T_{z}(u)\) as in (28). Thus, arguing as in the proof of claim (iii) we can deduce the inequality \(J(z,u)>L_{j}+\Phi(j)>\alpha\), in contradiction with the hypothesis \(J(z,u)<\alpha\). If instead \(\mathbf{d}(x(t,u,z))\geq c\) for all \(t\leq T_{z}(u)\), then \(T_{z}(u)=+\infty\) and we get the contradiction \[J(z,u)\geq\int_{0}^{+\infty}\ell_{1}(\mathbf{d}(x(t)))dt\geq\int_{0}^{+\infty}\ell_{1}(c)dt=+\infty.\] Let us finally prove (v). Let \(\alpha>0\) and assume \(\mathbf{d}(z)>\alpha\). For any \(u\in\mathcal{M}([0,+\infty),U)\setminus\mathcal{U}(z)\), the cost \(J(z,u)=+\infty\), so (v) is trivially true. If \(u\in\mathcal{U}(z)\), by the last part of the proof of (iv) (for \(c:=\alpha/2\)) there exists a positive, finite time \(t_{2}:=\inf\{t\in[0,T_{z}(u))\mid\mathbf{d}(x(t,u,z))\leq\alpha/2\}\). Let \(0<t_{1}<t_{2}\) be such that \(\mathbf{d}(x(t_{1},u,z))=\alpha\) and \(\alpha/2\leq\mathbf{d}(x(t,u,z))\leq\alpha\) for all \(t\in[t_{1},t_{2}]\). Then, in view of (24), we have \[J(z,u)\geq\int_{t_{1}}^{t_{2}}\ell_{1}(\mathbf{d}(x(t,u,z)))dt\geq\frac{\alpha}{2}\varkappa(\alpha/2,\alpha)=:\delta>0.\]

Let \(J\) be as in Lemma 3.2. For any \(z\in\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\), we define the value function \[V(z):=\inf_{u\in\mathcal{M}([0,+\infty),U)}J(z,u)\quad\text{for all }z\in\mathbb{R}^{n}\setminus\mathcal{C},\qquad V(z):=0\quad\text{for all }z\in\partial\mathcal{C}.\] In the following two lemmas we show that \(V\) is a MRF for (8)-(9).

**Lemma 3.3**.: _The function \(V:\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\to[0,+\infty)\) has the following properties:_

1. _dom_ \(V:=\{z\in\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\mid V(z)<+\infty\}=\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\)_;_
2. \(V\) _is positive definite;_
3. \(V\) _is proper;_
4. \(V\) _is continuous._

Proof.: Property (i) holds, because, if \(z\in\partial\mathcal{C}\), then \(V(z)=0<+\infty\), while, if \(\mathbf{d}(z)>0\), by Lemma 3.2, (i) we have \(V(z)\leq J(z,\hat{u})<+\infty\).
In order to prove (ii), observe that given \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\), then \(\mathbf{d}(z)>\alpha\) for some \(\alpha>0\). Hence, by Lemma 3.2, (v) there exists \(\delta>0\), depending only on \(\alpha\), such that \(J(z,u)>\delta\) for all \(u\in\mathcal{M}([0,+\infty),U)\). As a consequence, \(V(z)\geq\delta>0\), that is \(V\) is positive outside the target. The function \(V\) satisfies (iii) whenever, for all \(\alpha>0\), the sublevel set \(E_{\alpha}:=\{z\in\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\mid V(z)<\alpha\}\) is bounded. If \(V(z)<\alpha\), then by definition there exists \(u\in\mathcal{U}(z)\) such that \(J(z,u)<\alpha\), as well. Hence, by Lemma 3.2, (iv) we deduce that \(\mathbf{d}(z)<\Theta\) for some \(\Theta>0\) (depending only on \(\alpha\)) and, consequently, the set \(E_{\alpha}\) is bounded, as \(E_{\alpha}\subset\overline{B_{\Theta}(\mathcal{C})\setminus\mathcal{C}}\). Let us finally prove (iv), i.e. the continuity of \(V\). Fix \(\varepsilon>0\) and let us first consider \(\bar{z}\in\partial\mathcal{C}\). By virtue of Lemma 3.2, (ii), for any \(z\) such that \(\mathbf{d}(z)<r_{1}\), we have \(V(z)\leq J(z,\hat{u})\leq\hat{\Phi}(\mathbf{d}(z)):=\bar{\beta}(\mathbf{d}(z),0)+\Phi(\mathbf{d}(z))\), where, in particular, \(\hat{\Phi}\) is continuous, strictly increasing and equal to \(0\) at \(0\). Hence, choosing \[0<\delta_{\varepsilon}<\frac{1}{2}\left(r_{1}\wedge\hat{\Phi}^{-1}(\varepsilon)\right), \tag{29}\] we get the continuity of \(V\) at \(\bar{z}\), as \[|V(z)-V(\bar{z})|=V(z)\leq J(z,\hat{u})\leq\hat{\Phi}(\mathbf{d}(z))<\varepsilon\quad\text{for all }z\in B_{2\delta_{\varepsilon}}(\bar{z}). \tag{30}\] Assume now \(\bar{z}\in\mathbb{R}^{n}\setminus\mathcal{C}\). Setting \(\tilde{\delta}_{\bar{z},\varepsilon}:=\delta_{\varepsilon}\wedge\frac{\mathbf{d}(\bar{z})}{4}\), we have \(B_{2\tilde{\delta}_{\bar{z},\varepsilon}}(\bar{z})\subset\mathbb{R}^{n}\setminus\mathcal{C}\). We claim that for any \(z\in B_{\tilde{\delta}_{\bar{z},\varepsilon}/2}(\{\bar{z}\})\) there exists an admissible triple \((x^{0},x,u)\) from \((0,z)\) such that \[\mathbf{d}(x(t))\leq M_{\bar{z}}:=\max\Big{\{}\Psi(\mathbf{d}(z))\mid z\in\overline{B_{r_{1}/4}(\{\bar{z}\})}\Big{\}}\quad\text{for all }t\geq 0,\] \[J(z,u)\leq V(z)+\varepsilon. \tag{31}\] Indeed, an \(\varepsilon\)-optimal triple \((x^{0},x,u)\) satisfying \(J(z,u)\leq V(z)+\varepsilon\) always exists, as \(V(z)<+\infty\) by (i) above. Furthermore, we can clearly assume \(J(z,u)\leq J(z,\hat{u})\), where \((\hat{x}^{0},\hat{x},\hat{u})\) is the admissible triple from \((0,z)\) built in Lemma 3.1. But then the first of the inequalities above follows from Lemma 3.2, (iii). Set \[\tilde{T}_{z}:=\inf\{t\geq 0\mid\mathbf{d}(x(t))<\tilde{\delta}_{\bar{z},\varepsilon}\}. \tag{32}\] Note that \(\tilde{T}_{z}>0\), since \(\mathbf{d}(z)>\mathbf{d}(\bar{z})-\frac{\tilde{\delta}_{\bar{z},\varepsilon}}{2}>\frac{3}{2}\tilde{\delta}_{\bar{z},\varepsilon}\). Let \(j=j_{\bar{z},\varepsilon}\) be an integer \(\geq 1\) such that \(j\geq\mathbf{d}(\bar{z})+\frac{\tilde{\delta}_{\bar{z},\varepsilon}}{2}>\mathbf{d}(z)\). Then, using (27) (which is valid for every \(j\geq\mathbf{d}(z)\), in view of (26)) and setting \(m_{\bar{z},\varepsilon}:=\min_{R\in[\tilde{\delta}_{\bar{z},\varepsilon},M_{\bar{z}}]}\ell_{1}(R)>0\), we get \[L_{j}+\Phi(j)\geq J(z,\hat{u})\geq J(z,u)\geq\int_{0}^{\tilde{T}_{z}}\ell_{1}(\mathbf{d}(x(t)))dt\geq\tilde{T}_{z}m_{\bar{z},\varepsilon}.\] Thus, there exists a uniform upper bound for the times \(\tilde{T}_{z}\).
Precisely, we have \[\tilde{T}_{z}\leq T_{\bar{z},\varepsilon}:=\frac{L_{j_{\bar{z},\varepsilon}}+\Phi(j_{\bar{z},\varepsilon})}{m_{\bar{z},\varepsilon}}\qquad\text{for all }z\in B_{\tilde{\delta}_{\bar{z},\varepsilon}/2}(\{\bar{z}\}).\] To prove the continuity of \(V\) at \(\bar{z}\), consider arbitrary points \(z_{1},z_{2}\in B_{\tilde{\delta}_{\bar{z},\varepsilon}/2}(\{\bar{z}\})\) and suppose, for instance, \(V(z_{1})\leq V(z_{2})\). Let \((x_{1}^{0},x_{1},u_{1})\) be an admissible triple from \((0,z_{1})\) satisfying (31) (for \(z=z_{1}\)). In particular, this implies that \(x_{1}(t)\) lies in a compact set \(\mathcal{K}\) depending only on \(\bar{z}\) for all \(t\geq 0\). Hence, if \(L_{\bar{z}}\) denotes the Lipschitz constant in \(x\) of the dynamics function \(f\) on the compact set \(\overline{B_{1}(\mathcal{K})}\), by a standard cut-off technique we can derive that the trajectory \(x(\cdot,u_{1},z_{2})\) is defined for all \(t\in[0,T_{\bar{z},\varepsilon}]\) and satisfies \[\sup_{t\in[0,T_{\bar{z},\varepsilon}]}|x_{1}(t)-x(t,u_{1},z_{2})|\leq|z_{1}-z_{2}|\,e^{L_{\bar{z}}\,T_{\bar{z},\varepsilon}},\] as soon as \(|z_{1}-z_{2}|<e^{-L_{\bar{z}}\,T_{\bar{z},\varepsilon}}\). Actually, from this inequality it also follows that, setting \[\bar{\delta}_{\bar{z},\varepsilon}:=\tilde{\delta}_{\bar{z},\varepsilon}\wedge\left(\frac{\delta_{\varepsilon}}{2}\wedge 1\right)e^{-L_{\bar{z}}\,T_{\bar{z},\varepsilon}}\] (\(\delta_{\varepsilon}\) as in (29)), and assuming \(z_{1}\), \(z_{2}\in B_{\bar{\delta}_{\bar{z},\varepsilon}/2}(\{\bar{z}\})\), we have that (\(|z_{1}-z_{2}|<\bar{\delta}_{\bar{z},\varepsilon}\) and) \(|x(\tilde{T}_{z_{1}},u_{1},z_{2})-x_{1}(\tilde{T}_{z_{1}})|\leq\frac{\delta_{\varepsilon}}{2}\) (\(\tilde{T}_{z_{1}}\) is as in (32), for \(x=x_{1}\)). Since \(\mathbf{d}(x_{1}(\tilde{T}_{z_{1}}))\leq\delta_{\varepsilon}\), this implies that \(\bar{z}_{2}:=x(\tilde{T}_{z_{1}},u_{1},z_{2})\) satisfies \(\mathbf{d}(\bar{z}_{2})<2\delta_{\varepsilon}\), i.e. \(\bar{z}_{2}\in B_{2\delta_{\varepsilon}}(\mathcal{C})\). At this point, if \(\hat{u}_{2}\in\mathcal{U}(\bar{z}_{2})\) denotes an admissible control from \((0,\bar{z}_{2})\) as in Lemma 3.2, the last part of (30) implies that \(J(\bar{z}_{2},\hat{u}_{2})<\varepsilon\). Therefore, the control \(u_{2}\) given by \[u_{2}(t):=\begin{cases}u_{1}(t)&t\in[0,\tilde{T}_{z_{1}}]\\ \hat{u}_{2}(t-\tilde{T}_{z_{1}})&t\in(\tilde{T}_{z_{1}},+\infty)\end{cases}\] belongs to \(\mathcal{U}(z_{2})\), \(x_{2}(t):=x(t,u_{2},z_{2})\) belongs to \(\overline{B_{1}(\mathcal{K})}\) for all \(t\in[0,\tilde{T}_{z_{1}}]\), and denoting with \(\omega_{\bar{z}}\) the modulus of continuity of \(\overline{B_{1}(\mathcal{K})}\ni x\mapsto\ell(\mathbf{d}(x))+l(x,u)\) (uniform w.r.t. the control, because of the assumptions on \(l\)), we finally obtain \[0\leq V(z_{2})-V(z_{1})\leq J(z_{2},u_{2})-J(z_{1},u_{1})+\varepsilon\] \[\leq\int_{0}^{\tilde{T}_{z_{1}}}\left[|\ell(\mathbf{d}(x_{2}(t)))-\ell(\mathbf{d}(x_{1}(t)))|+|l(x_{2}(t),u_{1}(t))-l(x_{1}(t),u_{1}(t))|\right]dt\] \[\qquad+J(\bar{z}_{2},\hat{u}_{2})+\varepsilon\leq T_{\bar{z},\varepsilon}\,\omega_{\bar{z}}(\bar{\delta}_{\bar{z},\varepsilon})+2\varepsilon\leq 3\varepsilon,\] where the last inequality holds by replacing \(\bar{\delta}_{\bar{z},\varepsilon}\) with \(\bar{\delta}_{\bar{z},\varepsilon}\wedge\omega_{\bar{z}}^{-1}\left(\frac{\varepsilon}{T_{\bar{z},\varepsilon}}\right)\). The continuity of \(V\) at \(\bar{z}\) hence follows by the arbitrariness of \(\varepsilon\).

**Lemma 3.4**.: _The value function \(V\) satisfies the decrease condition (13)._

Proof.: We divide the proof in three steps.
_Step 1._ Let us first show that, if \(V\) is a viscosity supersolution of the following Hamilton-Jacobi-Bellman equation \[\max_{u\in U}\{-\langle DV(z),f(z,u)\rangle-l(z,u)\}=\ell(\mathbf{d}(z))\qquad\text{for all }z\in\mathbb{R}^{n}\setminus\mathcal{C}, \tag{33}\] then it satisfies the decrease condition (13), characterizing MRFs. Indeed, (33) implies that (see e.g. the survey paper [3]) \[\max_{u\in U}\{-\langle p,f(z,u)\rangle-l(z,u)\}\geq\ell(\mathbf{d}(z))\qquad\text{for all }z\in\mathbb{R}^{n}\setminus\mathcal{C},\quad\text{for all }p\in\partial_{P}V(z),\] so that, for \(H\) defined as in (12) and \(p_{0}\equiv 1\), one has \[H(z,1,\partial_{P}V(z))\leq-\ell(\mathbf{d}(z))\qquad\text{for all }z\in\mathbb{R}^{n}\setminus\mathcal{C}.\] Setting \(\gamma:=\ell\circ d_{V^{+}}^{-1}\) and recalling that \(\ell\) is increasing as well as \(d_{V^{+}}\), by (7) we finally obtain that \(V\) satisfies condition (13) for \(p_{0}\equiv 1\) and such a \(\gamma\), namely \[H(z,1,\partial_{P}V(z))\leq-\gamma(V(z))\quad\text{ for all }z\in\mathbb{R}^{n}\setminus\mathcal{C}.\] Thus, the next two steps will be devoted to prove that \(V\) is a viscosity supersolution of (33). This proof is not completely standard, because we do not have the usual growth hypotheses on \(f\), \(\ell\) and \(l\) (see e.g. [1, Ch. III]). Actually, these assumptions can be avoided here thanks to the results in Lemma 3.2.

_Step 2._ Let us show that, for every \(T>0\) and every \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\), one has \[V(z)\geq\inf_{u\in\hat{\mathcal{U}}(z)}\left\{\int_{0}^{T_{z}(u)\wedge T}[\ell(\mathbf{d}(x(t)))+l(x(t),u(t))]dt+V(x(T_{z}(u)\wedge T))\right\}, \tag{34}\] where \(x:=x(\cdot\,,u,z)\), \(\hat{\mathcal{U}}(z):=\{u\in\mathcal{U}(z)\ |\ \ J(z,u)\leq J(z,\hat{u})\}\), and \(\hat{u}\) is as in Lemma 3.2. In view of Lemma 3.2,(i), the set \(\hat{\mathcal{U}}(z)\neq\emptyset\) and \[V(z)=\inf_{u\in\hat{\mathcal{U}}(z)}\int_{0}^{T_{z}(u)}[\ell(\mathbf{d}(x(t)))+l(x(t),u(t))]dt<+\infty.\] Let us refer to the right-hand side of (34) as \(v_{T}(z)\). Given \(u\in\hat{\mathcal{U}}(z)\), if \(T_{z}(u)\leq T\) we have \(V(x(T_{z}(u)\wedge T))=0\) and \(J(z,u)\geq v_{T}(z)\). If instead \(T_{z}(u)>T\), in view of the definition of \(v_{T}\), we get \[J(z,u)=\int_{0}^{T}[\ell(\mathbf{d}(x(t)))+l(x(t),u(t))]dt+\int_{T}^{T_{z}(u)}[\ell(\mathbf{d}(x(t)))+l(x(t),u(t))]dt\] \[=\int_{0}^{T}[\ell(\mathbf{d}(x(t)))+l(x(t),u(t))]dt+\int_{0}^{T_{x_{T}(0)}(u_{T})}[\ell(\mathbf{d}(x_{T}(t)))+l(x_{T}(t),u_{T}(t))]dt\] \[\geq\int_{0}^{T}[\ell(\mathbf{d}(x(t)))+l(x(t),u(t))]dt+V(x(T))\geq v_{T}(z),\] where \(x_{T}(t):=x(t+T)\), \(u_{T}(t):=u(t+T)\), and \(T_{x_{T}(0)}(u_{T})=T_{z}(u)-T\). Therefore, \(V(z)=\inf_{u\in\hat{\mathcal{U}}(z)}J(z,u)\geq v_{T}(z)\), that is, the relation (34) is proven.

_Step 3._ Let us now deduce from (34) that \(V\) is a viscosity supersolution of (33). To this aim, having fixed \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\), we preliminarily observe that, in view of Lemma 3.2,(iii), every admissible trajectory-control pair \((x,u)\) with \(u\in\hat{\mathcal{U}}(z)\), satisfies \[\mathbf{d}(x(t))\leq\Psi(\mathbf{d}(z))=:R_{z}\qquad\text{for all }t\geq 0.\] Furthermore, on the compact set \(\overline{B_{R_{z}}(\mathcal{C})\setminus\mathcal{C}}\), depending only on \(z\), the functions \(f\) and \(l\) are Lipschitz continuous in \(x\), uniformly w.r.t.
the control, and we can fix a modulus of continuity for \(\ell\) and a bound \(M_{z}>0\) for \(|f|\), \(\ell\), and \(l\). Hence, choosing e.g. \(\bar{T}_{z}=\frac{\mathbf{d}(z)}{2M_{z}}\), for any \(u\in\hat{\mathcal{U}}(z)\) the trajectory \(x(\cdot\,,u,z)\) is defined on \([0,\bar{T}_{z}]\) and satisfies \(0<\mathbf{d}(x(t,u,z))\leq R_{z}\) for all \(t\in[0,\bar{T}_{z}]\). For arbitrary \(\varepsilon>0\) and \(0<T<\bar{T}_{z}\), (34) implies that there exists some \(\bar{u}=\bar{u}_{\varepsilon,T}\in\hat{\mathcal{U}}(z)\), such that \[\int_{0}^{T}[\ell(\mathbf{d}(\bar{x}(t)))+l(\bar{x}(t),\bar{u}(t))]dt+V(\bar{x}(T))\leq V(z)+\varepsilon T,\] where \(\bar{x}:=x(\cdot,\,\bar{u},z)\). In view of the above considerations, given a test function \(\varphi\in C^{1}(\mathbb{R}^{n})\) such that \(\varphi(z)=V(z)\) and \(\varphi(\tilde{z})\leq V(\tilde{z})\) for all \(\tilde{z}\in B(z,r)\), for some \(r>0\), the proof that \(V\) is a viscosity supersolution of (33) now proceeds as usual, hence we omit it (see e.g. [1, Prop. III, 2.8]).

Surveying the results on \(V\) in Lemmas 3.3 and 3.4, we see that the proof of implication (i)\(\implies\)(ii) is concluded.

## 4. Proof of implication (ii)\(\implies\)(i)

The proof that the existence of a MRF implies GAC to \(\mathcal{C}\) with regulated cost relies on the following result, establishing a super-optimality principle satisfied by any MRF.

**Proposition 4.1**.: _Let \(W:\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\to[0,+\infty)\) be a continuous MRF for (8)-(9) for some continuous and increasing function \(p_{0}:(0,+\infty)\to[0,1]\) and some continuous and strictly increasing function \(\gamma:(0,+\infty)\to(0,+\infty)\). Then, for any \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\), we have_ \[W(z)\geq\inf_{u\in\mathcal{U}(z)}\sup_{0\leq T<T_{z}(u)}\left\{\begin{array}{c}\int_{0}^{T}[p_{0}(W(x(t)))l(x(t),u(t))+\gamma(W(x(t)))]\,dt\\ +W(x(T))\end{array}\right\}, \tag{35}\] _where \(x(\cdot):=x(\cdot\,,u,z)\)._

The proof of this proposition relies on the positive definiteness and properness of \(W\), coupled with an extension of results previously known only under assumptions more restrictive than ours (see e.g. [1, Thm. 2.40] and [10, Thm. 3.3]); it will be given at the end of the section. Let \(W:\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\to[0,+\infty)\) be a continuous MRF for (8)-(9) for some functions \(p_{0}:(0,+\infty)\to[0,1]\) and \(\gamma:(0,+\infty)\to(0,+\infty)\) as in Def. 2.3. Moreover, assume that \(p_{0}\) satisfies the integrability condition (IC). Fix \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\). From (35) it follows that, for \(\varepsilon_{1}:=\frac{W(z)}{2}>0\) there exists some admissible control \(u_{1}\in\mathcal{U}(z)\) such that, for any \(0\leq T<T_{z}(u_{1})\), we have \[\int_{0}^{T}[p_{0}(W(x_{1}(t)))l(x_{1}(t),u_{1}(t))+\gamma(W(x_{1}(t)))]dt+W(x_{1}(T))\leq W(z)+\frac{W(z)}{2}, \tag{36}\] where \(x_{1}(\cdot):=x(\cdot\,,u_{1},z)\). We set \[t_{1}:=\inf\left\{t\in[0,T_{z}(u_{1}))\ |\ \ W(x_{1}(t))\leq\frac{W(z)}{2}\right\},\qquad z_{1}:=x_{1}(t_{1}).\] Clearly, \(t_{1}>0\) and the infimum is actually attained. Hence, by (36) and the definition of \(t_{1}\), we have \[\left\{\begin{array}{l}\frac{1}{2}\,W(z)\leq W(x_{1}(t))\leq\frac{3}{2}\,W(z)\qquad\mbox{for all }t\in[0,t_{1}],\\ W(z_{1})=W(x_{1}(t_{1}))=\frac{1}{2}\,W(z),\end{array}\right.
\tag{37}\] so that \[p_{0}(W(z_{1}))\,\int_{0}^{t_{1}}l(x_{1}(t),u_{1}(t))\,dt\leq\int_{0}^{t_{1}}p_{0}(W(x_{1}(t)))\,l(x_{1}(t),u_{1}(t))\,dt\] \[\leq W(z)+\frac{W(z)}{2}-W(z_{1})=W(z)=2[W(z)-W(z_{1})],\] which yields the cost bound \[\int_{0}^{t_{1}}l(x_{1}(t),u_{1}(t))\,dt\leq 2\frac{W(z)-W(z_{1})}{p_{0}(W(z_{1}))}. \tag{38}\] From (36), using the functions \(d_{W^{-}}\), \(d_{W^{+}}\) and the inequality (7), we also obtain \[\gamma\left(\frac{1}{2}\,d_{W^{-}}(\mathbf{d}(z))\right)\,t_{1}\leq\gamma(W(z_{1}))\,t_{1}\leq\int_{0}^{t_{1}}\gamma(W(x_{1}(t)))\,dt\leq\frac{3}{2}\,W(z)\leq\frac{3}{2}\,d_{W^{+}}(\mathbf{d}(z)).\] Define now the continuous function \(\bar{T}_{1}:(0,+\infty)\to(0,+\infty)\), given by \[\bar{T}_{1}(R):=\frac{3d_{W^{+}}(R)}{2\gamma\left(\frac{1}{2}\,d_{W^{-}}(R)\right)}\qquad\mbox{for all }R>0.\] Hence, the latter inequality yields the following uniform time bound \[t_{1}\leq\bar{T}_{1}(\mathbf{d}(z)). \tag{39}\] Starting from \(z_{1}\) and choosing \(\varepsilon_{2}:=\frac{W(z_{1})}{2}=\frac{W(z)}{4}\), arguing as above we can deduce from (35) the existence of a control \(u_{2}\in\mathcal{U}(z_{1})\) and a time \(t_{2}>0\), such that, denoting by \(x_{2}\) the trajectory \(x(\cdot\,,u_{2},z_{1})\) and setting \(z_{2}:=x_{2}(t_{2})\), we get relations (37)-(39) with \(z\), \(z_{1}\), \(u_{1}\), \(x_{1}\), and \(t_{1}\) replaced by \(z_{1}\), \(z_{2}\), \(u_{2}\), \(x_{2}\), and \(t_{2}\), respectively. Set \(z_{0}:=z\). In a recursive way, for any integer \(N\geq 1\), we can thus choose \(\varepsilon_{N}:=\frac{W(z)}{2^{N}}\) and construct \(z_{N}\), \(u_{N}\), \(x_{N}\), and \(t_{N}>0\), such that \(u_{N}\in\mathcal{U}(z_{N-1})\), and \(x_{N}(\cdot):=x(\cdot\,,u_{N},z_{N-1})\), \(z_{N}:=x_{N}(t_{N})\), satisfy \[\left\{\begin{array}{l}\frac{1}{2}\,W(z_{N-1})\leq W(x_{N}(t))\leq\frac{3}{2}\,W(z_{N-1})\qquad\mbox{for all }t\in[0,t_{N}],\\ W(z_{N})=\frac{1}{2}\,W(z_{N-1})=\frac{1}{2^{N}}\,W(z),\end{array}\right. \tag{40}\] \[\int_{0}^{t_{N}}l(x_{N}(t),u_{N}(t))\,dt\leq 2\,\frac{W(z_{N-1})-W(z_{N})}{p_{0}(W(z_{N}))}=4\,\frac{W(z_{N})-W(z_{N+1})}{p_{0}(W(z_{N}))}, \tag{41}\] and \[t_{N}\leq\bar{T}_{N}(\mathbf{d}(z)), \tag{42}\] where \[\bar{T}_{N}(R):=\frac{3d_{W^{+}}(R)}{2^{N}\gamma\left(\frac{1}{2^{N}}\,d_{W^{-}}(R)\right)}\qquad\mbox{for all }R>0.\] Set now \(T_{0}:=0\), \(T_{N}:=\sum_{j=1}^{N}t_{j}\), and \(T_{\infty}:=\sum_{j=1}^{+\infty}t_{j}\), and define the control \(u\in\mathcal{M}([0,+\infty),U)\) by setting, for every integer \(N\geq 1\), \[u(t):=u_{N}(t-T_{N-1})\qquad\text{for all }t\in[T_{N-1},\ T_{N}),\] \[u(t):=w\qquad\text{for all }t\geq T_{\infty},\quad\text{if }T_{\infty}<+\infty\] (for \(w\in U\) arbitrary). Recalling that \(W\) is proper and positive definite, from (40) it is easy to deduce that \(\lim_{t\to T_{\infty}^{-}}\mathbf{d}(x(t,u,z))=0\), so \(u\in\mathcal{U}(z)\). Let \((x^{0},x,u)\) be the corresponding admissible triple from \((0,z)\) (defined on \([0,+\infty)\)). In view of (41) and recalling that \(1/p_{0}\) is decreasing, the cost \(x^{0}\) satisfies \[x^{0}(t)\leq\int_{0}^{T_{\infty}}l(x(s),u(s))\,ds\leq\sum_{N=1}^{+\infty}4\,\frac{W(z_{N})-W(z_{N+1})}{p_{0}(W(z_{N}))}\leq 4\,\int_{0}^{W(z)/2}\frac{dv}{p_{0}(v)}=\bar{W}(z)\qquad\text{for every }t\geq 0, \tag{43}\] as soon as we set \(\bar{W}(z):=4P(W(z)/2)\) for all \(z\in\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\) (\(P\) as in (14)). Notice that this function \(\bar{W}\) is continuous, proper and positive definite, by the integrability assumption (IC).
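For a concrete illustration of the bound (43) (this computation is given only as an illustration and is not needed in the sequel), take \(p_{0}(v):=\min\{\sqrt{v},1\}\), which satisfies (IC). Then \(P(v)=2\sqrt{v}\) for \(v\leq 1\), so that \[\bar{W}(z)=4P(W(z)/2)=4\sqrt{2\,W(z)}\qquad\text{whenever }W(z)\leq 2,\] whereas a constant \(p_{0}\equiv\bar{p}_{0}>0\) gives the linear bound \(\bar{W}(z)=2W(z)/\bar{p}_{0}\). A \(p_{0}\) vanishing at the origin thus weakens the cost bound near the target, but still yields GAC with regulated cost.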
So, the triple \((x^{0},x,u)\) satisfies the cost bound condition (11) with regulation function \(\bar{W}\). To conclude the proof that control system (8) with cost (9) is GAC to \(\mathcal{C}\) with regulated cost, it remains only to prove the existence of a descent rate \(\beta\), such that \[\mathbf{d}(x(t))\leq\beta(\mathbf{d}(z),t)\qquad\text{for every }t\geq 0. \tag{44}\] First of all, we claim that there exist a strictly increasing, unbounded, continuous function \(\Gamma:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) with \(\Gamma(0)=0\), and a function \(\mathbf{T}:\mathbb{R}_{>0}^{2}\to\mathbb{R}_{>0}\), such that, for any \(0<r<R\), for every \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\) with \(\mathbf{d}(z)\leq R\), the trajectory \(x\) from \(z\) considered above satisfies the following conditions: \[\begin{split}\text{(a)}\quad\mathbf{d}(x(t))&\leq \boldsymbol{\Gamma}(R)\qquad\quad\text{for all }t\geq 0,\\ \text{(b)}\quad\mathbf{d}(x(t))&\leq r\qquad\quad \quad\quad\text{for all }t\geq\mathbf{T}(R,r).\end{split}\] Condition (a) follows from (40), because by (7) we have \[\mathbf{d}(x(t))\leq d_{W^{-}}^{-1}(W(x(t)))\leq d_{W^{-}}^{-1}\left(\frac{3} {2}\,d_{W^{+}}(\mathbf{d}(z))\right)\leq\Gamma(R)\quad\text{for all }t\geq 0,\] as soon as we choose \(\Gamma(R):=d_{W^{-}}^{-1}\left(\frac{3}{2}\,d_{W^{+}}(R)\right)\), \(R\geq 0\). This \(\Gamma\) has all the required properties in view of the properties of \(d_{W^{-}}\) and \(d_{W^{+}}\). In order to derive (b), we observe that (40) implies \[\mathbf{d}(x(t))\leq d_{W^{-}}^{-1}(W(x(t)))\leq d_{W^{-}}^{-1}\left(\frac{3} {2^{N}}\,d_{W^{+}}(R)\right)\quad\text{for all }t\geq T_{N},\] so, if \(N(R,r)\) is the smallest integer \(\geq\log_{2}\left(3\frac{d_{W^{+}}(R)}{d_{W^{-}}(r)}\right)\), we get \[\mathbf{d}(x(t))\leq r\qquad\text{ for all }t\geq T_{N(R,r)}.\] The time \(T_{N(R,r)}\) depends on \(z\), but, by (42), the value \[\mathbf{T}(R,r):=\sum_{j=1}^{N(R,r)}\bar{T}_{j}(R)\] is a uniform upper bound for \(T_{N(R,r)}\). Hence, also condition (b) is valid. Now, arguing as in [25, Sec. 5], for any \(R>0\) let us introduce a strictly increasing, diverging sequence of positive times \((t_{j})_{j\geq 1}\) depending on \(R\), such that \[t_{j}\geq\mathbf{T}\left(R,\frac{R}{j+1}\right)\] and define the function \(b:[0,+\infty)\times[0,+\infty)\to[0,+\infty)\), given by \[b(R,t):=\begin{cases}\Gamma(R)&\quad\text{for all $t\in[0,t_{1})$},\\ \frac{R}{j+1}&\quad\text{for all $t\in[t_{j},t_{j+1})$},\ \ j\geq 1.\end{cases}\] From (a) and (b) it follows that \[\mathbf{d}(x(t))\leq b(\mathbf{d}(z),t)\qquad\text{for all $t\geq 0$}.\] As already noticed in the proof of Lemma 3.1, Step 3, it is actually a routine exercise to find a \(\mathcal{KL}\) function \(\beta\geq b\). Therefore, the proof of implication (ii)\(\implies\)(i) is thus complete. ### A Comparison Principle and the proof of Proposition 4.1 In the proof of Proposition 4.1, we will use the slightly modified version below of the classical Comparison Principle for the infinite horizon problem. In particular, in the known results it is basically necessary to assume the unilateral Lipschitz continuity hypothesis (i) below7 (see e.g. the comments after [1, III. Thm. 2.12]). In the following lemma we show that, when the state has two components and the dynamics \(F\) depends on only one of them, \(x\)-Lipschitz continuity of the \(F\)-component that corresponds to the missing variable is not necessary. 
Footnote 7: An alternative hypothesis to (i) is local Lipschitz continuity and at most linear growth in the state, uniformly w.r.t. the control.

**Lemma 4.1** (Comparison Principle).: _Let \(U\subset\mathbb{R}^{m}\), \(A\subset\mathbb{R}^{m^{\prime}}\) be compact control sets (not both empty), let \(L:\mathbb{R}^{n}\times\mathbb{R}^{n^{\prime}}\times U\times A\to\mathbb{R}\) be a continuous running cost, and let \(F_{1}:\mathbb{R}^{n}\times U\times A\to\mathbb{R}^{n}\), \(F_{2}:\mathbb{R}^{n}\times U\times A\to\mathbb{R}^{n^{\prime}}\) be continuous dynamics components, such that_

1. _for some_ \(C>0\)_,_ \(\langle F_{1}(x,u,a)-F_{1}(x^{\prime},u,a),x-x^{\prime}\rangle\leq C|x-x^{\prime}|^{2}\) _for all_ \(x\)_,_ \(x^{\prime}\in\mathbb{R}^{n}\) _and_ \(u\in U\)_,_ \(a\in A\)_;_
2. _for some_ \(\bar{K}>0\)_,_ \(|F_{2}(x,u,a)|\leq\bar{K}(1+|x|)\) _for all_ \((x,u,a)\in\mathbb{R}^{n}\times U\times A\) _and_ \(x\mapsto F_{2}(x,u,a)\) _is uniformly continuous, uniformly w.r.t. the controls;_
3. \(L\) _is bounded and_ \((x,z)\mapsto L(x,z,u,a)\) _is uniformly continuous, uniformly w.r.t. the controls._

_Let \(H:\mathbb{R}^{n}\times\mathbb{R}^{n^{\prime}}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{\prime}}\to\mathbb{R}\) be the Hamiltonian defined as_ \[H(x,z,p_{1},p_{2}):=\min_{a\in A}\max_{u\in U}\Big{\{}-\langle(p_{1},p_{2}),(F_{1}(x,u,a),F_{2}(x,u,a))\rangle-L(x,z,u,a)\Big{\}}.\] _Given \(\sigma>0\), if \(v_{1}\), \(v_{2}:\mathbb{R}^{n}\times\mathbb{R}^{n^{\prime}}\to\mathbb{R}\), bounded and continuous, are, respectively, a viscosity sub- and supersolution of_ \[\sigma\,v(x,z)+H(x,z,Dv(x,z))=0\qquad\text{for all }(x,z)\in\mathbb{R}^{n}\times\mathbb{R}^{n^{\prime}},\] _then \(v_{1}(x,z)\leq v_{2}(x,z)\) for all \((x,z)\in\mathbb{R}^{n}\times\mathbb{R}^{n^{\prime}}\)._

Proof.: The proof is a careful adaptation of the proof of [1, III. Thm. 2.12], which we give in detail for the sake of clarity and self-consistency. If we divide the equation by \(\sigma>0\), the functions \(F_{1}/\sigma\), \(F_{2}/\sigma\) and \(L/\sigma\) satisfy the same structural hypotheses as above. Hence, we can assume \(\sigma=1\) without loss of generality. In view of [1, III. Rem. 2.13] we can also assume \(v_{1}\) and \(v_{2}\) uniformly continuous. Set \(\langle x\rangle:=(1+|x|^{2})^{\frac{1}{2}}\). Take the map \(\Phi:\mathbb{R}^{n}\times\mathbb{R}^{n^{\prime}}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{\prime}}\to\mathbb{R}\), given by \[\Phi(x,z,y,w):=v_{1}(x,z)-v_{2}(y,w)-\frac{|x-y|^{2}}{2\varepsilon}-\frac{|z-w|^{2}}{2\rho}-\beta(\langle x\rangle^{N}+\langle z\rangle^{N}+\langle y\rangle^{N}+\langle w\rangle^{N})\] where \(\varepsilon\), \(\rho\), \(\beta\), \(N\) are positive parameters to be chosen conveniently. Suppose by contradiction that there exist \(\delta>0\) and \((\tilde{x},\tilde{z})\in\mathbb{R}^{n}\times\mathbb{R}^{n^{\prime}}\) such that \(v_{1}(\tilde{x},\tilde{z})-v_{2}(\tilde{x},\tilde{z})=\delta\).
Choose \(\beta>0\) such that \(\beta\langle\tilde{x}\rangle\leq\delta/8\) and \(\beta\langle\tilde{z}\rangle\leq\delta/8\), so that for all \(0<N\leq 1\) we have \(2\beta(\langle\tilde{x}\rangle^{N}+\langle\tilde{z}\rangle^{N})\leq 2\left( \frac{\delta}{8}+\frac{\delta}{8}\right)=\frac{\delta}{2}\), and \[\frac{\delta}{2}\leq\delta-\frac{\delta}{2}\leq v_{1}(\tilde{x},\tilde{z})-v_ {2}(\tilde{x},\tilde{z})-2\beta(\langle\tilde{x}\rangle^{N}+\langle\tilde{z} \rangle^{N})=\Phi(\tilde{x},\tilde{z},\tilde{x},\tilde{z})\leq\sup\Phi.\] Since \(\Phi\) is continuous and tends to \(-\infty\) as \(|x|+|z|+|y|+|w|\to+\infty\), there exists a maximum point \((\bar{x},\bar{z},\bar{y},\bar{w})\), for which, in particular, we get \[0<\frac{\delta}{2}\leq\Phi(\bar{x},\bar{z},\bar{y},\bar{w})=\sup\Phi. \tag{45}\] The obvious inequality \(\Phi(\bar{x},\bar{z},\bar{x},\bar{z})+\Phi(\bar{y},\bar{w},\bar{y},\bar{w}) \leq 2\Phi(\bar{x},\bar{z},\bar{y},\bar{w})\), yields \[\frac{|\bar{x}-\bar{y}|^{2}}{\varepsilon}+\frac{|\bar{z}-\bar{w}|^{2}}{\rho} \leq v_{1}(\bar{x},\bar{z})-v_{1}(\bar{y},\bar{w})+v_{2}(\bar{x},\bar{z})-v_{ 2}(\bar{y},\bar{w}),\] so the boundedness of \(v_{1}\) and \(v_{2}\) implies that \[|\bar{x}-\bar{y}|\leq c\sqrt{\varepsilon},\qquad|\bar{z}-\bar{w}|\leq c\sqrt {\rho}, \tag{46}\] for some \(c>0\). From the inequality \(\Phi(\bar{x},\bar{z},\bar{x},\bar{w})+\Phi(\bar{y},\bar{z},\bar{y},\bar{w}) \leq 2\Phi(\bar{x},\bar{z},\bar{y},\bar{w})\) and thanks to the uniform continuity of \(v_{1}\), \(v_{2}\), we can thus derive that \[\frac{|\bar{x}-\bar{y}|^{2}}{\varepsilon}\leq v_{1}(\bar{x},\bar{z})-v_{1}( \bar{y},\bar{z})+v_{2}(\bar{x},\bar{w})-v_{2}(\bar{y},\bar{w})\leq\omega(| \bar{x}-\bar{y}|)\leq\omega(c\sqrt{\varepsilon}), \tag{47}\] for some modulus of continuity \(\omega\). Consider now the \(C^{1}\) test functions \[\varphi(x,z):=v_{2}(\bar{y},\bar{w})+\frac{|x-\bar{y}|^{2}}{2 \varepsilon}+\frac{|z-\bar{w}|^{2}}{2\rho}+\beta(\langle x\rangle^{N}+\langle z \rangle^{N}+\langle\bar{y}\rangle^{N}+\langle\bar{w}\rangle^{N}),\] \[\psi(y,w):=v_{1}(\bar{x},\bar{z})-\frac{|\bar{x}-y|^{2}}{2 \varepsilon}-\frac{|\bar{z}-w|^{2}}{2\rho}-\beta(\langle\bar{x}\rangle^{N}+ \langle\bar{z}\rangle^{N}+\langle y\rangle^{N}+\langle w\rangle^{N}).\] By definition of \((\bar{x},\bar{z},\bar{y},\bar{w})\), the function \(v_{1}(x,z)-\varphi(x,z)=\Phi(x,z,\bar{y},\bar{w})\) obtains its maximum at \((\bar{x},\bar{z})\), while \(v_{2}(y,w)-\psi(y,w)=-\Phi(\bar{x},\bar{z},y,w)\) obtains its minimum at \((\bar{y},\bar{w})\). As it is easy to see, we have \[D\varphi(\bar{x},\bar{z})=(D_{x}\varphi,D_{z}\varphi)(\bar{x}, \bar{z})=\left(\frac{\bar{x}-\bar{y}}{\varepsilon}+\gamma_{1}\bar{x},\frac{ \bar{z}-\bar{w}}{\rho}+\gamma_{2}\bar{z}\right),\] \[D\psi(\bar{y},\bar{w})=(D_{x}\psi,D_{z}\psi)(\bar{y},\bar{w})= \left(\frac{\bar{x}-\bar{y}}{\varepsilon}+\tau_{1}\bar{y},\frac{\bar{z}-\bar{w }}{\rho}+\tau_{2}\bar{w}\right),\] if \(\gamma_{1}:=\beta N\langle\bar{x}\rangle^{N-2}\), \(\gamma_{2}:=\beta N\langle\bar{z}\rangle^{N-2}\), \(\tau_{1}:=\beta N\langle\bar{y}\rangle^{N-2}\), \(\tau_{2}:=\beta N\langle\bar{w}\rangle^{N-2}\). 
The definition of viscosity sub- and supersolution yields \[v_{1}(\bar{x},\bar{z})+H\left(\bar{x},\bar{z},D\varphi(\bar{x},\bar{z})\right) \leq 0\leq v_{2}(\bar{y},\bar{w})+H\left(\bar{y},\bar{w},D\psi(\bar{y},\bar{w })\right),\] so that, for some \(u\in U\) and \(a\in A\), we have \[v_{1}(\bar{x},\bar{z})-v_{2}(\bar{y},\bar{w})\] \[\quad\leq H\left(\bar{y},\bar{w},\frac{\bar{x}-\bar{y}}{\varepsilon }+\tau_{1}\bar{y},\frac{\bar{z}-\bar{w}}{\rho}+\tau_{2}\bar{w}\right)-H\left( \bar{x},\bar{z},\frac{\bar{x}-\bar{y}}{\varepsilon}+\gamma_{1}\bar{x},\frac{ \bar{z}-\bar{w}}{\rho}+\gamma_{2}\bar{z}\right)\] \[\quad\quad\quad-\left\langle\frac{\bar{x}-\bar{y}}{\varepsilon}+ \tau_{1}\bar{y},F_{1}(\bar{y},u,a)\right\rangle+\left\langle\frac{\bar{z}- \bar{w}}{\rho}+\tau_{2}\bar{w},F_{2}(\bar{y},u,a)\right\rangle-L(\bar{y},\bar{ w},u,a).\] Hence, by standard calculations (see also [1, Lemma 2.11]), using the definitions of \(\gamma_{1}\), \(\gamma_{2}\), \(\tau_{1}\), and \(\tau_{2}\), (46) and (47), we get \[v_{1}(\bar{x},\,\bar{z})-v_{2}(\bar{y},\bar{w})\leq C\frac{|\bar {x}-\bar{y}|^{2}}{\varepsilon}+\frac{|\bar{z}-\bar{w}|}{\rho}\,\omega_{2}(| \bar{x}-\bar{y}|)+\omega_{L}(|\bar{x}-\bar{y}|+|\bar{z}-\bar{w}|)\] \[\qquad\qquad\qquad+\gamma_{1}K(1+|\bar{x}|^{2})+\tau_{1}K(1+| \bar{y}|^{2})+\gamma_{2}K(1+|\bar{z}|^{2})+\tau_{2}K(1+|\bar{w}|^{2})\] \[\leq C\omega(c\sqrt{\varepsilon})+\frac{c\,\omega_{2}(c\sqrt{ \varepsilon})}{\sqrt{\rho}}+\omega_{L}(c\sqrt{\varepsilon}+c\sqrt{\rho})+K \beta N(\langle\bar{x}\rangle^{N}+\langle\bar{z}\rangle^{N}+\langle\bar{y} \rangle^{N}+\langle\bar{w}\rangle^{N}),\] where \(C\) is as in (i) above, \(K\) is a suitable positive constant (depending on \(C\) and \(\bar{K}\), \(\bar{K}\) as in (ii)), and \(\omega_{2}\), \(\omega_{L}\) are the moduli of continuity of \(F_{2}\) and \(L\), respectively. At this point, by choosing \(N:=1\wedge\frac{1}{K}\), \(\rho:=\omega_{2}(c\sqrt{\varepsilon})\), and using (45), we obtain \[\frac{\delta}{2} \leq\Phi(\bar{x},\bar{z},\bar{y},\bar{w})\leq v_{1}(\bar{x},\bar{ z})-v_{2}(\bar{y},\bar{w})-\beta(\langle\bar{x}\rangle^{N}+\langle\bar{z} \rangle^{N}+\langle\bar{y}\rangle^{N}+\langle\bar{w}\rangle^{N})\] \[\leq C\omega(c\sqrt{\varepsilon})+c\,\omega_{2}^{1/2}(c\sqrt{ \varepsilon})+\omega_{L}(c\sqrt{\varepsilon}+c\,\omega_{2}(c\sqrt{\varepsilon} )),\] which leads to a contradiction as soon as we make \(\varepsilon>0\) small enough. This comparison principle is interesting in itself. For instance, it implies that the lipschitzianity conditions on the current cost under which the optimality principles in [26, 10, 18, 21] were obtained, can be replaced by mere continuity plus growth assumptions. Proof of Proposition 4.1.: From [4, Thm. 8.1] it follows immediately that, given a MRF \(W\) for some \(p_{0}\) and \(\gamma\), the decrease condition (13) is equivalent to the viscosity supersolution condition \[\max_{u\in U}\left\{-\langle D^{-}W(z),f(z,u)\rangle-p_{0}(W(z))l(z,u)-\gamma(W (z))\right\}\geq 0 \tag{48}\] for all \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\). For any \(0<b<c\), we define the sets \[\mathcal{S}_{b}:=\{z\in\overline{\mathbb{R}^{n}\setminus\mathcal{C}}\,|\quad W( z)<b\},\quad\mathcal{S}_{(b,c)}:=\{z\in\overline{\mathbb{R}^{n}\setminus \mathcal{C}}\,|\quad b<W(z)<c\}.\] Since \(W\) is continuous, proper, and positive definite and \(\partial\mathcal{C}\) is compact, these sets are open, bounded, and nonempty. _Step 1._ Fix \(M>0\). 
Then, the set \(\mathcal{S}:=\overline{\mathcal{S}}_{M+1}\) is contained in the interior of \(\mathcal{S}^{\prime}:=\overline{\mathcal{S}}_{M+2}\) and we can define a \(C^{1}\) function \(\tilde{\eta}:\mathbb{R}^{n}\to[0,1]\) such that \[\tilde{\eta}(z):=\begin{cases}1&\quad\text{if }z\in\mathcal{S},\\ 0&\quad\text{if }z\in\mathbb{R}^{n}\setminus\mathcal{S}^{\prime}.\end{cases}\] For all \((z,u)\in\mathbb{R}^{n}\times U\), we set \[\tilde{f}(z,u):=\tilde{\eta}(z)f(z,u),\quad\tilde{\ell}(z,u):=\tilde{\eta}(z) \left[p_{0}(W(z))l(z,u)+\gamma(W(z))\right]\ (\geq 0).\] The functions \(\tilde{f}\), \(\tilde{\ell}\) are continuous, bounded, \(x\mapsto\tilde{f}(x,u)\) is (globally) Lipschitz continuous and \(x\mapsto\tilde{\ell}(x,u)\) is uniformly continuous, uniformly w.r.t. the control. Since \(\tilde{\eta}\geq 0\), from (48) it follows that \(W\) also satisfies \[\max_{u\in U}\left\{-\langle D^{-}W(z),\tilde{f}(z,u)\rangle-\tilde{\ell}(z,u )\right\}\geq 0\quad\text{for all }z\in\mathbb{R}^{n}\setminus\mathcal{C}. \tag{49}\] _Step 2._ Fix \(\varepsilon\in(0,1)\) and let \(\eta_{\varepsilon}:\mathbb{R}^{n}\to[0,1]\) be a \(C^{1}\) function such that \[\eta_{\varepsilon}(z):=\begin{cases}1&\quad\text{if }z\in\mathcal{S}_{\left( \varepsilon,\frac{1}{\varepsilon}\right)},\\ 0&\quad\text{if }z\in\mathbb{R}^{n}\setminus\mathcal{S}_{\left(\frac{ \varepsilon}{2},\frac{2}{\varepsilon}\right)}.\end{cases}\] Setting, for all \((z,u)\in\mathbb{R}^{n}\times U\), \[\tilde{f}_{\varepsilon}(z,u):=\eta_{\varepsilon}(z)\tilde{f}(z,u),\quad \tilde{\ell}_{\varepsilon}(z,u):=\eta_{\varepsilon}(z)\tilde{\ell}(z,u),\] we finally obtain that \(W\) satisfies \[\max_{u\in U}\left\{-\langle D^{-}W(z),\tilde{f}_{\varepsilon}(z,u)\rangle- \tilde{\ell}_{\varepsilon}(z,u)\right\}\geq 0\quad\text{for all }z\in\mathbb{R}^{n}. \tag{50}\] Let \(\lambda:\mathbb{R}\to(0,1)\) be a \(C^{1}\) function such that \(0<\dot{\lambda}\leq\bar{C}\) for some \(\bar{C}>0\), \(\lambda(s)\to 0\) as \(s\to-\infty\) and \(\lambda(s)\to 1\) as \(s\to+\infty\), as, for instance, \(\lambda(s)=\frac{1}{\pi}\left(\arctan(s)+\frac{\pi}{2}\right)\). Hence, by [1, Prop. 2.5], the function \[V(z,r):=\lambda(W(z)+r)\] turns out to be a bounded, continuous, and nonnegative viscosity supersolution of the Hamilton-Jacobi-Bellman equation \[\max_{u\in U}\left\{-\langle Dv(z,r)\,,\,\tilde{F}_{\varepsilon}(z,u)\rangle \right\}=0\quad\text{for all }(z,r)\in\mathbb{R}^{n+1},\] where \(\tilde{F}_{\varepsilon}(z,u):=(\tilde{f}_{\varepsilon}(z,u),\tilde{\ell}_{ \varepsilon}(z,u))\). Set \(\Psi(z,r):=V(z,r)\). Since \(V\geq 0\), given \(\sigma>0\), \(V\) is also a viscosity supersolution of the obstacle equation \[\min\left\{\sigma\,v(z,r)+\max_{u\in U}\left\{-\langle Dv(z,r)\,,\,\tilde{F}_{ \varepsilon}(z,u)\rangle\right\},v(z,r)-\Psi(z,r)\right\}=0\text{ on }\mathbb{R}^{n+1}. \tag{51}\] By introducing a new control \(a\in\{0,1\}\) and defining \(\hat{F}_{\varepsilon}(z,u,a):=a\,\tilde{F}_{\varepsilon}(z,u)\), \(\hat{L}(z,r,u,a):=(1-a)\sigma\Psi(z,r)\), we can reformulate (51) as \[\sigma\,v(z,r)+\min_{a\in\{0,1\}}\max_{u\in U}\left\{-\langle Dv(z,r)\,,\,\hat {F}_{\varepsilon}(z,u,a)\rangle-\hat{L}(z,r,u,a)\right\}=0\text{ on }\mathbb{R}^{n+1}, \tag{52}\] where all the assumptions of Lemma 4.1 are met. Thus, (52) (equivalently, (51)) satisfies the comparison principle for bounded viscosity sub- and supersolutions. 
In particular, by a standard dynamic programming procedure, the unique bounded, continuous solution of (51) is the value function \[V_{\sigma}^{\varepsilon}(z,r):=\inf_{u\in\mathcal{M}([0,+\infty),U)}\sup_{T\geq 0 }e^{-\sigma T}\Psi(\tilde{x}_{\varepsilon}(T),\tilde{x}_{\varepsilon}^{0}(T)) \qquad\text{for all }(z,r)\in\mathbb{R}^{n+1},\] where \((\tilde{x}_{\varepsilon},\tilde{x}_{\varepsilon}^{0})\) is the unique solution of the control system \((\dot{x},\dot{x}^{0})=\tilde{F}_{\varepsilon}(x,u)\) with initial condition \((z,r)\). From the comparison principle, \[V(z,r)\geq V_{\sigma}^{\varepsilon}(z,r)\qquad\text{for all }(z,r)\in\mathbb{R}^{n +1}. \tag{53}\] Given \(z\in\mathcal{S}_{\left(\varepsilon,\frac{1}{\varepsilon}\right)}\) and for every \(u\in\mathcal{M}([0,+\infty),U)\), set \[\tilde{T}_{z}^{\varepsilon}(u):=\inf\left\{t\geq 0\mid\ W(\tilde{x}_{ \varepsilon}(t))\geq\frac{1}{\varepsilon}\text{ or }W(\tilde{x}_{\varepsilon}(t))\leq \varepsilon\right\}\leq+\infty.\] Clearly, \(\tilde{T}_{z}^{\varepsilon}(u)>0\), as \(\varepsilon<W(z)<\frac{1}{\varepsilon}\). Furthermore, for all \(t\in[0,\tilde{T}_{z}^{\varepsilon}(u))\), the solution \((\tilde{x}_{\varepsilon},\tilde{x}_{\varepsilon}^{0})\) corresponding to \(u\) coincides with the solution, say \((\tilde{x},\tilde{x}^{0})\), of the control system \((\dot{x},\dot{x}^{0})=(\tilde{f},\tilde{\ell})(x,u)\) with \((\tilde{x},\tilde{x}^{0})(0)=(z,r)\). Hence, letting \(\sigma\) tend to zero in (53), for every \((z,r)\in\mathcal{S}_{\left(\varepsilon,\frac{1}{\varepsilon}\right)}\times \mathbb{R}\), we get the inequality \[V(z,r)\geq\inf_{u\in\mathcal{M}([0,+\infty),U)}\sup_{T\in\left[0,\tilde{T}_{z} ^{\varepsilon}(u)\right)}V(\tilde{x}(T),\tilde{x}^{0}(T)),\] that, by a recursive procedure as in [26], finally implies \[V(z,r)\geq\inf_{u\in\mathcal{M}([0,+\infty),U)}\sup_{T\in\left[0,\tilde{T}_{z }(u)\right)}V(\tilde{x}(T),\tilde{x}^{0}(T))\quad\text{for all }(z,r)\in \mathbb{R}^{n+1}, \tag{54}\] if \(\tilde{T}_{z}(u)\) is the first time at which \(\tilde{x}\) reaches the target \(\mathcal{C}\) (possibly equal to \(+\infty\)). _Step 3._ Let \(z\in\overline{\mathcal{S}}_{M}\setminus\mathcal{C}\), namely \(0<W(z)\leq M\), and fix \(0<\rho\leq-\lambda(M)+\lambda(M+1)\). From (54) with initial point \((z,0)\), it follows that there exists a control \(u\in\mathcal{M}([0,+\infty),U)\) such that, for any \(T\in[0,\tilde{T}_{z}(u))\), we get \[V(\tilde{x}(T),\tilde{x}^{0}(T))=\lambda\left(W(\tilde{x}(T))+\int_{0}^{T} \tilde{\ell}(\tilde{x}(t),u(t))\,dt\right)\leq V(z,0)+\rho\leq\lambda(M+1),\] namely, \[W(\tilde{x}(T))+\int_{0}^{T}\tilde{\ell}(\tilde{x}(t),u(t))\,dt\leq M+1.\] Therefore, \(\tilde{x}(t)\) belongs to \(\mathcal{S}\) for all \(t\in[0,\tilde{T}_{z}(u))\), so that \(\tilde{\eta}\) as in Step 1 is identically \(1\). As a consequence, we have \(\tilde{f}(x,u)\equiv f(x,u)\), \(\tilde{\ell}(x,u)\equiv p_{0}(W(x))l(x,u)+\gamma(W(x))\), \(\tilde{x}(\cdot)\equiv x(\cdot\,,u,z)\), and \(\tilde{T}_{z}(u)\equiv T_{z}(u)\) as in Def. 2.1, so that, in particular \(u\in\mathcal{U}(z)\). By the arbitrariness of \(M>0\), the above considerations imply that \(W\) satisfies (35) for all \(z\in\mathbb{R}^{n}\setminus\mathcal{C}\). The proof of Proposition 4.1 is thus complete.
2309.07943
Eigenvalue attraction in open quantum systems, biophysical systems, and Parity-Time symmetric materials
We investigate eigenvalue attraction for open quantum systems, biophysical systems, and Parity-Time symmetric materials. To determine whether an eigenvalue of a real matrix and its complex conjugate attract, we derive expressions for the second derivative of the eigenvalues, which depends on contributions from inertial forces, the attraction between an eigenvalue and its complex conjugate, and the force exerted by the remaining eigenvalues in the spectrum.
Pete Rigas
2023-09-14T08:02:14Z
http://arxiv.org/abs/2309.07943v3
Eigenvalue attraction in open quantum systems, biophysical systems, and Parity-Time symmetric materials ###### Abstract We investigate eigenvalue attraction for open quantum systems, biophysical systems, and for Parity-Time symmetric materials. To determine whether an eigenvalue and its complex conjugate of a real matrix attract, we derive expressions for the second derivative of eigenvalues, which is dependent upon contributions from inertial forces, attraction between an eigenvalue and its complex conjugate, as well as the force of the remaining eigenvalues in the spectrum. 1 Footnote 1: _Keywords_: Real matrices, PT symmetric materials, biophysical systems, time evolution ## 1 Introduction ### Overview Random matrices have emerged as an intense field of study in probability theory, with efforts devoted towards quantifying spectra [6], distribution of eigenvalues [3], information theory [1], black holes [2], the circular law [7], and expected norms of random matrices [10]. To further explore some directions of interest relating to random matrices that are raised in [4], we make use of a previously developed framework, from [4], which provides three possible behaviors for attraction between eigenvalues of a real in the complex plane. First, we provide several expressions for the force of the complex conjugate of an eigenvalue on an eigenvalue, the expectation of this force, which has contributions from an inertial component, as well as contributions from interactions between eigenvalues of the spectrum. In the context of biophysical systems discussed in [5], ### Matrix objects For an \(n\times n\) real matrix \(M\big{(}t\big{)}\in\mathbf{R}^{n\times n}\) parametrized at time \(t\), the force \(F\) between an eigenvalue and its complex conjugate takes the form, \[\underset{j\in\mathbf{N}}{\sum}F\big{(}\bar{\lambda_{j}}\longrightarrow \lambda_{j}\big{)}=-i\underset{j\in\mathbf{N}}{\sum}\frac{\big{|}u_{j}^{\rm T} \dot{M}\big{(}t\big{)}v_{j}\big{|}}{\text{Im}\big{(}\lambda_{j}\big{(}t\big{)} \big{)}}\enspace,\] for the standard right, and left, eigenvectors \(v_{i}\) and \(u_{i}\) of \(\lambda_{i}\). Under the representation of \(M\big{(}t\big{)}\) as a time-dependent stochastic process, \(M\big{(}t_{i}+\delta t\big{)}=M\big{(}t_{i}\big{)}+\delta tP\big{(}t_{i}\big{)}\), for \(\delta t\in\big{[}0,t_{i+1}-t_{i}\big{]}\), where \(P\big{(}t_{i}\big{)}\) is a diagonal matrix at \(t_{i}\). From the force between an eigenvalue and its complex conjugate, the expected value, \[\mathbf{E}\big{[}\underset{j\in\mathbf{N}}{\sum}F\big{(}\bar{ \lambda_{j}}\longrightarrow\lambda_{j}\big{)}\big{]}\equiv\underset{j\in \mathbf{N}}{\sum}\mathbf{E}\big{[}F\big{(}\bar{\lambda_{j}}\longrightarrow \lambda_{j}\big{)}\big{]}\equiv-i\underset{m,l\in\mathbf{N}:m,l\neq j}{\sum} \frac{\mathbf{E}\big{[}p_{m}^{2}\big{]}\big{|}u_{i}^{*m}\big{|}^{2}\big{|}v_{ i}^{l}\big{|}^{2}}{2\text{Im}\big{(}\lambda_{j}\big{(}t\big{)}\big{)}}\] \[\overset{(\text{\rm id})}{=}-i\underset{j\in\mathbf{N}}{\sum} \frac{\mathbf{E}\big{[}p^{2}\big{]}\big{|}\big{|}u_{j}\big{|}_{2}^{2}}{2\text{ Im}\big{(}\lambda_{j}\big{(}t\big{)}\big{)}}\enspace,\] with respect to the probability measure \(\mathbf{P}\big{(}\cdot\big{)}\) of standard normal random variables. 
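The force expression above can be evaluated numerically once the left and right eigenvectors are in hand. The following is a minimal sketch, not taken from the paper, assuming NumPy: the random matrices, the use of the rows of \(V^{-1}\) as left eigenvectors (so that \(u_{j}^{*}v_{i}=\delta_{ij}\)), and the finite-difference check of the eigenvalue velocities are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))        # a real matrix M(t) at a fixed time
Mdot = rng.standard_normal((n, n))     # an illustrative time derivative of M(t)

lam, V = np.linalg.eig(M)              # right eigenvectors are the columns of V
U = np.linalg.inv(V)                   # rows of U are left eigenvectors with u_j^* v_i = delta_ij

# First-order eigenvalue velocities u_j^* Mdot v_j, checked against a finite difference.
lam_dot = np.diag(U @ Mdot @ V)
dt = 1e-6
lam_pert = np.linalg.eigvals(M + dt * Mdot)
for j in range(n):
    k = np.argmin(np.abs(lam_pert - lam[j]))   # match to the nearest perturbed eigenvalue
    fd = (lam_pert[k] - lam[j]) / dt
    print(j, np.round(lam_dot[j], 4), np.round(fd, 4))

# Force of the complex conjugate on lambda_j, following the expression above
# (only eigenvalues with nonzero imaginary part contribute).
for j in range(n):
    if abs(lam[j].imag) > 1e-12:
        F = -1j * abs(U[j] @ Mdot @ V[:, j]) / lam[j].imag
        print("F(conj -> lambda_%d) =" % j, np.round(F, 4))
```

Only eigenvalues off the real axis enter the force, since the denominator is \(\mathrm{Im}(\lambda_{j}(t))\).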
For \(\epsilon\) sufficiently small, as the parameter approaches \(0\),

\[M(t)\equiv\lim_{\epsilon\longrightarrow 0}M_{\epsilon}(t)\enspace.\]

Following the discussion of [6], the eigenvalue equations are,

\[M(t)v_{i}(t)=\lambda_{i}(t)v_{i}(t)\enspace,\]

\[u_{i}^{*}(t)M(t)=\lambda_{i}(t)u_{i}^{*}(t)\enspace,\]

in which, as in the previous objects, the left and right eigenvectors, and the eigenvalues, of the time-dependent real matrix \(M(t)\) appear. From the eigenvalue equations and the matrix of eigenvectors,

\[V(t)\equiv\bigcup_{1\leq i\leq n}\mathrm{span}\big\{v_{i}(t)\big\}\enspace,\]

the Kronecker delta is obtained from the product,

\[u_{j}^{*}(t)v_{i}(t)=\delta_{ij}\enspace.\]

Differentiating the eigenvalue equations above, involving \(M(t)\) and the left and right eigenvectors, with respect to time yields,

\[\dot{M}(t)v_{i}(t)+M(t)\dot{v}_{i}(t)=\dot{\lambda}_{i}(t)v_{i}(t)+\lambda_{i}(t)\dot{v}_{i}(t)\enspace,\]

\[\dot{u}_{i}^{*}(t)M(t)+u_{i}^{*}(t)\dot{M}(t)=\dot{\lambda}_{i}(t)u_{i}^{*}(t)+\lambda_{i}(t)\dot{u}_{i}^{*}(t)\enspace,\]

which implies that the velocity, and the second derivative with respect to time, of the eigenvalues are,

\[\dot{\lambda}_{j}(t)=u_{j}^{*}(t)\dot{M}(t)v_{j}(t)\enspace,\]

and,

\[\ddot{\lambda}_{j}(t)=u_{j}^{*}(t)\ddot{M}(t)v_{j}(t)+\sum_{i\neq j}\frac{\big(u_{i}^{*}(t)\dot{M}(t)v_{j}(t)\big)\big(u_{j}^{*}(t)\dot{M}(t)v_{i}(t)\big)}{\lambda_{i}(t)-\lambda_{j}(t)}\enspace.\]

### Paper organization

To further expand upon the eigenvalue attraction framework presented in the previous section, in the next section we present an overview of characteristics of eigenvalues of real matrices arising in descriptions of biophysical systems [5]. In each setting, we provide closed expressions for \(\ddot{\lambda}_{j}(t)\), which are each provided in the following _Main Result_:

_Main Result_, \(\ddot{\lambda}_{j}(t)\) for open quantum systems, biophysical systems, and parity-time symmetric materials.

* _Case one_, _second derivative of the eigenvalues for state matrices of open quantum systems_. For open quantum systems, \(\ddot{\lambda}_{j}(t)\) reads, \[\langle\widetilde{\Psi}_{j}|\,\ddot{\widetilde{H}}(t)\,|\Psi_{j}\rangle+\sum_{i\neq j}\frac{\big(\langle\widetilde{\Psi}_{i}|\,\dot{\widetilde{H}}(t)\,|\Psi_{j}\rangle\big)\big(\langle\widetilde{\Psi}_{j}|\,\dot{\widetilde{H}}(t)\,|\Psi_{i}\rangle\big)}{h_{i}-h_{j}}\enspace.\]
* _Case two_, _second derivative of the eigenvalues for state matrices of biophysical systems_. For biophysical systems, \(\ddot{\lambda}_{j}(t)\) reads, \[\exp\Big(-\tfrac{|\bar{r}-\bar{r}_{j}|}{\xi_{n}}\Big)\,\ddot{\Omega}^{\mathrm{LE}}(t)\,\exp\Big(-\tfrac{|\bar{r}-\bar{r}_{j}|}{\xi_{n}}\Big)+\sum_{i\neq j}\frac{\Big(\exp\big(-\tfrac{|\bar{r}-\bar{r}_{i}|}{\xi_{n}}\big)\,\dot{\Omega}^{\mathrm{LE}}(t)\,\exp\big(-\tfrac{|\bar{r}-\bar{r}_{j}|}{\xi_{n}}\big)\Big)\Big(\exp\big(-\tfrac{|\bar{r}-\bar{r}_{j}|}{\xi_{n}}\big)\,\dot{\Omega}^{\mathrm{LE}}(t)\,\exp\big(-\tfrac{|\bar{r}-\bar{r}_{i}|}{\xi_{n}}\big)\Big)}{\Lambda_{i}-\Lambda_{j}}\enspace.\]
* _Case three_, _second derivative of the eigenvalues for state matrices of parity-time symmetric materials_.
For parity-time symmetric materials, \(\ddot{\lambda}_{j}(t)\) reads,

\[\frac{\dot{\mathcal{P}}(j,i)}{v_{j}(t)}+\sum_{i\neq j}\frac{\mathcal{P}(i,j)\,\mathcal{P}(j,i)}{\frac{1\pm\sqrt{1-M_{11}(i)M_{22}(i)}}{M_{22}(i)}-\frac{1\pm\sqrt{1-M_{11}(j)M_{22}(j)}}{M_{22}(j)}}\enspace.\]

For each of the cases above, \(\widetilde{H}\), \(\Omega^{\mathrm{LE}}\), and \(M\) denote the state matrices, \(\dot{\widetilde{H}}\), \(\dot{\Omega}^{\mathrm{LE}}\), and \(\dot{M}\) denote their first time derivatives, and \(\ddot{\widetilde{H}}\), \(\ddot{\Omega}^{\mathrm{LE}}\), and \(\ddot{M}\) denote their second time derivatives, for open quantum systems, biophysical systems, and parity-time symmetric materials, respectively. For the remaining quantities, \(\langle\widetilde{\Psi}_{j}|\) and \(\exp\big(-\frac{|\bar{r}-\bar{r}_{j}|}{\xi_{n}}\big)\) denote the \(j\)th left eigenvectors, \(\langle\widetilde{\Psi}_{i}|\) and \(\exp\big(-\frac{|\bar{r}-\bar{r}_{i}|}{\xi_{n}}\big)\) denote the \(i\)th left eigenvectors, \(|\Psi_{j}\rangle\) and \(\exp\big(-\frac{|\bar{r}-\bar{r}_{j}|}{\xi_{n}}\big)\) denote the \(j\)th right eigenvectors, and \(|\Psi_{i}\rangle\) and \(\exp\big(-\frac{|\bar{r}-\bar{r}_{i}|}{\xi_{n}}\big)\) denote the \(i\)th right eigenvectors, for open quantum systems and for biophysical systems, respectively. For parity-time symmetric materials, \(\mathcal{P}(i,j)\) and \(\mathcal{P}(j,i)\) respectively denote \(u_{i}^{*}(t)\dot{M}(t)v_{j}(t)\) and \(u_{j}^{*}(t)\dot{M}(t)v_{i}(t)\). Finally, from each of the three cases above, \(h_{i}\), \(\Lambda_{i}\), and \(\frac{1\pm\sqrt{1-M_{11}(i)M_{22}(i)}}{M_{22}(i)}\) denote the \(i\)th eigenvalues of open quantum systems, biophysical systems, and parity-time symmetric materials, while, similarly, \(h_{j}\), \(\Lambda_{j}\), and \(\frac{1\pm\sqrt{1-M_{11}(j)M_{22}(j)}}{M_{22}(j)}\) denote the \(j\)th eigenvalues.

## 2 Eigenvalue attraction

We apply the framework described in [4] to open quantum systems, biophysical systems, and Parity-Time symmetric materials below.

### Open quantum systems

The first application is to open quantum systems.

#### 2.1.1 Description

In the open quantum system setting [9], the procedure for obtaining the eigenvalues and eigenvectors of a real matrix from the time evolution of \(M(t)\) differs. For Hamiltonians that are not Hermitian, the eigenvalues and eigenvectors are similarly observed to change abruptly, as we also describe for biophysical systems below.
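The abrupt change just mentioned is already visible in a two-by-two real family. The sketch below is our own toy example (assuming NumPy, not taken from [9]): as the parameter is swept, a complex-conjugate pair attracts, collides on the real axis, and splits into two real eigenvalues.

```python
import numpy as np

# Eigenvalues of [[0, 1], [t, 0]] are +/- sqrt(t):
# a conjugate pair for t < 0, a double eigenvalue at t = 0, two real eigenvalues for t > 0.
def eigs(t):
    return np.linalg.eigvals(np.array([[0.0, 1.0], [t, 0.0]]))

for t in (-0.04, -0.01, 0.0, 0.01, 0.04):
    print(f"t = {t:+.2f}  eigenvalues: {np.round(eigs(t), 3)}")
```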
To determine locality constraints for pure state preparation without undesired decoherence and/or interference, quantum dynamical semigroups are generated by the Markovian Master Equation, \[\frac{\mathrm{d}\rho\big{(}t\big{)}}{\mathrm{d}t}=-i\big{[}H,\rho\big{(}t\big{)} \big{]}+\sum_{k}\big{(}L_{k}\rho\big{(}t\big{)}L_{k}^{\dagger}-\frac{1}{2} \big{\{}L_{k}^{\dagger}L_{k},\rho\big{(}t\big{)}\big{\}}\big{)}\ \,\] for \(H=H^{\dagger}\), where the time evolution of the density \(\rho\big{(}t\big{)}\) at \(t\) is equivalent to contributions from an imaginary term of the Lie bracket between \(H\) and \(\rho\big{(}t\big{)}\), as well as a summation over \(k\) of the Lindblad operators \(\big{\{}L_{k}\big{\}}\). From this relationship between the time evolution and Hamiltonian, from a decomposition of the Hilbert space over \(n\) components, \(\mathcal{H}_{\mathcal{I}}=\otimes_{a=1}^{n}\mathcal{H}_{a}\), the Hamiltonian is said to be _QL_ if, \[H=\sum_{j}H_{j}\ \,\] with each \(H_{j}=H_{\mathcal{N}_{j}}\otimes I_{\widetilde{\mathcal{N}_{j}}}\), for, \[I_{\widetilde{\mathcal{N}_{j}}}=\otimes_{a\not\in\mathcal{N}_{j}}I_{a}\ \,\] and \(\mathcal{N}_{j}\subseteq\big{\{}1,\cdots,n\big{\}}\), for \(j=1,\cdots,M\), and identity operators \(I\) and \(I_{a}\), under the partition, \[\mathcal{H}_{\mathcal{I}}=\mathcal{H}_{S}\oplus\mathcal{H}_{S}^{\perp}\ \,\] for the block representation, \[X\equiv\begin{bmatrix}X_{S}&X_{P}\\ X_{Q}&X_{R}\end{bmatrix}\ \,\] From this notion of the Hamiltonian being _QL_, the Markov Master Equation is said to be _QL_ if both the Hamiltonian and \(\big{\{}L_{k}\big{\}}\), the noise operators, are _QL_. The Hamiltonian and noise operators are said to be local if each can be expressed with an identity operator with the exception of at most one subsystem. Next, the related notion of how a pure quantum state can be efficiently prepared asymptotically, termed the globally asymptotically stable state, entails that for some initial configuration \(\rho_{0}\), the limit in infinite time, \[\lim_{t\longrightarrow+\infty}\exp\bigl{(}\mathcal{L}t\big{)}\big{(}\rho_{0} \big{)}=\rho\ \,\] stabilizes to \(\rho\), independently of the initial condition, for, \[\exp\bigl{(}\mathcal{L}t\big{)}\equiv\overset{n}{\underset{a=1}{\otimes}}\exp \bigl{(}\mathcal{L}_{a}t\bigr{)}\ \.\] If the state that we wish to prepare in an asymptotically, global, manner is not stable, it is said to otherwise be dissipatively quasi-locally stabilizable, (**Definition 2.1**, [9]), if, \[\frac{\mathrm{d}\rho}{\mathrm{d}t}=\sum_{k}\big{(}D_{k}\rho D_{k}^{\dagger}- \frac{1}{2}\big{\{}D_{k}^{\dagger}D_{k},\rho\big{\}}\big{)}\ \,\] is satisfied for the collection of _QL_ operators \(\big{\{}D_{k}\big{\}}\). #### 2.1.2 Eigenvalue attraction statement _Main result, case one_. 
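Before stating the eigenvalue-attraction result for this case, the Markovian Master Equation above can be made concrete with a small simulation. The sketch below is our own minimal example, assuming NumPy: a single qubit with \(H=\frac{\omega}{2}\sigma_{z}\) and a single Lindblad operator \(L=\sqrt{\gamma}\,\sigma_{-}\), integrated with a plain Euler step; the chosen rates and step sizes are illustrative and are not taken from [9].

```python
import numpy as np

# Single-qubit operators in the basis (|e>, |g>)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # |g><e|, lowers the excited state

omega, gamma = 1.0, 0.2
H = 0.5 * omega * sz
Ls = [np.sqrt(gamma) * sm]                        # Lindblad (noise) operators

def lindblad_rhs(rho):
    """Right-hand side of the Markovian Master Equation above."""
    drho = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in the excited state
dt, steps = 0.01, 1000
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)            # simple Euler step

t = dt * steps
print("trace:", rho.trace().real)                 # should stay ~1
print("excited population:", rho[0, 0].real, "vs exp(-gamma t):", np.exp(-gamma * t))
```

The trace stays numerically equal to one and the excited-state population decays approximately as \(e^{-\gamma t}\), which is a quick sanity check on the dissipator.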
To characterize eigenvalue attraction for random matrices of Open Quantum systems, consider eigenvalues of the form, [9], \[h\approx\widetilde{H}_{S}\ \,\] which is related to the following block representation for \(H\), similar to that introduced in _2.2.1_, from, \[H\equiv\begin{bmatrix}H_{S}&H_{P}\\ H_{Q}&H_{R}\end{bmatrix}\ \,\] which holds from the fact that the Markovian Master Equation, \[\frac{\mathrm{d}\rho\big{(}t\big{)}}{\mathrm{d}t}=-i\big{[}H,\rho\big{(}t \big{)}\big{]}+\sum_{k}\big{(}L_{k}\rho\big{(}t\big{)}L_{k}^{\dagger}-\frac{1}{ 2}\{L_{k}^{\dagger}L_{k},\rho\big{(}t\big{)}\}\big{)}\ \,\] is invariant under \(\widetilde{H}\) and \(\widetilde{L_{k}}\), in which, \[\frac{\mathrm{d}\rho\big{(}t\big{)}}{\mathrm{d}t}=-i\big{[}\widetilde{H}, \rho\big{(}t\big{)}\big{]}+\sum_{k}\big{(}\widetilde{L_{k}}\rho\big{(}t\big{)} \widetilde{L_{k}^{\dagger}}-\frac{1}{2}\{\widetilde{L_{k}^{\dagger}} \widetilde{L_{k}},\rho\big{(}t\big{)}\}\big{)}\ \,\] also holds, for, \[\widetilde{H}\equiv H+\frac{i}{2}\sum_{k}\big{(}l_{k}^{\intercal}L_{k}-l_{k} L_{k}^{\dagger}\big{)}\ \,\] for \(\widetilde{L_{k}}=L_{k}-l_{k}I\), with block diagonal \(\widetilde{H}\), in which, for an arbitrary number of blocks along the diagonal, \[\widetilde{H_{S}}\equiv\begin{bmatrix}\text{Block}&0\\ \ddots&\ddots\\ 0&\text{Block}\end{bmatrix}\ \,\] With the eigenvectors being \(\ket{\Psi}\), this quantum state is used to construct pure states \(\ket{\Psi}\bra{\Psi}\) with the operators \(\big{\{}D_{k}\big{\}}\). Equipped with the eigenvalues and eigenvectors, \(\breve{\lambda_{j}}\big{(}t\big{)}\) reads, \[\breve{\lambda_{j}}\big{(}t\big{)}=\bra{\Psi_{j}}\widetilde{H}\breve{t} \big{(}t\big{)}+\sum_{i\neq j}\underbrace{\big{(}\bra{\widetilde{\Psi}_{i}} \widetilde{H}\breve{t}\big{(}t\ket{\Psi_{j}}\big{)}\big{)}\big{(}\bra{ \widetilde{\Psi}_{j}}\widetilde{H}\breve{t}\breve{t}\breve{t}\ket{\Psi_{i}} \big{)}}_{h_{i}-\bar{h}_{j}}\ \,\] where \(h_{i}\) and \(h_{j}\) respectively denote the \(i\) th, and \(j\) th, eigenvalues of \(\widetilde{H}\), \(\dot{\widetilde{H}}\) denotes the first time derivative of the state matrix \(\widetilde{H}\), \[\dot{\widetilde{H}}\equiv\dot{H}+\frac{i}{2}\sum_{k}\big{(}l_{k}^{\intercal} \dot{L_{k}}-l_{k}\dot{L_{k}^{\dagger}}\big{)}\ \,\] and the state \(\bra{\widetilde{\Psi}_{i}}\) denotes the conjugate-transform of the state \(\ket{\Psi_{i}}\). ### Biophysical systems The second applicaton is to biophysical systems. #### 2.2.1 Description From the time evolution of \(M\) with respect to \(t\) that is defined in the previous section, in biophysical systems, as discussed in [5], one encounters abrupt changes in the eigenvalues of real matrices, in which eigenvalues with positive real part can transition to have vanishing imaginary part. 
From standard models of diffusion underlying several biological processes, the linearized evolution matrix, \[\Omega\equiv\begin{bmatrix}a-2D&D&\cdots&\cdots&D\\ D&a-2D&D&\cdots&0\\ 0&D&\cdots&\cdots&0\\ \vdots&0&\vdots&\vdots&D\\ D&\cdots&0&D&a-2D\end{bmatrix}\ \,\] arises from the approximation, \[\frac{\mathrm{d}c_{n}\big{(}t\big{)}}{\mathrm{d}t}\approx\sum_{m\in\mathbf{N} }\Omega_{mn}c_{m}\big{(}t\big{)}\ \,\] where the function \(c_{n}\big{(}t\big{)}\), the micro organism concentration per volume at time \(t\), satisfies the discretization, \[\frac{\mathrm{d}c_{n}\big{(}t\big{)}}{\mathrm{d}t}=D\big{(}c_{n+1}+c_{n-1}-2c_{ n}\big{)}+ac_{n}-bc_{n}^{2}\ \,\] obtained from the diffusion PDE, \[\frac{\partial c\big{(}x,t\big{)}}{\partial t}=D\triangledown^{2}c\big{(}x,t \big{)}+ac\big{(}x,t\big{)}-bc\big{(}x,t\big{)}^{2}\ \,\] for the spatial diffusion constant \(D\), which is taken to be strictly positive. Given some positive concentration of micro organisms per volume, over a strictly positive number \(M\) of sites over the lattice, the linearized evolution matrix similarly reads, \[\Omega^{\mathrm{LE}}\equiv\begin{bmatrix}a-2D+U_{1}&D\mathrm{exp}\big{(}h \big{)}&\ldots&\ldots&D\mathrm{exp}\big{(}-h\big{)}\\ D\mathrm{exp}\big{(}-h\big{)}&a-2D+U_{2}&D\mathrm{exp}\big{(}h\big{)}&\ldots&0 \\ 0&D\mathrm{exp}\big{(}-h\big{)}&\ldots&\ldots&0\\ \vdots&0&\vdots&\vdots&D\mathrm{exp}\big{(}h\big{)}\\ D\mathrm{exp}\big{(}h\big{)}&\ldots&0&D\mathrm{exp}\big{(}-h\big{)}&a-2D+U_{N }\end{bmatrix}\ \,\] for fluctuations \(\big{\{}U_{i}\big{\}}_{1\leq i\leq N}\) in the growth rate at site \(i\), \(b>0\), and the velocity flow field \(\vec{v_{0}}\propto\vec{h}\). #### 2.2.2 Eigenvalue attraction statement _Main result, case two._ To characterize eigenvalue attraction for random matrices of Biophysical systems, consider the collection of \(n\) eigenvalues, [5], \(\Lambda_{n}\), with eigenfunctions, \[\psi\big{(}r,t\big{)}\equiv\sum_{n}c_{n}\psi_{n}\big{(}r\big{)}\mathrm{exp} \big{(}\Lambda_{n}t\big{)}\ \,\] which are, given localization of each eigenfunction for \(\vec{r}\) near \(\vec{r}_{n}\), and inversely proportional to the localization length \(\xi_{n}\), \[\psi_{n}\big{(}\vec{r},\vec{r}_{n}\big{)}\stackrel{{\vec{r}_{n} \models\vec{r}_{n}}}{{=}}\psi_{n}\big{(}\vec{r}\big{)}\equiv\psi_{n}\big{(} \vec{r}_{n}\big{)}\sim\mathrm{exp}\big{(}-\frac{\big{|}\vec{r}-\vec{r}_{n} \big{|}}{\xi_{n}}\big{)}\ \,\] and, for a potential \(U\). If one gathers data pertaining to the correlation length for each component, with \(\big{(}\xi_{n}^{1},\cdots,\xi_{n}^{n}\big{)}\), set \(\xi_{n}\equiv\mathrm{sup}_{i}\xi_{n}^{i}\). 
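To make the linearized evolution matrix \(\Omega^{\mathrm{LE}}\) above concrete, the sketch below (our own, assuming NumPy) assembles it on a ring of \(N\) sites with random growth-rate fluctuations \(U_{i}\) and a convective bias \(h\), and computes its spectrum; the parameter values are illustrative and are not taken from [5].

```python
import numpy as np

def omega_le(N, a, D, h, U):
    """Linearized evolution matrix on a ring of N sites, following the form given above."""
    W = np.zeros((N, N))
    for i in range(N):
        W[i, i] = a - 2.0 * D + U[i]
        W[i, (i + 1) % N] = D * np.exp(h)     # biased hop in one direction
        W[i, (i - 1) % N] = D * np.exp(-h)    # biased hop in the other direction
    return W

rng = np.random.default_rng(1)
N, a, D, h = 50, 1.0, 0.5, 0.4
U = 0.3 * rng.standard_normal(N)              # random growth-rate fluctuations U_i

Lam = np.linalg.eigvals(omega_le(N, a, D, h, U))
print("max real part:", Lam.real.max())
print("number of complex eigenvalues:", np.sum(np.abs(Lam.imag) > 1e-9))
```

Varying \(h\) and the disorder strength lets one inspect how eigenvalues of this non-Hermitian matrix move between the real axis and complex-conjugate pairs.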
Over all \(n\), \(\Lambda_{n}\) is equivalent to the union, \[\Lambda\equiv\big{\{}\text{set of eigenvalues of the operator }D\triangledown^{2}+U\big{(}r\big{)}\big{\}}\equiv\bigcup_{n}\Lambda_{n}\ \,\] where, \[\Lambda_{n}\equiv\big{\{}n\ \text{th \ eigenvalue of the operator }D\triangledown^{2}+U\big{(}r\big{)}\big{\}}\ \.\] Equipped with the eigenvalues and eigenvectors, \(\lambda_{j}^{\bar{\phantom{j}}}\big{(}t\big{)}\) reads, for the first term, and, similarly, \[\sum_{i\neq j}\frac{\big{[}\exp\big{(}-\frac{\big{|}\bar{r_{1}}- \bar{r_{j}^{\bar{\phantom{j}}}}\big{|}}{\xi_{n}}\big{)},\stackrel{{ n-2}}{{\cdots}},\exp\big{(}-\frac{\big{|}\bar{r_{n}}-\bar{r_{i}^{\bar{ \phantom{j}}}}\big{|}}{\xi_{n}}\big{)}\big{]}\Omega^{\text{LE}}\big{(}t\big{)} \big{[}\exp\big{(}-\frac{\big{|}\bar{r_{1}}-\bar{r_{i}^{\bar{\phantom{j}}}} \big{|}}{\xi_{n}}\big{)},\stackrel{{ n-2}}{{\cdots}},\exp\big{(}- \frac{\big{|}\bar{r_{j}}-\bar{r_{j}^{\bar{\phantom{j}}}}\big{|}}{\xi_{n}} \big{)}\big{]}}{\Lambda_{i}-\Lambda_{j}}\times\cdots\] \[\bigg{(}\big{[}\exp\big{(}-\frac{\big{|}\bar{r_{1}}-\bar{r_{j}^{ \bar{\phantom{j}}}}\big{|}}{\xi_{n}}\big{)},\stackrel{{ n-2}}{{\cdots}},\exp\big{(}-\frac{\big{|}\bar{r_{n}}-\bar{r_{j}^{ \bar{\phantom{j}}}}\big{|}}{\xi_{n}}\big{)}\big{]}\Omega^{\text{LE}}\big{(}t \big{)}\big{[}\exp\big{(}-\frac{\big{|}\bar{r_{1}}-\bar{r_{i}^{\bar{\phantom{j }}}}\big{|}}{\xi_{n}}\big{)},\stackrel{{ n-2}}{{\cdots}},\exp\big{(}- \frac{\big{|}\bar{r_{n}}-\bar{r_{i}^{\bar{\phantom{j}}}}\big{|}}{\xi_{n}} \big{)}\big{]}\bigg{)}\ \,\] from the summation, for the second term, where \(\Lambda_{i}\) and \(\Lambda_{j}\), respectively denote the \(i\) th and \(j\) th eigenvalues, \(\Omega^{\text{LE}}\) is given by the matrix, \[\Omega^{\text{LE}}\equiv\begin{bmatrix}\dot{U}_{1}&D\text{exp}\big{(}h\big{)} &\cdots&\cdots&D\text{exp}\big{(}-h\big{)}\\ D\text{exp}\big{(}-h\big{)}&\dot{U}_{2}&D\text{exp}\big{(}h\big{)}&\cdots&0\\ 0&D\text{exp}\big{(}-h\big{)}&\cdots&\cdots&0\\ \vdots&0&\vdots&\vdots&D\text{exp}\big{(}h\big{)}\\ D\text{exp}\big{(}h\big{)}&\cdots&0&D\text{exp}\big{(}-h\big{)}&\dot{U}_{N} \end{bmatrix}\ \,\] \(\Omega^{\text{LE}}\) denotes the second time derivative of the \(\Omega^{\text{LE}}\), which is given by the matrix, \[\frac{\partial}{\partial t}\Omega^{\text{LE}}\big{(}t\big{)}\equiv\Omega^{ \text{LE}}\big{(}t\big{)}\equiv\Omega^{\text{LE}}\ \,\] and the \(i\) th and \(j\) th eigenvectors are constructed from the basis functions \(\psi_{n}\big{(}\vec{r},\vec{r}_{n}\big{)}\). Altogether, the second derivative of \(\lambda_{j}\big{(}t\big{)}\) reads, \[\exp\big{(}-\frac{\big{|}\vec{r}-\bar{r}_{j}^{\bar{\phantom{j}}}\big{|}}{\xi_{n }}\big{)}\Omega^{\text{LE}}\big{(}t\big{)}+\sum_{i\neq j}\frac{\bigg{(}\exp \big{(}-\frac{\big{|}\vec{r}-\bar{r}_{i}^{\bar{\phantom{j}}}\big{|}}{\xi_{n}} \big{)}\Omega^{\text{LE}}\big{(}t\big{)}\,\exp\big{(}-\frac{\big{|}\vec{r}- \bar{r}_{j}^{\bar{\phantom{j}}}\big{|}}{\xi_{n}}\big{)}\bigg{)}\bigg{(}\exp \big{(}-\frac{\big{|}\vec{r}-\bar{r}_{j}^{\bar{\phantom{j}}}\big{|}}{\xi_{n}} \big{)}\Omega^{\text{LE}}\big{(}t\big{)}\,\exp\big{(}-\frac{\big{|}\vec{r}- \bar{r}_{i}^{\bar{\phantom{j}}}\big{|}}{\xi_{n}}\big{)}\bigg{)}}{\Lambda_{i}- \Lambda_{j}}\ \.\] ### Parity-Time symmetric materials The third applicaton is to parity-time symmetric materials. 
#### 2.3.1 Description In the PT symmetric materials setting, [2, 5], the eigenvalues of real matrices arise from considering an M-matrix and its connection with the scattering, S-matrix, in which, \[\vec{E}^{+}=ME^{\vec{\cdot}}\ \,\] for \(\vec{E}^{+}=\left[E_{f}^{+}E_{B}^{+}\right]^{\mathsf{T}}\), \(\vec{E}^{-}=\left[E_{f}^{-}E_{B}^{-}\right]^{\mathsf{T}}\), and, \[M\equiv\begin{bmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{bmatrix}\ \,\] from properties of the transfer matrix \(M\) above, [2], which is given by the block representation, \[S\equiv\begin{bmatrix}T^{l}&R^{r}\\ R^{l}&T^{r}\end{bmatrix}\ \,\] where the entries of the scattering matrix above are given by, \(T^{l}=\big{(}M_{22}\big{(}k\big{)}\big{)}^{-1}\), \(R^{r}=\frac{M_{12}(k)}{M_{22}(k)}\), \(R^{l}=-\frac{M_{21}(k)}{M_{22}(k)}\), and \(T^{r}=\big{(}M_{22}\big{(}k\big{)}\big{)}^{-1}\). #### 2.3.2 Eigenvalue attraction statement _Main result, case three_. To characterize eigenvalue attraction for random matrices of PT symmetric materials, from unimodular \(M\) with determinant \(1\), consider the eigenvalues of the scattering matrix, which are given by, \[\lambda_{\pm,k}\equiv s_{\pm,k}\equiv\frac{1\pm\sqrt{1-M_{11}\big{(}k\big{)}M _{22}\big{(}k\big{)}}}{M_{22}\big{(}k\big{)}}\ \.\] With each \(\lambda_{k}\), the parity-time symmetric material satisfies the boundary conditions, in which \[\psi_{k_{\pm}}\big{(}x\big{)}=A_{\pm}\mathrm{exp}\big{(}ik_{\pm}x\big{)}+B_{ \pm}\mathrm{exp}\big{(}-ik_{\pm}x\big{)}\stackrel{{ x\to\pm\infty}}{{\longrightarrow}}\mathrm{exp} \big{(}\pm ikx\big{)}\ \,\] are given by the linear combination of exponentials, with powers \(ik_{\pm}x\), or \(-ik_{\pm}x\). Asymptotically, for large \(x\), the \(k\) th left and right eigenvectors obey, \[\psi_{k}^{\mathrm{L}}\big{(}x\big{)}\equiv\psi_{k}^{\mathrm{L}}\sim \begin{array}{l}N_{l}\big{(}\mathrm{exp}\big{(}ikx\big{)}+R^{l} \mathrm{exp}\big{(}-ikx\big{)}\big{)}\ \ \,\ \mathrm{as}\ x\longrightarrow-\infty\ \,\\ N_{l}T^{l}\mathrm{exp}\big{(}ikx\big{)}\ \[\begin{array}{c}\mathcal{P}\big{(}i,j,\dot{M},u_{i},v_{j}\big{)}\equiv\mathcal{P} \big{(}i,j\big{)}=\big{[}N_{l}{\rm exp}\big{(}ikx_{1}\big{)}+N_{l}R^{l}{\rm exp }\big{(}-ikx_{1}\big{)},\stackrel{{ n-2}}{{\cdots}},N_{l}{\rm exp }\big{(}ikx_{n}\big{)}+\cdots\\ N_{l}R^{l}{\rm exp}\big{(}-ikx_{n}\big{)}\big{]}M(t)\ \times\cdots\\ \big{[}N_{r}{\rm exp}\big{(}ikx_{1}\big{)}+N_{r}R^{r}{\rm exp}\big{(}-ikx_{1} \big{)},\stackrel{{ n-2}}{{\cdots}},N_{r}{\rm exp}\big{(}ik_{x}x_{n }\big{)}+\cdots\\ N_{r}R^{r}{\rm exp}\big{(}-ikx_{n}\big{)}\big{]}\ \,\end{array}\] in the first term of the summation over \(i\neq j\), for the \(i\) th left eigenvector, \[\psi_{k}^{{\rm L},i}\equiv\big{[}N_{l}{\rm exp}\big{(}ikx_{1}\big{)}+N_{l}R^ {l}{\rm exp}\big{(}-ikx_{1}\big{)},\stackrel{{ n-2}}{{\cdots}},N_{l}{\rm exp }\big{(}ikx_{n}\big{)}+N_{l}R^{l}{\rm exp}\big{(}-ikx_{n}\big{)}\big{]}\ \,\] and for the \(j\) th right eigenvector, \[\psi_{k}^{{\rm R},i}\equiv\big{[}N_{r}{\rm exp}\big{(}ikx_{1}\big{)}+N_{r}R^ {r}{\rm exp}\big{(}-ikx_{1}\big{)},\stackrel{{ n-2}}{{\cdots}},N_{r}{\rm exp }\big{(}ikx_{j}\big{)}+N_{r}R^{r}{\rm exp}\big{(}-ikx_{j}\big{)}\big{]}\ \.\] Similarly, for the other term in the summation over \(i\neq j\), \[\begin{array}{c}\mathcal{P}\big{(}j,i,\dot{M},u_{i},v_{j}\big{)}\equiv \mathcal{P}\big{(}j,i\big{)}=\big{[}N_{l}{\rm exp}\big{(}ikx_{1}\big{)}+N_{l}R ^{l}{\rm exp}\big{(}-ikx_{1}\big{)},\stackrel{{ n-2}}{{\cdots}},N_{l}{ \rm exp}\big{(}ikx_{n}\big{)}+N_{l}R^{l}{\rm exp}\big{(}-ikx_{n}\big{)}\big{]} 
\times\cdots\\ M\big{(}t\big{)}\big{[}N_{r}{\rm exp}\big{(}ikx_{1}\big{)}+N_{r}R^{r}{\rm exp }\big{(}-ikx_{1}\big{)},\stackrel{{ n-2}}{{\cdots}},\\ N_{r}{\rm exp}\big{(}ikx_{n}\big{)}+N_{r}R^{r}{\rm exp}\big{(}-ikx_{n}\big{)} \big{]}\ \,\end{array}\] in the second term of the summation over \(i\neq j\), and, \[\frac{\dot{\mathcal{P}}\big{(}j,i\big{)}}{v_{j}\big{(}t\big{)}}\equiv\frac{ \dot{\mathcal{P}}\big{(}j,i,\dot{M},u_{j},1\big{)}}{v_{j}\big{(}t\big{)}}= \frac{\mathcal{P}\big{(}j,i,\ddot{M},u_{j},1\big{)}}{v_{j}\big{(}t\big{)}}=u_{j }^{*}\big{(}t\big{)}M\ddot{(}t\big{)}\ \,\] in the term appearing before the summation over \(i\neq j\). The first time derivative of \(M\) has the block representation, \[\frac{\partial}{\partial t}M\big{(}t\big{)}\equiv\frac{\partial}{\partial t}M \equiv\dot{M}\big{(}t\big{)}\equiv\begin{bmatrix}\frac{\partial}{\partial t}M _{11}&\frac{\partial}{\partial t}M_{12}\\ \frac{\partial}{\partial t}M_{21}&\frac{\partial}{\partial t}M_{22}\end{bmatrix} \equiv\begin{bmatrix}\dot{M}_{11}&\dot{M}_{12}\\ \dot{M}_{21}&\dot{M}_{22}\end{bmatrix}\ \,\] for \(\dot{M}_{11}\equiv\dot{M}_{11}\big{(}t\big{)}\), \(\dot{M}_{12}\equiv\dot{M}_{12}\big{(}t\big{)}\), \(\dot{M}_{21}\equiv\dot{M}_{12}\big{(}t\big{)}\), and \(\dot{M}_{22}\equiv\dot{M}_{22}\big{(}t\big{)}\).
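As a quick numerical check of the closed form for \(s_{\pm,k}\) used above, the sketch below (our own, assuming NumPy) builds the scattering matrix from the entries \(T^{l}=T^{r}=1/M_{22}\), \(R^{r}=M_{12}/M_{22}\), \(R^{l}=-M_{21}/M_{22}\) of a sample unimodular transfer matrix and compares its eigenvalues with \((1\pm\sqrt{1-M_{11}M_{22}})/M_{22}\); the numerical values of the transfer-matrix entries are arbitrary illustrative choices.

```python
import cmath
import numpy as np

# A sample unimodular transfer matrix (det M = 1)
M11, M12, M21, M22 = 2.0, 0.5, 1.0, 0.75
assert abs(M11 * M22 - M12 * M21 - 1.0) < 1e-12

# Scattering matrix built from the transfer-matrix entries as in the text
Tl, Tr = 1.0 / M22, 1.0 / M22
Rr, Rl = M12 / M22, -M21 / M22
S = np.array([[Tl, Rr], [Rl, Tr]])

# Numerical eigenvalues vs the closed form s_pm = (1 +/- sqrt(1 - M11 M22)) / M22
num = np.sort_complex(np.linalg.eigvals(S))
root = cmath.sqrt(1.0 - M11 * M22)
closed = np.sort_complex(np.array([(1 + root) / M22, (1 - root) / M22]))
print(num)
print(closed)
```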
2305.19664
Unveiling Cross Modality Bias in Visual Question Answering: A Causal View with Possible Worlds VQA
To increase the generalization capability of VQA systems, many recent studies have tried to de-bias spurious language or vision associations that shortcut the question or image to the answer. Despite these efforts, the literature fails to address the confounding effect of vision and language simultaneously. As a result, when they reduce bias learned from one modality, they usually increase bias from the other. In this paper, we first model a confounding effect that causes language and vision bias simultaneously, then propose a counterfactual inference to remove the influence of this effect. The model trained with this strategy can concurrently and efficiently reduce vision and language bias. To the best of our knowledge, this is the first work to reduce biases resulting from confounding effects of vision and language in VQA, leveraging causal explain-away relations. We accompany our method with an explain-away strategy that improves accuracy on questions with numerical answers relative to existing methods, which has remained an open problem. The proposed method outperforms state-of-the-art methods on the VQA-CP v2 dataset.
Ali Vosoughi, Shijian Deng, Songyang Zhang, Yapeng Tian, Chenliang Xu, Jiebo Luo
2023-05-31T09:02:58Z
http://arxiv.org/abs/2305.19664v1
# Unveiling Cross Modality Bias in Visual Question Answering: A Causal View with Possible Worlds VQA ###### Abstract To increase the generalization capability of VQA systems, many recent studies have tried to debias spurious language or vision associations that shortcut the question or image to the answer. Despite these efforts, the literature fails to address the confounding effect of vision and language simultaneously. As a result, when they reduce bias learned from one modality, they usually increase bias from the other. In this paper, we first model a confounding effect that causes language and vision bias simultaneously, then propose a counterfactual inference to remove the influence of this effect. The model trained in this strategy can concurrently and efficiently reduce vision and language bias. To the best of our knowledge, this is the first work to reduce biases resulting from confounding effects of vision and language in VQA, leveraging causal explain-away relations. We accompany our method with an explain-away strategy, pushing the accuracy of the questions with numerical answers results compared to existing methods that have been an open problem. The proposed method outperforms the state-of-the-art methods in VQA-CP v2 datasets. ## 1 Introduction Visual Question Answering (VQA) systems are one of the most fundamental building blocks at the intersection of vision and language Zellers et al. (2019); Niu et al. (2021); Kolling et al. (2022). VQA systems use linguistic and visual information to obtain correct and robust answers to given questions from an image. Despite the efforts, regrettably, most VQA systems shortcut directly from the vision or language to an answer Niu and Zhang (2021); Cadene et al. (2019, 2019). This shortcut is known as vision or language bias and has been well-studied in recent years Jing et al. (2020); Ramakrishnan et al. (2018); Cadene et al. (2019); Clark et al. (2019); Niu et al. (2021); Gat et al. (2020). Spurious correlations sometimes shortcut an answer to an image, and other times question to an answer. CF-VQA Niu et al. (2021) was proposed to alleviate this problem by replacing natural indirect effect (NIE) with total indirect effect (TIE). Still, this method focuses only on language bias, ignores the visual information, and can also mislead the VQA model, resulting in CF-VQA sometimes giving its answer directly by uttering salient objects in the picture even if the answer obviously should be a number or "yes/no" according to the type of the question. Recent studies suggest that memory and culture influence the perception of visual information in humans Lupyan et al. (2020), the problem that we illustrate in Fig. 1 with a famous example of Rubin's vase Rubin (1915), where the same image can be perceived differently. The difference in preference and perception confounds VQA datasets, making them biased in data collection and annotation process Antol et al. (2015); Niu et al. (2021). Therefore, VQA models fail to generalize as these confounders affect vision and language in datasets. Contradicting those existing methods, we propose a new system called possible worlds VQA (PW-VQA) to address vision and language biases by removing the confounding effects of two modalities through a causal lens. After removing these effects captured through training, our model is less biased by either language or vision modality during test time. 
Furthermore, compared to other models, ours achieved significant performance improvement on the numerical questions, which used to be a struggling problem for previous methods. Our contributions are as follows. 1) We propose a causal graph separating the problem into two sub-graphs of anticausal learning and an explain-away network. We simultaneously model the visual and linguistic biases through the explain-away network to distinguish between bad and good language and vision biases. We model the experience bias of the annotator as an unobserved confounder that influences the choice of question and answer pairs. 2) We propose a counterfactual approach to reduce these bad biases while keeping the good ones. To the best of our knowledge, our work is the first to propose a causal method to simultaneously alleviate language and vision biases. 3) We double the accuracy of the numerical questions, which has been an open question recently Niu et al. (2021). ## 2 Motivation and Background Our method is motivated by Counterfactual VQA, CF-VQA Niu et al. (2021), which was motivated by Reducing Unimodal Biases for VQA, RUBi Cadene et al. (2019). We review these two methods and their evolution in 2.1 and 2.2 and then discuss their limitations in 2.3. ### Reducing Unimodal Biases for VQA The undirected graph of a RUBi is shown in Fig. 1(a), with \(\{V,Q,K,A,M\}\) as set of nodes, \(V\): image, \(Q\): question, \(K\): multimodal knowledge, \(A\): answer out of a set of answers \(\mathcal{A}=\{a\}\), \(M\): question mask. \(\mathcal{F}_{Q}\) is an encoder for questions, and \(\mathcal{F}_{V}\) is for images. Consequently, a multimodal function \(\mathcal{F}_{VQ}\) is used to obtain \(k=\mathcal{F}_{VQ}(v,q)\). An auxiliary neural network \(nn_{q}\) is trained to classify answers based on only \(\{q,a\}\) pairs. Then, the classification head is discarded at inference to obtain the masks \(m=\sigma(nn_{q}(\mathcal{F}_{Q}(q)))\), where \(\sigma\) is the _sigmoid_ function. The masks are then applied to the multimodal classification \(k\odot m\) to reduce the language bias. ### Counterfactual VQA (CF-VQA) CF-VQA uses counterfactual thinking and causal inference to improve RUBi, by only adding one learnable parameter. The causal graph of CF-VQA is shown in Fig. 1(b). The graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) is a Directed Acyclic Graph (DAG), where \(\mathcal{V}=\{V,Q,K,A\}\) with a set of causal edges such that if \(Q\to K\), then \(Q\) is a direct cause of \(K\). Moreover, \(Q\) is an indirect cause of \(A\) through the _mediator_\(K\), as \(Q\to K\to A\). The causal edge assumption states that every parent is a direct cause of all its children. The answer \(a\) can be defined in a multi-class classifier using logits (score) \(Z\). Therefore, for \(h\) as a fusion function, for question \(q\), image \(v\), and multimodal knowledge \(k\), these scores for question-only, multimodal fused and vision-only are: \[\begin{split} Z_{q}=\mathcal{F}_{Q}(q),\quad Z_{v}=\mathcal{F}_ {V}(v),\quad Z_{k}=\mathcal{F}_{VQ}(v,q),\\ Z_{q,v,k}=h(Z_{q},Z_{v},Z_{k}),\end{split} \tag{1}\] Denoting answer score \(Z_{q,v,k}\) as: \[Z_{q,v,k}=Z(Q=q,V=v,K=k), \tag{2}\] the total effect (TE) of \(V=v\) and \(Q=q\) on \(A=a\), according to Niu et al. (2021), is defined as: \[TE=Z_{q,v,k}-Z_{q^{*},v^{*},k^{*}}, \tag{3}\] where \(Z_{q^{*},v^{*},k^{*}}\) is answer logits \(Z\) for counterfactual question \(q^{*}\), counterfactual image \(v^{*}\), and counterfactual multimodal knowledge \(k^{*}\). 
The total effect can be decomposed into natural direct effect (NDE) and total indirect effect Figure 2: VQA graphs related to RUBi and CF-VQA are shown. a) In RUBi, question \(Q\) and image \(V\) are fused through multimodal knowledge \(K\) to obtain an answer \(A\), while question-only mask \(M\) is applied on \(K\); b) causal graph of CF-VQA is shown, where \(Q\to A\) and \(V\to A\) are vision and language shortcuts, all \(V\), \(Q\), and \(K\) are factual; c) output of VQA with counterfactual question \(Q=q^{*}\) and vision \(V=v^{*}\) is subtracted from a regular VQA with factual \(V=v\) and \(Q=q\). (TIE): \[TE=TIE+NDE. \tag{4}\] NDE for the question-only branch is \(Q\to A\) by comparing \(Z_{q,v^{*},k^{*}}\) and \(Z_{q^{*},v^{*},k^{*}}\): \[NDE=Z_{q,v^{*},k^{*}}-Z_{q^{*},v^{*},k^{*}}. \tag{5}\] Finally, using (3), (4), and (5), TIE will be: \[TIE=Z_{q,v,k}-Z_{q,v^{*},k^{*}}, \tag{6}\] as shown in Fig. 2c. Consequently, the logits \(Z_{q,v,k}\) is parametrized as \(\mathcal{F}_{Q}\): \(Q\!\rightarrow\!A\), and \(\mathcal{F}_{VQ}\): \((V,Q)\!\rightarrow\!K\!\rightarrow\!A\). The question-only and vision-only logits \(Z_{q}\) and \(Z_{v}\) will be as: \[Z_{b}=\begin{cases}z_{b}=\mathcal{F}_{B}(b)&\text{ if }B=b\\ z_{b}^{*}=c&\text{ if }b=\varnothing\end{cases}, \tag{7}\] where \(B\in\{Q,V\}\), and \(c\) as a constant, learnable feature, as described in Niu et al. (2021), and \(z_{b}^{*}\) is a counterfactual realization of \(Z_{b}\). Furthermore, multimodal knowledge's logit \(Z_{k}\) is defined as: \[Z_{k}=\begin{cases}z_{k}=\mathcal{F}_{VQ}(v,q)&\text{ if }V=v\text{ and }Q=q\\ z_{k}^{*}=c&\text{ if }V=\varnothing\text{ or }Q=\varnothing\end{cases}. \tag{8}\] ### Limitations **Visual Bias in VQA:** Visual bias is relatively recent, especially since the language has been known as the primary source of spurious question-answer correlations and has shadowed the research on vision bias Gat et al. (2020). Some recent works have studied the VQA systems' shortcuts directly from the vision's contextual information to the answer Gupta et al. (2022). This includes the learning biases of the colors and pixels or the context of the image and a lack of accurate attention to the important parts Gupta et al. (2022). We propose a method that mitigates both the language and vision biases in a counterfactual explain-away network that enhances the multimodality of the VQA models. **Memory's Influence:** Recent studies suggest that memory influences visual perception Lupyan et al. (2020), the problem that we illustrate with a famous example of Rubin's face Rubin (1915), as shown in Fig. 1, where the same image can be perceived differently. Rubin discussed this as memory bias Rubin (1915, 1921), which accumulates based on the experiences of individuals. The importance of visual perception may depend on the language Lupyan et al. (2020), location Zhang and Choi (2021), time Zhang and Choi (2021), and experiences of people Liu et al. (2021), which may lead to interpret an image in different ways. Although prior works on other datasets have tried to add concepts or new languages Liu et al. (2021), our work proposes a method to address the experience of the annotator as an unobserved confounder to reduce experience bias. ## 3 Possible Worlds VQA (PW-VQA) In this section, we explain the proposed method in four subsections. First, we simultaneously model the language and vision bias using a causal view. Then we model experience bias as unobserved confounders of the VQA systems. 
Third, a counterfactual method is proposed in the subsequent subsection to solve these problems. Finally, we propose a novel strategy to fuse multimodal vision and language information in VQA systems. Assume that a multimodal knowledge \(K\) contains fused information of question \(Q\) and vision \(V\) used in a VQA system. We propose the causal graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) with the set of nodes \(\mathcal{V}=\{Q,V,K,A,\hat{A}\}\), which is shown in Fig. 3a to model VQA systems. Inspired by the anticausal learning Janzing et al. (2012); Arjovsky et al. (2019), we model the answer \(A\) as a cause of both the images \(V\) and question \(Q\). Unlike previous works Niu et al. (2021); Cadene et al. (2019); Niu and Zhang (2021), we distinguish between the ground-truth answer \(A\) for the training of the VQA model and the estimated answer \(\hat{A}\) when the model is used in practice (test). Therefore, as shown in Fig. 3b, \(Q\) and \(V\) have a causal effect on \(K\) and are also a child of the answer \(A\). **Collider Confounder in Vision and Language:** The relationship \(Q\to A\) creates a spurious correlation between the question \(Q\) directly to the answer \(A\). Therefore, the \(V\to K\to A\) information is ignored. Contrarily the VQA models may shortcut visual information to answer \(V\to K\to A\) rather than multimodal knowledge Gupta et al. (2022). By looking at the subgraph shown in Fig. 3c, the explain-away network, or collider bias network simultaneously can model vision and language bias. The relationship \(Q\rightarrow\hat{A}\gets V\) is a collider, a primitive graph structure, _aka_ explain-away network. Consequently, having a strong connection \(Q\rightarrow\hat{A}\) removes the dependency of the \(\hat{A}\) on \(V\). Noteworthy that there are useful information and harmful biases in both vision and language. Our explain-away method aims to remove biases but keep good information. Therefore we introduce the collider of \(Q\rightarrow\hat{A}\gets V\) as a source of vision-language Figure 3: The proposed causal graph reformulates the VQA problem by stating that a) the answer \(A\) is a cause of the question \(Q\), and vision \(V\), and the final estimated answer \(\hat{A}\) is achieved by fusing \(V\) and \(Q\) information. b) The anticausal subgraph consists of the ground-truth answer \(A\) that is a cause of the \(V\) and \(Q\), which leads to multimodal knowledge \(K\). c) The collider \(Q\to K\gets V\) is an explain-away network that models the language-vision bias. bias in VQA models. **Experience as an Unobserved Confounder:** Based on the proposed causal graph \(\mathcal{G}\), a novel source of confounding is introduced related to the experience of the annotator that happens during the preparation of the datasets. As an example of experience bias, we have seen the visual illusion problem in Fig. 1. To be specific, selecting questions \(Q\) and answering \(A\) to the question based on an image \(V\) relies on the personal preferences of the annotator. Therefore, unobserved bias \(U\) depends on the personal preferences of the annotator. The proposed causal graph for the VQA models with unobserved confounder is shown in Fig. 5. Consequently, by looking into different paths that are parents or ancestors of \(\hat{A}\), they can be listed as \(U\to A\to Q\to K\rightarrow\hat{A}\), \(U\to A\to Q\to K\rightarrow\hat{A}\), \(U\to A\to V\rightarrow\hat{A}\), and \(U\to A\to V\to K\rightarrow\hat{A}\). 
The same can be listed for \(U\to Q\) paths; however, only \(K\rightarrow\hat{A}\) is of interest for the VQA models. **Explain-Away Fusion Strategy (EA):** We propose the following Explain-Away (EA) fusion function as follows. For parametrization, we use similar notations as Niu et al. (2021). Therefore, the score \(Z_{q,v,k}\) which is the feature space of the fusion \(K\), is parametrized as \(\mathcal{F}_{Q}\): \(Q\!\rightarrow\!\hat{A}\), and \(\mathcal{F}_{VQ}\): \((V,Q)\!\rightarrow\!K\!\rightarrow\!\hat{A}\). Based on \(Z_{q},Z_{v}\), and \(Z_{k}\), we define the fusion function as follows: (EA) \[h(Z_{q},Z_{v},Z_{k})=\frac{1}{\alpha+1}\log(Z_{\text{EA}}),\] (9) Figure 4: The multimodal knowledge \(K=k^{*}\) is counterfactual, while Q and V are facts (\(Q=q,V=v,K=k^{*}\)), then, the natural indirect effect (NDE) is subtracted from the total effect (TE) to obtain total indirect effect (TIE). The values \(V\!=\!v\) and \(Q\!=\!q\) are fact, and \(V\!=\!v^{*}\) and \(Q\!=\!q^{*}\), which leads to \(K\!=\!k^{*}\) are counterfactuals. Figure 5: The causal graph of the VQA where the question \(Q\) and the answer \(A\) are influenced by unobserved confounder \(U\). where \(Z_{\text{EA}}\) is defined as: \[\begin{split} Z_{\text{EA}}=&\sigma(Z_{q})^{\alpha} \sigma(Z_{v})^{\alpha+1}\sigma(Z_{k})^{\alpha+1}\\ &+\sigma(Z_{q})^{\alpha+1}\sigma(Z_{v})^{\alpha}\sigma(Z_{k})^{ \alpha+1}\\ &+\sigma(Z_{q})^{\alpha+1}\sigma(Z_{v})^{\alpha+1}\sigma(Z_{k})^ {\alpha},\end{split} \tag{10}\] and \(\alpha\geq 0\) is a free parameter that can be defined based on empirical analysis. **Unobserved Confounding Bias Reduction:** Since the model relies on the fused information \(K\) of \(V\) and \(Q\), and as shown in Fig. 4, the confounding bias of vision-language can be removed by maximizing the total indirect effect (TIE) by subtracting natural direct effect (NDE) of this confounding influence from its total effect (TE) Pearl (2001): \[\begin{split}\textit{TIE}&=\textit{TE}-\textit{ NDE}\\ &=h(Z_{q},Z_{v},Z_{k})-h(Z_{q},Z_{v},Z_{k^{*}}),\end{split} \tag{11}\] where \(K^{*}\) is a counterfactual of \(K\), as described in (8). As the influence of the unobserved confounding bias is subtracted in (11), it will block the influence of the explain-way of vision-language and experience biases altogether. By blocking the two paths \(V\to K\) and \(Q\to K\), all influences from unobserved confounding bias are blocked. **Training:** For the training of the network, we use vision-only branch \(\mathcal{L}_{VA}(v,a)\), question-only branch \(\mathcal{L}_{QA}(q,a)\), and multimodal fusion branch \(\mathcal{L}_{VQA}(v,q,a)\). As illustrated in Fig. 5, given a triplet \((v,q,a)\) where \(a\) is the ground-truth answer of image-question pair \((v,q)\), the branches are optimized by minimizing the cross-entropy losses over the scores \(Z_{q,v,k}\), \(Z_{q}\) and \(Z_{v}\): Niu et al. (2021): \[\mathcal{L}_{cls}=\mathcal{L}_{VQA}(v,q,a)+\mathcal{L}_{QA}(q,a)+\mathcal{L}_ {VA}(v,a), \tag{12}\] where \(\mathcal{L}_{VQA}\), \(\mathcal{L}_{QA}\) and \(\mathcal{L}_{VA}\) are over \(Z_{q,v,k}\), \(Z_{q}\) and \(Z_{v}\). A learnable parameter \(c\) in Eq. (7)-(8), which controls the sharpness of the distribution of \(Z_{q,v^{*},k^{*}}\) is also included, as the sharpness of NDE should be similar to that of TE Hinton et al. (2015); Niu et al. (2021). An improper \(c\) would lead to the domination of TIE in Eq. (11) by either TE or NDE. 
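To make the roles of the EA fusion \(h\), the counterfactual multimodal branch, and the learnable constant \(c\) concrete, here is a minimal PyTorch sketch of Eqs. (9)-(11). It is our own illustration rather than the released implementation; the toy logits and batch size are assumptions, and \(\alpha=1.5\) matches the value reported in Section 4.

```python
import torch

def ea_fusion(z_q, z_v, z_k, alpha=1.5):
    """Explain-Away (EA) fusion of question-only, vision-only and multimodal logits, Eqs. (9)-(10)."""
    sq, sv, sk = torch.sigmoid(z_q), torch.sigmoid(z_v), torch.sigmoid(z_k)
    z_ea = (sq**alpha * sv**(alpha + 1) * sk**(alpha + 1)
            + sq**(alpha + 1) * sv**alpha * sk**(alpha + 1)
            + sq**(alpha + 1) * sv**(alpha + 1) * sk**alpha)
    return torch.log(z_ea) / (alpha + 1)

# Toy logits: a batch of 2 examples over 4 candidate answers
z_q = torch.randn(2, 4)
z_v = torch.randn(2, 4)
z_k = torch.randn(2, 4)
c = torch.zeros(1, 4, requires_grad=True)     # learnable constant for the counterfactual branch, Eq. (8)

te  = ea_fusion(z_q, z_v, z_k)                # total effect with factual K = k
nde = ea_fusion(z_q, z_v, c.expand_as(z_k))   # counterfactual branch: Z_k replaced by c
tie = te - nde                                # debiased scores, Eq. (11)
print(tie.shape)
```

At inference, only the debiased scores \(\mathit{TIE}\) are used to rank the candidate answers.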
Thus, we use KL-divergence to estimate \(c\): \[\mathcal{L}_{kl}=\frac{1}{|\mathcal{A}|}\sum_{a\in\mathcal{A}}-p(a|q,v,k)\log p (a|q,v^{*},k^{*}), \tag{13}\] where \(p(a|q,v,k)\!=\!\text{softmax}(Z_{q,v,k})\) and \(p(a|q,v^{*},k^{*})\!=\!\text{softmax}(Z_{q,v^{*},k^{*}})\). Only \(c\) is updated when minimizing \(\mathcal{L}_{kl}\). The final loss is the combination of \(\mathcal{L}_{cls}\) and \(\mathcal{L}_{kl}\): \[\mathcal{L}_{final}=\sum_{(v,q,a)\in\mathcal{D}}\mathcal{L}_{cls}+\mathcal{L} _{kl} \tag{14}\] **Inference**. For the inference, we use the debiased causal effect for inference, which is implemented as: \[\begin{split}\textit{TIE}=\textit{TE}-\textit{NDE}& =Z_{q,v,k}-Z_{q,v^{*},k^{*}}\\ &=h(z_{q},z_{v},z_{k})-h(z_{q},z_{v}^{*},z_{k}^{*}).\end{split} \tag{15}\] ## 4 Experiments The model can be trained on a computer with a single GeForce GTX 1080 GPU. We used GTX 1080 Ti and RTX A6000 GPUs in our simulations. We used the VQA-CP v2 dataset, which has about 438K questions on the train set and 220K questions on the test set, with corresponding question-answer pairs in the English language Agrawal et al. (2018). We applied our VQA model on three backbones, namely Stacked Attention Network (SAN) Yang et al. (2016), Bottom-up and Top-down Attention (UpDn) Anderson et al. (2018), and a simplified MUREL Cadene et al. (2019b) \begin{table} \begin{tabular}{|l c c|c c c|c c c|c|} \hline \multirow{2}{*}{Test set} & \multicolumn{5}{c|}{VQA-CP v2 test} & \multicolumn{5}{c|}{VQA v2 test} \\ \cline{2-10} Methods & Base & All & Y/N & Num. & Other & All & Y/N & Num. & Other \\ \hline GVQA Agrawal et al. (2018) & - & 31.30 & 57.99 & 13.68 & 22.14 & 42.24 & 72.03 & 31.17 & 34.65 \\ SAN Yang et al. (2016) & - & 24.96 & 38.35 & 11.14 & 21.74 & 52.41 & 70.06 & 39.28 & 47.84 \\ UpDn Anderson et al. (2018) & - & 39.74 & 42.27 & 11.93 & 46.05 & 63.84 & 8.18 & 42.14 & **55.66** \\ S-MRL Cadene et al. (2019a) & - & 38.46 & 42.85 & 12.81 & 43.20 & 63.10 & - & - & - \\ \hline \multicolumn{10}{l}{_Methods based on modifying language module or using language prior:_} \\ \hline DLR Jing et al. (2020) & UpDn & 48.87 & 70.99 & 18.72 & 45.57 & 57.96 & 76.82 & 39.33 & 48.54 \\ VGQE Kv and Mittal (2020) & UpDn & 48.75 & - & - & - & **64.04** & - & - & - \\ VGQE Kv and Mittal (2020) & S-MRL & 50.11 & 66.35 & 27.08 & 46.77 & 63.18 & - & - & - \\ AdvReg. Ramakrishnan et al. (2018) & UpDn & 41.17 & 65.49 & 15.48 & 35.48 & 62.75 & 79.84 & 42.35 & 55.16 \\ RUBi Cadene et al. (2019a) & UpDn & 44.23 & 67.05 & 17.48 & 39.61 & - & - & - \\ RUBi Cadene et al. (2019a) & S-MRL & 47.11 & 68.65 & 20.28 & 43.18 & 61.16 & - & - & - \\ LM Clark et al. (2019) & UpDn & 48.78 & 72.78 & 14.61 & 45.58 & 63.26 & 81.16 & 42.22 & 55.22 \\ LM-H Clark et al. (2019) & UpDn & 52.01 & 72.58 & 31.12 & 46.97 & 56.35 & 65.06 & 37.63 & 54.69 \\ CF-VQA (SUM) Niu et al. (2021) & UpDn & 53.55 & **91.15** & 13.03 & 44.97 & 63.54 & **82.51** & **43.96** & 54.30 \\ CF-VQA (SUM) Niu et al. (2021) & S-MRL & 55.05 & 90.61 & 21.50 & 45.61 & 60.94 & 81.13 & 43.86 & 50.11 \\ GGE-DQ-tog Han et al. (2021) & UpDn & 57.32 & 87.04 & 27.75 & **49.59** & 59.11 & 73.27 & 39.99 & 54.39 \\ \hline \multicolumn{10}{l}{Methods based on reducing visual bias or enhancing visual attention/grounding:} \\ \hline AtAlign Selvaraju et al. (2019) & UpDn & 39.37 & 43.02 & 11.89 & 45.00 & 63.24 & 80.99 & 42.55 & 55.22 \\ HINT Selvaraju et al. 
(2019) & UpDn & 46.73 & 67.27 & 10.61 & 45.88 & 63.38 & 81.18 & 42.99 & 55.56 \\ SCR Wu and Mooney (2019) & UpDn & 49.45 & 72.36 & 10.93 & 48.02 & 62.20 & 78.80 & 41.60 & 54.50 \\ \hline \multicolumn{10}{l}{_Methods mitigation both language and vision:_} \\ \hline LIM+Fisher [Gat et al., 2020] & UpDn & 54.55 & 74.03 & 49.16 & 45.82 & - & - & - & - \\ PW-VQA (ours) & UpDn & 59.06 & 88.26 & 52.89 & 45.45 & 62.63 & 81.80 & 43.90 & 53.01 \\ PW-VQA (ours) & S-MRL & **60.26** & 88.09 & **59.13** & 45.99 & 61.25 & 80.32 & 43.17 & 51.53 \\ \hline \multicolumn{10}{l}{_Methods that synthesize data to augment and balance running splits:_} \\ \hline CVL Abbasnejad et al. (2020) & UpDn & 42.12 & 45.72 & 12.45 & 48.34 & - & - & - & - \\ Unshuffling Teney et al. (2020) & UpDn & 42.39 & 47.72 & 14.43 & 47.24 & 68.08 & 78.32 & 42.16 & 52.81 \\ Randling Teney et al. (2020) & UpDn & 55.37 & 83.89 & 41.60 & 44.20 & 57.24 & 76.53 & 33.87 & 48.57 \\ SSL Zhu et al. (2021) & UpDn & 57.59 & 86.53 & 29.87 & 50.03 & 63.73 & - & - & - \\ CSS Chen et al. (2020) & UpDn & 58.95 & 84.37 & 49.42 & 48.21 & 59.91 & 73.25 & 39.77 & 55.11 \\ CSS+CL Liang et al. (2020) & UpDn & 59.18 & 86.99 & 49.89 & 47.16 & 57.29 & 67.27 & 38.40 & 54.71 \\ Mutant Gokhale et al. (2020) & UpDn & 61.72 & 88.90 & 49.68 & 50.78 & 62.56 & 82.07 & 42.52 & 53.28 \\ LIM+CCD Kolling et al. (2022) & UpDn & 59.92 & 83.23 & 52.59 & 49.71 & 57.38 & 69.06 & 35.74 & 54.25 \\ \hline \end{tabular} \end{table} Table 1: The table lists the accuracy values for the most recent studies, especially on both VQA-CP v2 and VQA v2 datasets. We show the best-performing method with bold and the second-best-performing method with an underline. We use a dash for the papers that miss reporting performance values on datasets. Figure 6: The plots show the backbones using our proposed causal framework (PW-VQA) and fusion strategy (EA). The results are consistently improving for all three different backbones, namely, SAN Yang et al. (2016), S-MRL Cadene et al. (2019a), and UpDn Anderson et al. (2018). (S-MRL) Cadene et al. (2019). Training of our model on VQA-CP v2 dataset Agrawal et al. (2018) with SAN Yang et al. (2016) takes about 8 hours, and with S-MRL Cadene et al. (2019) and UpDn Anderson et al. (2018) on average takes about 3 hours. The validation on the test split of the VQA-CP v2 dataset takes about 10 minutes. We used accuracy as the evaluation metric. We manually searched the hyperparameter, and we reported those which have the best results in the ablation study. We used a batch size of 256 and 22 epochs for all runs. Increasing the number of epochs does not improve the results since the model converges to a stable result within 22 epochs. We observed that the model does not converge with \(\alpha<1\) values, and therefore we bound \(\alpha\geq 1\). We tried \(\alpha\) values between 1 to 2 for 11 times and based on empirical study, \(\alpha=1.5\) achieves the best-performing result. **Quantitative results:** To compare our method with the available literature on the benchmark datasets, we list the performance values in Table 1. Then, to compare reasonably with the existing methods, we divide them into four categories: 1) Methods like DLR Jing et al. (2020), VGQE Kv and Mittal (2020)), AdvReg Ramakrishnan et al. (2018), RUBi Cadene et al. (2019), LM Clark et al. (2019), LM+H Clark et al. (2019), CF-VQANiu et al. (2021), GGE-DQ-tog Han et al. (2021) modify language modules or use language before suppress, control, or mask language shortcuts. 
However, these methods only consider spurious language correlations and neglect vision in their schema. 2) Some approaches, such as AttAlign Selvaraju et al. (2019), HINT Selvaraju et al. (2019), SCR Wu and Mooney (2019) mitigate visual biases by loosening contextual ties to the answer or improving visual grounding and attention via human feedback, de-coupling shortcuts that couple vision to answer. 3) Other approaches like LMH+Fisher Gat et al. (2020) mitigate both language and vision bias together, attempting to balance two modalities of vision and language for robust multimodal inference. Our proposed method here is in this class. 4) Methods such as CVL Abbasnejad et al. (2020), Unshuffling Teney et al. (2020), RandImg Teney et al. (2020), SSL Zhu et al. (2021), Mutant Gokhale et al. (2020), CSS Chen et al. (2020), CSS+CL Liang et al. (2020), LMH+ECD Kolling et al. (2022) synthesize samples and augment data to balance training and test sets; however, it violates the main idea of the VQA-CP v2 dataset. Therefore, it is not fair to compare them with our method; however, we include them in our results for inclusiveness. As listed in Table 1, our method outperforms most of the competing methods on the benchmark datasets, especially in numerical questions, which was introduced as an open problem recently Niu et al. (2021). Moreover, the results indicate that our method improves the accuracy of both the S-MRL and UpDn backbones, demonstrating that they are generalizable to both architectures. Noteworthy to mention that there are higher accuracy values for methods that augment data. In contrast, these methods are not comparable to ours as they do not obey the main idea of the VQA-CP v2 dataset, conducting unbiased inference under biased training. Simulation results of our proposed EA fusion strategy and the PW-VQA are shown in Fig. 6. Both the EA fusion strategy and PW-VQA framework increase the accuracy of all question types. Particularly, the accuracy of numerical results with SAN as backbone increases from 12.4 to 37.6 by adding EA fusion and further increases to 57.7 by adding the PW-VQA framework. Furthermore, the improvements are consistent for all backbones, including SAN, S-MRL, and UpDn. More improvements can be achieved using large pretrained language-vision models. We used generative BLIP decoder Li et al. (2022) and CLIP encoders Radford et al. (2021) to achieve more improvements.2 Footnote 2: For more explanations see the appendix. **Qualitative results** To qualitatively show the results of our method vs CF-VQA and regular VQA, we did simulations on the VQA-CP v2 dataset which, some examples have been shown in Fig. 7. As seen in the pictures of Fig. 7, our method is less biased by either language or vision bias. For example, When asked, "Is she eating?" for a picture showing a cat and a woman, our method can correctly answer "No" while the regular VQA cannot. Interestingly, the CF-VQA is clearly biased by the salient object in the picture and has extremely high confidence in answering this question with "cat" which is obviously ridiculous. Another example is both regular VQA and CF-VQA cannot use the key information from the question, which results in answering the question with biased inference due to the foreground animals in the picture. More examples are in the appendix. Five distributions of numerical answers for train, test, regular VQA, CF-VQA, and our model are shown in Fig. 8. We can clearly see in Fig. 
8 that CF-VQA captured many biases in question with "numerical" answers from the training Figure 8: The distributions of the train, test sets, the previous methods, namely regular VQA, CF-VQA Niu et al. (2021), and our proposed method are shown. Note that there is a subtle difference between “how many” questions versus questions with “numerical” answers, which is related to the difference between recognizing “numerical” rather than counting “how many”. Figure 7: Qualitative comparison on VQA-CP v2 test split on regular VQA, CF-VQA Niu et al. (2021) and our method are shown as bar plots, where the red bars with a sparse pattern are ground-truth. Values on the bars are probabilities out of 100% to have an answer as correct. dataset. At the same time, our model reduces those biases and obtains answer distribution benefits from removing language and vision biases simultaneously during de-confounding causal inference and closer to the test dataset. ## 5 Related Work VQA-CP dataset has been proposed to benchmark the generalizability of VQA models under changing prior conditions Agrawal et al. (2018). Various methods Niu et al. (2021), Han et al. (2021), Gat et al. (2020), Kv and Mittal (2020), Abbasnejad et al. (2020), Kolling et al. (2022), Gupta et al. (2022), Shrestha et al. (2022) have been proposed to solve VQA-CP, which can be divided into four main categories. 1) Methods that modify language module or use language prior to suppressing or controlling language shortcuts by separating question-only branches or capturing language prior to subtracting or masking in the model Ramakrishnan et al. (2018), Cadene et al. (2019), Clark et al. (2019). 2) Methods that mitigate bias through reducing visual bias or enhancing visual attention/grounding use human input to increase the attention to visual information or reduce contextual biases that shortcut vision to answer Selvaraju et al. (2019), Wu and Mooney (2019), Das et al. (2017). 3) Mitigation of both language and vision bias together that tries to balance two modalities of vision and language for robust multimodal inference. And 4) Methods that synthesize data to augment and balance training splits, that use generative models to synthesize and augment visual and linguistic data to balance the distribution of training splits Chen et al. (2020), Abbasnejad et al. (2020), Teney et al. (2020), Zhu et al. (2021), Kolling et al. (2022). The causal inference has inspired several studies in computer vision, including visual explanations Goyal et al. (2019), Wang and Vasconcelos (2020), Yi et al. (2019), scene graph generation Tang et al. (2020), image recognition Tang et al. (2020), zero-shot and few-shot learning Yue et al. (2020, 2021) incremental learning Hu et al. (2021), representation learning Wang et al. (2020), Zhang et al. (2020), semantic segmentation Zhang et al. (2020), and vision-language tasks Chen et al. (2020), Teney et al. (2020), Yang et al. (2021), Fu et al. (2020), Yang et al. (2021). Especially, counterfactual learning has been exploited in recent VQA studies Chen et al. (2020), Teney et al. (2020), Abbasnejad et al. (2020). ## 6 Conclusion VQA systems suffer from leveraging information only from one modality, especially the language modality from the given question. Many methods have been proposed to address this kind of problem. However, the previous method didn't consider that biases that come from each modality are highly confounded through the annotation process. 
VQA systems that ignore this effect cannot avoid increasing the bias learned from one modality while trying to reduce the bias from the other. We formulate the Explain-Away effect that causes the bias of both the vision and language modalities within a novel causal framework for VQA systems. This framework can be implemented on different VQA backbones and improves their generalizability significantly. The proposed framework successfully helps VQA systems reduce language bias without increasing vision bias. Experimental results show that our proposed method achieves state-of-the-art performance on the debiasing-oriented dataset VQA-CP, in particular doubling the accuracy on numerical questions relative to the previous best model. ## Acknowledgments This work has been partially supported by the National Science Foundation (NSF) under Grant 1909912 and the Defense Advanced Research Projects Agency (DARPA) under Contract HR00112220003. The content of the information does not necessarily reflect the position of the Government, and no official endorsement should be inferred.
2309.06491
When the moduli space is an orbifold: Spontaneous breaking of continuous non-invertible symmetries
We investigate theories of Nambu-Goldstone bosons where the spontaneously broken continuous symmetry is non-invertible. In such theories, the vacua generically parameterize an orbifold. We study in detail the simplest example of a single free scalar with shift symmetry, modded by reflection symmetry. At singular points of the vacuum manifold, we show that the spectrum of NG excitations is reduced, in particular there are no single-particle states. At the smooth points, on the other hand, single NG modes are present. We show that this is a consequence of the fact that at those points one can construct invertible operators implementing the continuous symmetry on the Hilbert space.
Jeremias Aguilera Damia, Riccardo Argurio, Soumyadeep Chaudhuri
2023-09-12T18:05:03Z
http://arxiv.org/abs/2309.06491v2
# When the moduli space is an orbifold: Spontaneous breaking of continuous non-invertible symmetries ###### Abstract We investigate theories of Nambu-Goldstone bosons where the spontaneously broken continuous symmetry is non-invertible. In such theories, the vacua generically parameterize an orbifold. We study in detail the simplest example of a single free scalar with shift symmetry, modded by reflection symmetry. At singular points of the vacuum manifold, we show that the spectrum of NG excitations is reduced, in particular there are no single-particle states. At the smooth points, on the other hand, single NG modes are present. We show that this is a consequence of the fact that at those points one can construct invertible operators implementing the continuous symmetry on the Hilbert space. + Footnote †: institutetext: \({}^{1}\)Department of Physics, University of California, Berkeley, CA 94720, USA ## 1 Introduction and outlook The spontaneous breaking of a (0-form) continuous global symmetry has profound consequences in quantum field theory (QFT). Most notably, it leads to an effective low-energy description in terms of massless modes, the Nambu-Goldstone (NG) modes, which are directly related to the (Lie algebra) generators of the broken symmetries. Such a low-energy theory can then be formulated as a \(\sigma\)-model on a group (or more generally a coset) manifold. In this paper we investigate what changes when the continuous symmetry that is spontaneously broken does not form a group, but rather constitutes a fusion category of non-invertible symmetries. Non-invertible symmetries have been at the center of recent interest, see [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33] for a partial list of references. While the spontaneous breaking of non-invertible 0-form symmetries of finite order has been considered for instance in [15; 32; 21; 33], the same question for non-invertible 0-form symmetries of the continuous kind has not yet been explored to our knowledge.1 Footnote 1: We stress that our set-up is different from the one in [31], where the broken symmetry is originally of the “rational” kind considered first in [22; 23]. The latter is enough to ensure the masslessness of the axion/NG boson. In [31] (see also [30]), a construction is devised to see the axion as an ordinary NG boson parameterising an \(S^{1}\), i.e. the group \(U(1)\). This property will be the main distinction from our set-up. Continuous non-invertible symmetries are most easily obtained in the following way. Let us start by considering theories with both a global continuous invertible symmetry and a discrete symmetry. Importantly, the latter acts non-trivially on the generators of the continuous symmetry. A paradigmatic example is the one of charge conjugation in presence of a continuous global symmetry. The non-trivial twist is to then gauge the discrete symmetry acting on the theory. This gauging does not destroy the continuous symmetry, but makes it non-invertible. This is known to happen in scalar models in two dimensions, _e.g._ the \(c=1\) orbifold CFT, and in \(4d\)\(O(2)\) gauge theory (see for instance [11] and [10; 20; 24], respectively). In both of the cases above, there is no degeneracy of vacua, because in \(2d\) the Coleman theorem prevents the breaking of the symmetry, while in \(4d\) the continuous symmetry is a higher-form symmetry [34]. 
In order to explore theories with continuous vacuum degeneracies, we generalize the orbifold construction to scalars in \(d>2\). In fact, one way to generate such a theory is to put the \(O(2)\) gauge theory discussed in [24], on the manifold \(\mathbb{R}^{d-1,1}\times S^{1}\). In such a set-up the scalar model that we will discuss emerges as a decoupled effective theory in the deep IR through dimensional reduction.2 Our aim is to study the nature of the NG modes arising in this model from the breaking of a continuous non-invertible 0-form symmetry. Footnote 2: This is analogous to how a free compactified boson emerges in the deep IR when the Maxwell theory is defined on a manifold with a compact direction. In this work we start with a model of a free scalar \(\phi\) with a \(U(1)\) shift symmetry. This model has a \(\mathbb{Z}_{2}\) reflection symmetry acting as \(\phi\to-\phi\) which we proceed to gauge. The \(\mathbb{Z}_{2}\)-gauging leads to the shift symmetry becoming non-invertible.3 A distinctive feature of this model is that it presents a moduli space of vacua \(\mathcal{M}\) which includes an \(S^{1}/\mathbb{Z}_{2}\) orbifold. The points in this orbifold are parametrized by a coordinate \(\theta\in[0,\pi]\). In ordinary situations, when a modulus originates from the breaking of an invertible symmetry, the Hilbert spaces of NG modes \(\mathcal{H}(\theta)\) are isomorphic at every point in the vacuum manifold. This stems from the fact that there exists a bijective map \(\mathcal{U}:\mathcal{H}(\theta)\to\mathcal{H}(\theta^{\prime})\), generated by the broken symmetry, which is well-defined for every pair of points in \(\mathcal{M}\). Indeed, \(\mathcal{M}\) is a homogeneous manifold in this case. This picture is drastically modified when the symmetry is non-invertible. The vacuum manifold \(\mathcal{M}\) can now have singular points which in our case correspond to \(\theta=0\) and \(\theta=\pi\). We nevertheless show that there is a non-vanishing order parameter at any point on \(\mathcal{M}\) which means that the non-invertible global symmetry is spontaneously broken everywhere. It turns out however that the Hilbert spaces built upon the singular points are qualitatively distinct from those built upon the other points in the orbifold. Footnote 3: In fact, the non-invertibility is manifest already in the action on the local charged operators. This is in contrast to many other instances of non-invertible (0-form) symmetries, which usually act invertibly on the local operators charged under them, but manifest their non-invertibility when higher dimensional operators are involved, see e.g. [14; 15; 22; 23]. Let us be more specific. In the presence of global symmetries, we may further refine the definition of the Hilbert space. On general grounds, \(n\)-dimensional topological defects associated to a \((d-n-1)\)-form global symmetry may host non-trivial \((n-1)\)-dimensional (disorder) operators at their boundaries when defined on an open surface. Operators of this kind are usually associated with twisted sectors in the theory. In particular, when \(n=1\) and the global symmetry is a \((d-2)\)-form \(\mathbb{Z}_{2}\) symmetry, such operators implement a mapping between two distinct superselection sectors in the Hilbert space of the theory: \[\mathcal{H}=\mathcal{H}^{(u)}\oplus\mathcal{H}^{(t)}\,, \tag{1}\] where we denote \(\mathcal{H}^{(u)}\) the untwisted sector and \(\mathcal{H}^{(t)}\) the twisted sector. 
In our model, there is a \((d-2)\)-form \(\mathbb{Z}_{2}\) quantum symmetry that is generated by topological Wilson lines of the \(\mathbb{Z}_{2}\) gauge field. Hence, the Hilbert space of the theory is split as above. The spontaneous breaking of the non-invertible symmetry actually leads to two sets of degenerate vacua in these two superselection sectors. The vacua in the untwisted sector take the form of the \(S^{1}/\mathbb{Z}_{2}\) orbifold mentioned earlier. However, only the regular points parametrized by \(\theta\in(0,\pi)\) have their counterparts in the twisted sector. We show that one can define a set of invertible operators acting at any value of \(\theta\) lying in the open set \((0,\pi)\). While these operators are not topological, in the sense that there is no sensible (time-like) defect associated to them, they are conserved over time. The action of these operators implements translations between the regular points of the orbifold, while it is trivial on the singular points. Furthermore, these operators define isomorphisms between the Hilbert spaces built upon the different regular points in the orbifold. Making use of this structure, we establish the existence of single particle states corresponding to the propagating NG modes at any regular point. In fact, one can construct a state with arbitrary number of particles on these vacua. However, at the fixed points of the orbifold, i.e. \(\theta=0\) or \(\theta=\pi\), the spectrum is drastically reduced as all odd-particle states are in the twisted sector. In particular, the single particle states are no longer in the untwisted spectrum. This means that the symmetry breaking produces only massless NG bosons in pairs at those specific points. The paper is organized as follows. In section 2 we discuss in detail our simple model which involves a single compact scalar with shift symmetry and gauged reflection symmetry, i.e. the simplest orbifold, however in spacetime dimension \(d>2\). All the notions that we want to highlight are present in this model: non-invertibility, vacuum degeneracy, vacuum-dependent spectrum. In section 3 we conclude with some comments on generalizations and further investigations. ## 2 \(\mathbb{Z}_{2}\)-gauged theory of a free compact scalar ### Review of the free theory of a compact scalar Let us consider the theory of a free compact scalar, \(\phi\sim\phi+2\pi\) in \(d\)-dimensional Minkowski spacetime. The action of this model is given by \[\mathcal{S}=\frac{1}{2}\ g\ \int d\phi\wedge\star d\phi\,, \tag{1}\] where \(g\) is a parameter with mass dimension \(d-2\). This model has a \(U(1)\) global symmetry realized by shifts \(\phi\to\phi-\alpha\) where \(\alpha\) is a constant respecting the identification \(\alpha\sim\alpha+2\pi\). The associated Noether current \[j=-g\,d\phi \tag{2}\] is conserved due to the equation of motion \(d\star d\phi=0\). The symmetry is implemented by \((d-1)\)-dimensional topological operators \[\mathcal{U}_{\alpha}(\Sigma)=e^{i\alpha\mathcal{Q}(\Sigma)}=e^{i\alpha\int_{ \Sigma}\star j}, \tag{3}\] where \(\Sigma\) can be taken to be either a closed submanifold when considering action on operator insertions, or a space-like surface extending to infinity when considering action on the physical states. With a slight abuse of notation we will use the same symbols for both the cases. The meaning should be clear to the reader from the context. Local operators charged under this symmetry are accounted for by properly quantized vertex operators \(e^{in\phi(x)}\), \(n\in\mathbb{Z}\). 
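For instance, since the symmetry acts as \(\phi\to\phi-\alpha\), a vertex operator picks up a phase controlled by its linking with the closed surface \(\Sigma\), \[\mathcal{U}_{\alpha}(\Sigma)\,e^{in\phi(x)}=e^{-in\alpha\,\text{Lk}(\Sigma,x)}\,e^{in\phi(x)}\,,\] so that \(e^{in\phi(x)}\) carries charge \(n\) (up to the sign convention) under the shift symmetry. This is the invertible counterpart of the Ward identity that we will encounter below after gauging the reflection symmetry.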
The standard quantization of the free field \(\phi\) in momentum space reads \[\phi(t,\mathbf{x})=\overline{\phi}+\lim_{V\to\infty}\frac{\overline{\pi}}{gV}t +\frac{1}{\sqrt{2g}}\int\frac{d^{d-1}k}{(2\pi)^{d-1}}\frac{1}{\sqrt{|\mathbf{k }|}}\Big{[}a_{\mathbf{k}}e^{-i|\mathbf{k}|t+i\mathbf{k}.\mathbf{x}}+a_{\mathbf{ k}}^{\dagger}e^{i|\mathbf{k}|t-i\mathbf{k}.\mathbf{x}}\Big{]}\, \tag{4}\] with the usual commutation rules, _i.e._ \[[\overline{\phi},\overline{\pi}]=i\,\ \ \ \ [a_{\mathbf{k}},a_{\mathbf{k}^{ \prime}}^{\dagger}]=(2\pi)^{d-1}\delta^{(d-1)}(\mathbf{k}-\mathbf{k}^{\prime} ). \tag{5}\] Here \(\overline{\phi}\) is the zero-momentum mode which satisfies the identification \(\overline{\phi}\sim\overline{\phi}+2\pi\). \(\overline{\pi}\) is the momentum conjugate to \(\overline{\phi}\). The term involving \(\overline{\pi}\) vanishes in the infinite volume (\(V\)) limit. Nevertheless, we indicate it since, even in this limit, \(\overline{\pi}\) appears in the charge \(\mathcal{Q}\) that generates the shift symmetry given in (3).4 This shift symmetry is spontaneously broken resulting in a continuous set of degenerate vacua parametrized by the eigenvalues of \(e^{i\hat{\phi}}\), namely \(e^{i\hat{\phi}}|\theta\rangle=e^{i\theta}|\theta\rangle\) with \(\theta\sim\theta+2\pi\). Therefore, the moduli space takes the form \(\mathcal{M}_{0}=S^{1}\). Throughout this work, we will take the spacetime dimension to be \(d>2\), so that we are actually dealing with the low energy effective theory of an NG boson. Indeed, in \(d=2\) the vacuum degeneracy is lifted due to the Coleman-Mermin-Wagner theorem, while we would like to focus precisely on the properties of the moduli space of vacua. Shifts within \(\mathcal{M}_{0}\) are generated by the topological operators (3) acting as5 Footnote 4: When the surface \(\Sigma\) in (3) is an infinite space-like surface, \(\mathcal{Q}=-\overline{\pi}\). Footnote 5: For sake of notational simplicity, we are omitting the fixed time slice \(\Sigma\) in these expressions. \[\mathcal{U}_{\alpha}|\theta\rangle=|\theta+\alpha\rangle\ \ \,\ \ \ \mathcal{U}_{\alpha}a_{\mathbf{k}}d_{\alpha}^{\dagger}=a_{ \mathbf{k}}\ \ \,\ \ \ \mathcal{U}_{\alpha}a_{\mathbf{k}}^{\dagger}d_{\alpha}^{\dagger}=a_{ \mathbf{k}}^{\dagger}. \tag{6}\] In addition to the above-mentioned \(U(1)\) symmetry, there is also a \(\mathbb{Z}_{2}\) 0-form symmetry which is the reflection \(\phi\to-\phi\). Hence, it acts on the conserved current as \(j\to-j\). This enhances the symmetry group to \(U(1)\rtimes\mathbb{Z}_{2}\cong O(2)\). There is also a \(U(1)\)\((d-2)\)-form symmetry associated to the topologically conserved current \(\hat{j}=(2\pi)^{-1}\star d\phi\). Objects charged under this symmetry are properly quantized holonomies \(e^{in\int\hat{\phi}}\) of the dual \((d-2)\)-form field, defined by \(d\hat{\phi}=2\pi g\star d\phi\), over closed \((d-2)\)-dimensional manifolds. Note that the conserved current \(\hat{j}\) is also reversed by the action of the \(\mathbb{Z}_{2}\) reflection symmetry. However, being a continuous \((d-2)\)-form symmetry, it can never be spontaneously broken [34]. Hence, it does not play any substantial role in our analysis. Let us notice that the \(\mathbb{Z}_{2}\) reflection symmetry is not realized in the same way for all points in \(\mathcal{M}_{0}\). In fact, this symmetry is preserved only by the vacua \(|0\rangle\) and \(|\pi\rangle\), whereas it is broken for the rest of the values of \(\theta\). 
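Both the first relation in (6) and the last statement can be made explicit at the level of the zero modes. On a fixed-time slice \(\mathcal{Q}=-\overline{\pi}\) (see footnote 4), so that \(\mathcal{U}_{\alpha}=e^{-i\alpha\overline{\pi}}\), and the commutator \([\overline{\phi},\overline{\pi}]=i\) gives \[e^{i\overline{\phi}}\,\mathcal{U}_{\alpha}|\theta\rangle=\mathcal{U}_{\alpha}\,e^{i(\overline{\phi}+\alpha)}|\theta\rangle=e^{i(\theta+\alpha)}\,\mathcal{U}_{\alpha}|\theta\rangle\,,\] identifying \(\mathcal{U}_{\alpha}|\theta\rangle\) with \(|\theta+\alpha\rangle\). The reflection instead acts on the zero mode as \(\overline{\phi}\to-\overline{\phi}\), i.e. \(|\theta\rangle\mapsto|-\theta\rangle\), which indeed leaves invariant only \(\theta=0\) and \(\theta=\pi\) (recalling \(\theta\sim\theta+2\pi\)).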
Note that since \(O(2)\cong U(1)\rtimes\mathbb{Z}_{2}\) (and not a direct product), when it is spontaneously broken the moduli space is still just isomorphic to \(S^{1}\cong U(1)\), but with the \(\mathbb{Z}_{2}\) acting non-trivially on all the points except \(\theta=0,\pi\). The Hilbert spaces \(\mathcal{H}_{0}(\theta)\) associated to the NG bosons are obtained by acting with creation operators \(\{a_{\mathbf{k}}^{\dagger}\}\) on the respective vacua. The vector spaces \(\mathcal{H}_{0}(\theta)\) that are obtained in this way from distinct vacua are mutually orthogonal, though completely isomorphic. The latter statement is a consequence of the fact that different vacua \(|\theta\rangle\) and \(|\theta^{\prime}\rangle\) are related by a shift symmetry transformation \({\cal U}_{\alpha}\) with \(\alpha=\theta^{\prime}-\theta\). From its action (6) it is clear that it implements a bijection \({\cal U}_{\alpha}\,:\,{\cal H}_{0}(\theta)\rightarrow{\cal H}_{0}(\theta^{ \prime})\). The full Hilbert space of the theory is the direct sum of all these vector spaces. On general grounds, when a given theory possesses global symmetries, the spectrum of operators decomposes into two classes. On the one hand, the untwisted sector comprises all the _genuine_ operators in the spectrum. Genuine local operators are the ones that can be defined locally without any need to be attached to topological lines. More generally, an \(n\)-dimensional operator is called genuine when it does not live on the boundary of any \((n+1)\)-dimensional open topological defect. On the contrary, the twisted sector is formed by all the _non-genuine_ operators, that is the ones that are well defined only as boundaries of topological defects.6 Footnote 6: This notion becomes more transparent in two spacetime dimensions, where the state operator correspondence induces a similar grading on the Hilbert space. More precisely, non-genuine operators defined at the endpoints of topological lines are in one-to-one correspondence with states obtained by quantization with twisted boundary conditions. This defines the so-called defect Hilbert space. In higher dimensions, this construction becomes less precise, mainly due to the fact that topological defects may come in various dimensionalities. Borrowing the intuition from the two-dimensional case, we will associate a state in a twisted Hilbert space to operators attached to topological line defects. The latter necessarily correspond to generators of a \((d-2)\)-form symmetry. On the contrary, extended non-genuine operators will not be interpreted in terms of states. Let us pause here to comment about the different classes of operators arising in this theory. First, genuine local operators are accounted for by properly quantized vertex operators \(e^{in\phi(x)}\), \(n\in\mathbb{Z}\), together with arbitrary products of derivatives of \(\phi(x)\). As explained above, there are also genuine \((d-2)\)-dimensional 'vortices' described by properly quantized holonomies of the dual field. These exhaust the untwisted sector in the ungauged theory. Let us now list the non-genuine operators contained in the twisted sector. Associated to \((d-1)\)-dimensional defects of the \(U(1)\) 0-form shift symmetry there are twisted sectors encompassing improperly quantized vortices. There is also a discrete \(\mathbb{Z}_{2}\)'reflection vortex' living on the boundaries of open \(\mathbb{Z}_{2}\) reflection symmetry defects. 
When going around either of these vortex-type operators, the field \(\phi\) undergoes \(\phi\rightarrow\phi-\alpha\) (\(\alpha\in[0,2\pi)\)) or \(\phi\rightarrow-\phi\) respectively. In \(d>2\) non-compact dimensions, these extended operators do not map to states in a twisted Hilbert space (see footnote 6). The latter actually consists of states created by improperly quantized vertex operators \(e^{i\nu\phi(x)}\) with \(\nu\notin\mathbb{Z}\), that need to be attached to a topological line associated to the \((d-2)\)-form \(U(1)\) symmetry with current \(\hat{j}\). These operators may become genuine by gauging discrete subgroups of the \(U(1)\) shift symmetry but we will ignore them throughout this paper. ### Operators and states in the \(\mathbb{Z}_{2}\)-gauged theory Let us now proceed to the theory obtained by gauging the \(\mathbb{Z}_{2}\) reflection symmetry. Before entering into a detailed discussion of this theory, let us spell out the procedure of \(\mathbb{Z}_{2}\)-gauging to avoid any confusion. One way to implement the \(\mathbb{Z}_{2}\)-gauging is to introduce a \(U(1)\) gauge field, restrict its holonomies to the \(\mathbb{Z}_{2}\) subgroup via a BF action [34; 35], and inally couple the scalar field to this gauge field [36]. We follow a completely equivalent approach where we divide the manifold arbitrarily into simply connected patches. Within each patch the scalar field varies continuously, and the Lagrangian is given by \[{\cal L}=\frac{1}{2}g\partial_{\mu}\phi\partial^{\mu}\phi. \tag{7}\] However, while going from one patch to another neighboring patch, the field can undergo a reflection in the overlapping region. The corresponding transition function is \(-1\) or \(+1\) depending on whether such a \(\mathbb{Z}_{2}\) transformation takes place or not. A gauge transformation in this picture corresponds to flipping the sign of the field throughout a patch. The path integral involves summing over all field configurations (satisfying the above-mentioned constraints) while identifying configurations that are related by such gauge transformations.7 Footnote 7: In such a path integral Dirichlet boundary conditions are imposed at infinity, namely the configurations related by sign flips of the field at infinity are not identified. As a consequence of the \(\mathbb{Z}_{2}\)-gauging, only the \(\mathbb{Z}_{2}\)-neutral sector of the genuine operators discussed above remains genuine. In addition, gauging the \(\mathbb{Z}_{2}\) reflection symmetry retrieves originally non-genuine operators into the spectrum. This occurs with the \((d-2)\)-dimensional reflection vortices. Moreover, these are charged under the dual (quantum) \(\hat{\mathbb{Z}}_{2}^{(d-2)}\) (\(d-2\))-form symmetry generated by topological defect lines corresponding to the holonomies of the \(\mathbb{Z}_{2}\) gauge field [1, 2, 37]. Let us discuss the spectrum of operators in the \(\mathbb{Z}_{2}\)-gauged theory in more detail. The local vertex operators that survive under the gauging are given by the symmetric combinations \[{\cal V}_{n}(x)\equiv\ \frac{1}{2}(e^{in\phi(x)}+e^{-in\phi(x)})\quad,\quad n \in\mathbb{Z}\,. \tag{8}\] The antisymmetric combinations, by themselves, are not gauge-invariant. 
However, one can construct gauge-invariant operators out of them by attaching a semi-infinite topological \(\mathbb{Z}_{2}\) Wilson line as shown below \[{\cal W}_{n}(x)\equiv\ \frac{1}{2i}\left(e^{in\phi(x)}-e^{-in\phi(x)}\right) \eta_{x}^{\infty}\, \tag{9}\] where \(\eta_{x}^{\infty}\) denotes the semi-infinite line ending at the point \(x\). \(\eta_{x}^{\infty}\) is given by the product of the transition functions for all the overlapping regions through which the line passes as it goes from one patch to another. The topological nature of the line follows from the flatness of the \(\mathbb{Z}_{2}\)-gauge connection. In the simply connected spacetime \(\mathbb{R}^{d-1,1}\) that we are considering, all such semi-infinite Wilson lines ending at a particular point are equivalent as there is no loop where the gauge connection has a nontrivial holonomy. The operators in (9) belong to the spectrum of non-genuine local operators. In fact, since these operators are attached to a line that generates the dual quantum symmetry \(\hat{\mathbb{Z}}_{2}^{(d-2)}\), one may regard them as disorder operators of the latter symmetry.8 Footnote 8: As non-genuine operators, they are not unambiguously defined in presence of reflection vortices, i.e. the objects whose charge is measured by closed loops of the \(\mathbb{Z}_{2}\) gauge field. As a consequence of the topological line in (9), the action of such an operator can be interpreted as a map from objects in the untwisted sector to those in the twisted sector and vice versa. Importantly, note that the subsector of non-genuine operators is generically not closed under fusion. More precisely, due to the fusion algebra satisfied by the \(\mathbb{Z}_{2}\) topological lines (_i.e._\((\eta_{x}^{\infty})^{2}=1\)), products of an even number of disorder operators lead to genuine local operators. We will make use of this property in the following analysis. The above-mentioned semi-infinite Wilson line can also be attached to the field operator \(\phi(x)\) yielding the twisted field \(\phi^{\prime}(x)\) defined below: \[\phi^{\prime}(x)\equiv\ \phi(x)\eta_{x}^{\infty}. \tag{10}\] The periodicity of the field \(\phi(x)\) leads to the identification \(\phi^{\prime}(x)\sim\phi^{\prime}(x)+2\pi\). Let us note that this twisted field satisfies the equation of motion \(d\star d\phi^{\prime}=0\) which follows from the Lagrangian (7) and the fact that \(\eta_{x}^{\infty}\) does not vary within a patch. Moreover, by taking cosines and sines of this field, one can get the operators in (8) and (9) as shown below: \[\mathcal{V}_{n}(x)=\cos\Big{(}n\phi^{\prime}(x)\Big{)}\,\ \mathcal{W}_{n}(x)= \sin\Big{(}n\phi^{\prime}(x)\Big{)}. \tag{11}\] The even powers that appear in the expansion of the cosines lead to the disappearance of the Wilson line since \((\eta_{x}^{\infty})^{2}=1\). Similarly, a single factor of \(\eta_{x}^{\infty}\) survives in each term of the expansion of the sines. So, one ends up with the expressions given in (8) and (9). In addition to the twisted field \(\phi^{\prime}(x)\) discussed above, let us introduce its canonical conjugate \[\pi^{\prime}(x)\equiv g\partial_{t}\phi^{\prime}(x) \tag{12}\] which is also a non-genuine local operator due to the attached Wilson line. 
Now, we can canonically quantize the fields \(\phi^{\prime}(x)\) and \(\pi^{\prime}(x)\) and demand the following equal-time commutation relations: \[[\phi^{\prime}(t,\mathbf{x}),\phi^{\prime}(t,\mathbf{y})]=0,\ [\pi^{\prime}(t, \mathbf{x}),\pi^{\prime}(t,\mathbf{y})]=0,\ [\phi^{\prime}(t,\mathbf{x}),\pi^{\prime}(t, \mathbf{y})]=i\delta^{(d-1)}(\mathbf{x}-\mathbf{y}). \tag{13}\] Note that the above commutation relations have support only at coincident points. Hence, the effect of the Wilson line trivializes and these commutation relations are the same as those between the field \(\phi\) and its conjugate momentum \(\pi\equiv g\partial_{t}\phi\) in any gauge.9 Footnote 9: To see this, one simply notes that \(\pi^{\prime}(x)=\pi(x)\eta_{x}^{\infty}\), and then uses again the fusion rule \((\eta_{x}^{\infty})^{2}=1\). Next, analogous to (4), we can do a Fourier mode expansion of \(\phi^{\prime}\) as follows: \[\phi^{\prime}(t,\mathbf{x})=\overline{\phi}^{\prime}+\lim_{V\to\infty}\frac{ \overline{\pi}^{\prime}}{gV}t+\frac{1}{\sqrt{2g}}\int\frac{d^{d-1}k}{(2\pi)^{d -1}}\frac{1}{\sqrt{|\mathbf{k}|}}\Big{[}a^{\prime}_{\mathbf{k}}e^{-i|\mathbf{ k}|t+i\mathbf{k}.\mathbf{x}}+a^{\prime\dagger}_{\mathbf{k}}e^{i|\mathbf{k}|t-i \mathbf{k}.\mathbf{x}}\Big{]}, \tag{14}\] with \(\overline{\phi}^{\prime}\sim\overline{\phi}^{\prime}+2\pi\). All the Fourier modes are defined by integrals along a spatial slice.They are essentially linear combinations of the twisted operators \(\phi^{\prime}(t,\mathbf{x})\) and \(\pi^{\prime}(t,\mathbf{x})\) over that slice. Therefore, these Fourier modes, which act on the total Hilbert space of the theory, now map states in the untwisted sector to the twisted sector and vice versa. Note that the total, or extended, Hilbert space includes both untwisted and twisted states, that we will describe in detail shortly. Based on the commutation relations given in (13), we get the following commutators between the Fourier modes introduced above: \[[\overline{\phi}^{\prime},\overline{\pi}^{\prime}]=i\,\ \ \ \ [a^{\prime}_{ \mathbf{k}},a^{\prime\dagger}_{\mathbf{k}^{\prime}}]=(2\pi)^{d-1}\delta^{(d-1)} (\mathbf{k}-\mathbf{k}^{\prime}). \tag{15}\] We will shortly use the above Fourier modes and the commutation relations between them to construct the Hilbert space of the theory. For the moment, let us continue our discussion on the operators in the theory. From the operator \(\phi^{\prime}(x)\) defined above, one can also form an analogue of the current in (2) as follows \[j^{\prime}(x)\equiv-gd\phi^{\prime}(x). \tag{16}\] It is tempting to use this non-genuine current operator to construct an analogue of the operator implementing the shift symmetry in the ungauged theory as shown below: \[\mathcal{U}^{\prime}_{\alpha}(\Sigma)\equiv e^{i\alpha\int_{\Sigma}\star j^{ \prime}}. \tag{17}\] More precisely, the above construction is needed only for \(\alpha\neq 0,\pi\) as the operators \(\mathcal{U}_{0}(\Sigma)\) and \(\mathcal{U}_{\pi}(\Sigma)\) are already gauge invariant by themselves.10 Footnote 10: \(\mathcal{U}_{0}\) is just the identity operator, while \(\mathcal{U}_{\pi}\) implements the shift \(\phi\to\phi-\pi\sim\phi+\pi\) which commutes with the \(\mathbb{Z}_{2}\) gauge transformation. An operator such as (17) is indeed invariant under deformations of the surface \(\Sigma\) as can be seen from the equation of motion \(d\star d\phi^{\prime}=0\). 
However, the operator \(\mathcal{U}^{\prime}_{\alpha}(\Sigma)\) is a linear combination of genuine and non-genuine surface operators which can be seen as follows. For simplicity, let us place ourselves in a set up suitable for canonical quantization, i.e. take \(\Sigma\) to be a fixed time slice. Then we have \(\mathcal{U}^{\prime}_{\alpha}(\Sigma)=e^{-i\alpha\bar{\pi}^{\prime}}\), where \(\bar{\pi}^{\prime}\) is a twisted operator as we saw above. Hence an operator like \(\mathcal{U}^{\prime}_{\alpha}(\Sigma)\) takes a generic state into a superposition of untwisted and twisted states. Indeed, recalling that the sum of the untwisted and twisted Hilbert spaces is nothing else than the Hilbert space of the ungauged theory, we recognize that \(\mathcal{U}^{\prime}_{\alpha}(\Sigma)\) acts exactly as \(\mathcal{U}_{\alpha}(\Sigma)\) there. However, in the gauged theory, the operators that truly implement symmetries are only those that act within the untwisted sector (and the twisted sector, separately). It is only the latter that we will call genuine. Let us comment here on the following subtlety that needs to be taken into account if one considers such defects on a surface \(\Sigma\) with at least a non-trivial one-cycle. In this case, the definition of the operators (17) has an ambiguity due to the fact that while integrating, the semi-infinite line that defines \(j^{\prime}\) can wind (or not) on the non-trivial 1-cycles. This is taken care of by positing that the final expression contains as a factor the projector11 Footnote 11: As already mentioned, exceptions to this definition are the operators related to \(\alpha=0\) and \(\pi\). Those are associated to the subset of invertible symmetries, and as such they should not involve a projector. \[\mathcal{P}(\Sigma)=\frac{1}{|H_{1}(\Sigma,\mathbb{Z}_{2})|}\sum_{\gamma\in H _{1}(\Sigma,\mathbb{Z}_{2})}\eta(\gamma). \tag{18}\] ote that since \({\cal P}(\Sigma)\eta(\gamma)={\cal P}(\Sigma)\) for \(\gamma\in H_{1}(\Sigma,\mathbb{Z}_{2})\), a consequence of this fact is that the operators \({\cal U}^{\prime}_{\alpha}(\Sigma)\) absorb the \(\eta(\gamma)\) closed lines, i.e. the generators of the quantum \(\hat{\mathbb{Z}}_{2}^{(d-2)}\) symmetry.12 Footnote 12: Another consequence is that if one takes such a defect on a surface \(\Sigma\) with a cylindrical shape, wrapping a reflection vortex, it will annihilate it, a first hint of non-invertibility. One can then extract a genuine topological operator for \(\alpha\neq 0,\pi\) by taking the following symmetric combination of operators \({\cal U}^{\prime}_{\alpha}(\Sigma)\) and \({\cal U}^{\prime}_{-\alpha}(\Sigma)\): \[{\cal T}_{\alpha}(\Sigma)\equiv{\cal U}^{\prime}_{\alpha}(\Sigma)+{\cal U}^{ \prime}_{-\alpha}(\Sigma)\,. \tag{19}\] The above normalization yields fusion rules of these operators with integer coefficients, as we will see shortly. Similarly, a non-genuine topological operator can be obtained by taking the antisymmetric combination of \({\cal U}^{\prime}_{\alpha}(\Sigma)\) and \({\cal U}^{\prime}_{-\alpha}(\Sigma)\): \[{\cal Z}_{\alpha}(\Sigma)\equiv{\cal U}^{\prime}_{\alpha}(\Sigma)-{\cal U}^{ \prime}_{-\alpha}(\Sigma)\,. \tag{20}\] Such defects turn genuine operators into non-genuine ones, and vice-versa, as we will discuss below. 
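On a fixed-time slice this split becomes transparent: writing \(\mathcal{U}^{\prime}_{\pm\alpha}(\Sigma)=e^{\mp i\alpha\overline{\pi}^{\prime}}\), one has \[\mathcal{T}_{\alpha}(\Sigma)=2\cos(\alpha\overline{\pi}^{\prime})\,\quad\mathcal{Z}_{\alpha}(\Sigma)=-2i\sin(\alpha\overline{\pi}^{\prime})\,,\] so that \(\mathcal{T}_{\alpha}\) contains only even powers of the twisted zero mode \(\overline{\pi}^{\prime}\), whose \(\eta\) lines fuse away pairwise, while \(\mathcal{Z}_{\alpha}\) contains only odd powers and thus remains attached to a single \(\eta\) line.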
A technical remark is in order: the expression (19) for the genuine surface operator as a sum of non-genuine operators \({\cal U}^{\prime}_{\alpha}\) should be interpreted with some care as, at the end, the operators \({\cal T}_{\alpha}\) are indecomposable objects. However, introducing \({\cal U}^{\prime}_{\alpha}\) as a formal intermediate step in the construction leads to a more intuitive picture of the underlying structure. While the operators \({\cal U}_{0}(\Sigma)\) and \({\cal U}_{\pi}(\Sigma)\) form an invertible \(\mathbb{Z}_{2}\) group, the \({\cal T}_{\alpha}(\Sigma)\) implement a non-invertible symmetry which is analogous to the non-invertible symmetry in the \(O(2)\) gauge theory [24]. The non-invertibility of this symmetry is manifest at the level of the fusion rule satisfied by these operators: \[{\cal T}_{\alpha}(\Sigma)\otimes{\cal T}_{\beta}(\Sigma)={\cal T}_{\alpha+ \beta}(\Sigma)+{\cal T}_{\alpha-\beta}(\Sigma)\, \tag{21}\] where we are taking all of \(\alpha\), \(\beta\), \(\alpha+\beta\) and \(\alpha-\beta\) to be different than \(0\) or \(\pi\). If \(\alpha\pm\beta=0\) or \(\pi\), then the right hand side contains an invertible operator, however its coefficient is a projector (or more specifically, and up to a normalization, a condensation defect). For instance: \[{\cal T}_{\alpha}(\Sigma)\otimes{\cal T}_{\alpha}(\Sigma)={\cal T}_{2\alpha}( \Sigma)+2{\cal P}(\Sigma){\cal U}_{0}(\Sigma). \tag{22}\] The presence of \({\cal P}(\Sigma)\) in front of \({\cal U}_{0}(\Sigma)\) is necessary for consistency with the left hand side, and because \({\cal U}_{0}(\Sigma)\equiv\mathbb{I}\) does not carry it, since it is an invertible defect, see a similar discussion in [18, 24]. Within correlation functions, the operators (19) and (8) satisfy the following Ward identity \[{\cal T}_{\alpha}(\Sigma){\cal V}_{n}(x)=2\cos\Big{(}n\alpha\ \text{Lk}(\Sigma,x)\Big{)}{\cal V}_{n}(x)\, \tag{23}\] where \(\text{Lk}(\Sigma,x)\) denotes the linking number between \(\Sigma\) and \(x\). For unit linking, one sees that the symmetry operators with \(\alpha=\frac{(2k+1)\pi}{2n}\), \(k\in\mathbb{Z}\), annihilate \({\cal V}_{n}(x)\), so that they have a non-trivial kernel. This is another manifestation of their non-invertibility. We will later show that this non-invertible symmetry is spontaneously broken in all the vacua of the theory. We can similarly display the Ward identities satisfied by the non-genuine topological defects: \[\begin{split}&\mathcal{Z}_{\alpha}(\Sigma)\mathcal{V}_{n}(x)=2 \sin\Big{(}n\alpha\ \text{Lk}(\Sigma,x)\Big{)}\mathcal{W}_{n}(x)\,\\ &\mathcal{Z}_{\alpha}(\Sigma)\mathcal{W}_{n}(x)=-2\sin\Big{(}n \alpha\ \text{Lk}(\Sigma,x)\Big{)}\mathcal{V}_{n}(x)\.\end{split} \tag{24}\] Finally, a completely analogous structure arises for the continuous \((d-2)\)-form symmetry acting on the vortices since the \(\mathbb{Z}_{2}\) gauge symmetry also takes \(\hat{j}\to-\hat{j}\). Indeed, one can define (non-)genuine extended vortices by taking (anti-)symmetric combinations as in (8) and (9), and similarly for the the continuous \((d-2)\)-form symmetry generators. Having discussed both genuine and non-genuine operators, let us now turn our attention to the states in the theory. We will first discuss the different vacua of the theory. For this, consider the eigenstates of the operator \(e^{i\overline{\phi}^{\prime}}\) which are annihilated by the operators \(\{a^{\prime}_{\mathbf{k}}\}\). 
These are nothing else than the vacua of the ungauged theory, parametrized by the angular variable \(\theta\) with \(e^{i\overline{\phi}^{\prime}}|\theta\rangle=e^{i\theta}|\theta\rangle\). Note that just as the operator in (17), \(e^{i\overline{\phi}^{\prime}}\) is a linear combination of genuine and non-genuine operators. Accordingly, its eigenstates are (generically) linear combinations of states in the untwisted and the twisted sector. To obtain the vacua in these two respective sectors, one needs to take appropriate linear combinations of the above eigenstates. The vacua in the untwisted sector take the following form: \[|v^{(u)}\rangle_{\theta}\equiv\begin{cases}\frac{1}{\sqrt{2}}(|\theta\rangle+| -\theta\rangle)\text{ for }\theta\in(0,\pi),\\ |\theta\rangle\hskip 56.905512pt\text{ for }\theta=0,\pi.\end{cases} \tag{25}\] The moduli space of vacua in this sector is an orbifold \(S_{1}/\mathbb{Z}_{2}\) which is parametrized by \(\theta\in[0,\pi]\). From these states, one can also obtain the vacua in the twisted sector as follows \[|v^{(t)}\rangle_{\theta}\equiv\frac{1}{\sin(\theta)}\sin(\overline{\phi}^{ \prime})|v^{(u)}\rangle_{\theta}=\frac{1}{\sqrt{2}}(|\theta\rangle-|-\theta \rangle)\text{ for }\theta\in(0,\pi). \tag{26}\] They can be thought as parametrizing an open segment. This can also be inverted to retrieve a subset of the vacua in the untwisted sector, i.e., \[|v^{(u)}\rangle_{\theta}=\frac{1}{\sin(\theta)}\sin(\overline{\phi}^{\prime}) |v^{(t)}\rangle_{\theta} \tag{27}\] In the expressions above, we use \(\sin(\overline{\phi}^{\prime})/\sin(\theta)\) instead of the simpler expression \(\overline{\phi}^{\prime}/\theta\) because it respects the \(2\pi\)-periodicity. Note that \(\sin(\overline{\phi}^{\prime})\) is a twisted operator. Furthermore, the action of this operator on the vacua \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\) vanishes. So, there is no counterpart of these vacua in the twisted sector. This illustrates a distinction between these singular points and the regular points in the orbifold. We will later show that this distinction gets carried over to the Hilbert spaces built upon these two classes of vacua. We may now look at the fate of the non-invertible symmetry given in (2.19) at these different vacua. The order parameters for this symmetry are the expectation values of the operators defined in (2.8) after a normal-ordering, namely \[{}_{\theta}\langle v^{(u)}|:\mathcal{V}_{n}(x):|v^{(u)}\rangle_{ \theta}=\cos(n\theta)\ \text{for}\ \theta\in[0,\pi], \tag{2.28}\] where \(:():\) indicates the normal-ordering in which the creation operators \(\{a^{\prime\dagger}_{\mathbf{k}}\}\) are pushed to the left of the annihilation operators \(\{a^{\prime}_{\mathbf{k}}\}\).13 It is clear that there is an infinite number of non-vanishing order parameters at any point in the orbifold. Hence, the non-invertible symmetry is spontaneously broken in all these vacua,14 including those at \(\theta=0\) and \(\pi\). Footnote 13: As usual, the normal-ordering removes the divergences in the expectation values of the composite operators \(\mathcal{V}_{n}(x)\). Footnote 14: One can check that the same statement is true for the vacua in the twisted sector. Let us next discuss the Hilbert spaces that are constructed upon the different vacua. First, let us focus on the states that can be obtained from the vacua \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\). 
One can act with the creation operators \(\{a^{\prime\dagger}_{\mathbf{k}}\}\) on \(|v^{(u)}\rangle_{0}\) or \(|v^{(u)}\rangle_{\pi}\) to get these states. Since these creation operators are twisted operators, an odd number of them acting on the vacuum leads to a state in the twisted sector. On the other hand, the states in the untwisted sector are obtained by the action of even number of these creation operators on the vacuum. We emphasize that, in particular, there is no single particle excitation in the untwisted sector living upon the vacua \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\). Such excitations rather lie in the twisted sector. Let us now contrast this with the Hilbert spaces built upon the vacua \(|v^{(u)}\rangle_{\theta}\) and \(|v^{(t)}\rangle_{\theta}\) with \(\theta\in(0,\pi)\). Consider the action of the creation operator \(a^{\prime\dagger}_{\mathbf{k}}\) on the vacuum \(|v^{(u)}\rangle_{\theta}\): \[a^{\prime\dagger}_{\mathbf{k}}|v^{(u)}\rangle_{\theta}=\frac{1} {\sin(\theta)}a^{\prime\dagger}_{\mathbf{k}}\sin(\overline{\phi}^{\prime})|v^ {(t)}\rangle_{\theta}. \tag{2.29}\] The resulting state lies in the twisted sector. Similarly, the action of this operator on \(|v^{(t)}\rangle_{\theta}\) gives a state in the untwisted sector: \[a^{\prime\dagger}_{\mathbf{k}}|v^{(t)}\rangle_{\theta}=\frac{1} {\sin(\theta)}a^{\prime\dagger}_{\mathbf{k}}\sin(\overline{\phi}^{\prime})|v^ {(u)}\rangle_{\theta}. \tag{2.30}\] Based on the above observations, we can define the modified creation/annihilation operators \[\widetilde{a}^{(\theta)}_{\mathbf{k}}\equiv\frac{1}{\sin(\theta) }a^{\prime}_{\mathbf{k}}\sin(\overline{\phi}^{\prime}),\ \widetilde{a}^{(\theta)\dagger}_{\mathbf{k}}\equiv\frac{1}{\sin(\theta)}a^{ \prime\dagger}_{\mathbf{k}}\sin(\overline{\phi}^{\prime})\ \ \text{for}\ \theta\in(0,\pi). \tag{2.31}\] The action of these operators retains states in the untwisted/twisted sector in the same sector as they are obtained by taking the product of two operators that map between these sectors. They satisfy the following commutation relations \[[\widetilde{a}^{(\theta)}_{\mathbf{k}},\widetilde{a}^{(\theta)\dagger}_{ \mathbf{k}^{\prime}}]=(2\pi)^{d-1}\delta^{(d-1)}(\mathbf{k}-\mathbf{k}^{\prime })\frac{\sin^{2}(\overline{\phi}^{\prime})}{\sin^{2}(\theta)}. \tag{2.32}\] Note that these commutation relations reduce to the standard ones in the sectors built upon the vacua \(|v^{(u)}\rangle_{\theta}\) and \(|v^{(t)}\rangle_{\theta}\). Therefore, one can build a tower of states in the untwisted sector by acting the modified creation operators \(\{\widetilde{a}^{(\theta)\dagger}_{\mathbf{k}}\}\) on the vacuum \(|v^{(u)}\rangle_{\theta}\). A similar tower of states can be built upon the vacuum \(|v^{(t)}\rangle_{\theta}\) in the twisted sector by acting with the same operators. In particular, there are single excitation states in both the untwisted and the twisted sectors living upon the vacua \(|v^{(u)}\rangle_{\theta}\) and \(|v^{(t)}\rangle_{\theta}\) respectively. ### Translations along the moduli space of vacua In the above analysis of the Hilbert space of the \(\mathbb{Z}_{2}\)-gauged theory we found that there are two distinct classes of vacua in the untwisted sector. On one hand, there are the ground states \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\), namely the singular points of the \(S^{1}/\mathbb{Z}_{2}\) orbifold. The Hilbert spaces in the untwisted sector that are constructed on top of these vacua only contain states with an even number of massless excitations. 
In particular, there is no single excitation state in this sector. On the other hand, the lowest energy states \(|v^{(u)}\rangle_{\theta}\) with \(\theta\in(0,\pi)\) can be acted upon by an arbitrary number of the modified gauge invariant creation operators \(\{\widetilde{a}^{\dagger}_{\mathbf{k}}\}\). In other words, the Hilbert spaces living on these vacua have states with arbitrary number of massless excitations.15 Footnote 15: Let us comment here that the single and multi-particle states may be experimentally indistinguishable in a gapless theory. It would be interesting to find a physical setting where the difference in the respective Hilbert spaces is made manifest. Notice that the \(\mathbb{Z}_{2}\) gauge symmetry is Higgsed in this second class of vacua as can be verified from the behavior of the 2-point function of the disorder operator \(\mathcal{W}_{1}\) introduced in (9) at a large separation between the insertions: \[{}_{\theta}\langle v^{(u)}|:\mathcal{W}_{1}(t,\mathbf{x}):\,:\mathcal{W}_{1}( t,\mathbf{y}):|v^{(u)}\rangle_{\theta}\xrightarrow{|\mathbf{x}-\mathbf{y}| \rightarrow\infty}\sin^{2}(\theta). \tag{33}\] We will now argue that the above distinction is emphasized by the presence of a charge operator \(\widetilde{Q}\) in the \(\mathbb{Z}_{2}\)-Higgsed sector which generates translations along the moduli space of vacua. The allowed range of translations is constrained to keep the vacuum in the \(\mathbb{Z}_{2}\)-Higgsed sector, i.e. these translations do not connect the regular points in the moduli space to the singular points \((|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi})\). The above feature distinguishes these translations from the familiar case of spontaneous breaking of an ordinary symmetry where the translations along the moduli space cover all the vacua. Despite this distinction, the operators implementing the afore-mentioned translations do form a group which is isomorphic to the group of real numbers under addition.16 We will show that the presence of the charge \(\widetilde{Q}\) generating this group of translations about the regular points in the moduli space leads to isomorphisms between the Hilbert spaces built upon such points. We will also show that, just as in case of ordinary symmetry breaking [38], the single excitation states built upon these vacua are obtained from plane wave superpositions of local excitations of the charge density. Furthermore, we will demonstrate that this charge density (and hence, the charge operator \(\widetilde{Q}\)) has a vanishing action on the singular points in the moduli space, viz. \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\). This is what leads to the absence of the single excitation states in the sectors built upon these vacua. Let us now proceed to construct the charge \(\widetilde{Q}\) that we mentioned above. Consider the non-genuine current operator that was given in (16) and multiply this operator by \(\sin(\overline{\phi}^{\prime})\) to define a modified current \[\widetilde{j}(x)\equiv j^{\prime}(x)\sin(\overline{\phi}^{\prime})=-gd\phi^{ \prime}(x)\sin(\overline{\phi}^{\prime}). \tag{34}\] Just as \(j^{\prime}(x)\), this modified current is also conserved, i.e. \(d\star\widetilde{j}=0\). Integrating the Hodge dual of this current over a \((d-1)\)-dimensional space-like surface \(\Sigma\) that extends to infinity, we can construct the conserved charge operator \[\widetilde{Q}\equiv\int_{\Sigma}\widetilde{\star j}. 
\tag{35}\] Note that the factor \(\sin(\overline{\phi}^{\prime})\) in \(\widetilde{j}\) implies a surface integral in order to define the (twisted) zero mode \(\overline{\phi}^{\prime}\). In the following, we will always take this surface to be aligned with \(\Sigma\) in (35). Then, considering the Taylor expansion of \(\sin(\overline{\phi}^{\prime})\), we see that we have a sum of terms each of which involves an even number of integrations, starting with a double integral. One can consequently take the integrands to consist of local insertions pairwise connected by finite Wilson lines. This makes the operator \(\widetilde{Q}\) a genuine operator at the price of fixing the surface \(\Sigma\) to be essentially a spacelike slice.17 In this sense it is not a topological operator, as for instance (19), but merely a conserved operator acting on the Hilbert space that we will use to make the structure of the latter more explicit. In other words, the surface operator is still invariant under time translations of the surface. However, there is no clear notion of a defect constructed out of it, hence not complying with the modern view on symmetries. Footnote 17: In the present case, we can take \(\Sigma\) to be a surface with trivial \(H_{1}\), so that we do not need to consider a projector over the closed \(\eta\) lines. Due to the presence of the factor \(\sin(\overline{\phi}^{\prime})\), the operator \(\widetilde{Q}\) has a vanishing action on the vacua \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\). On the other hand, it has a nontrivial action on \(|v^{(u)}\rangle_{\theta}\) for \(\theta\in(0,\pi)\) which generates translations along the moduli space. To show this, let us take \(\Sigma\) to be a constant time slice and use the mode expansion given in (14) to obtain \[\widetilde{Q}=-g\int d^{d-1}x\ \partial_{t}\phi^{\prime}(x)\sin(\overline{\phi} ^{\prime})=-\overline{\pi}^{\prime}\sin(\overline{\phi}^{\prime}). \tag{36}\] Using this charge operator as a generator, we can define a set of operators \[\widetilde{U}(\xi)=e^{i\xi\widetilde{Q}},\ \xi\in(-\infty,\infty). \tag{37}\] Note that the charge \(\widetilde{Q}\) does not obey any quantization condition, hence the operators are indeed parameterized in \(\mathbb{R}\). Consider the action of such an operator with a small value \(\epsilon\) of the parameter \(\xi\) on the vacuum \(|v^{(u)}\rangle_{\theta}\) (\(\theta\in(0,\pi)\)): \[\begin{split}\widetilde{U}(\epsilon)|v^{(u)}\rangle_{\theta}& =\Big{(}\mathbb{I}+i\epsilon\widetilde{Q}\Big{)}|v^{(u)}\rangle_{ \theta}+O(\epsilon^{2})=\frac{1}{\sqrt{2}}\Big{(}|\theta+\epsilon\sin\theta \rangle+|-\theta-\epsilon\sin\theta\rangle\Big{)}+O(\epsilon^{2})\\ &=|v^{(u)}\rangle_{\theta+\epsilon\sin\theta}+O(\epsilon^{2}). \end{split} \tag{38}\] We see that the action on a vacuum specified by \(\theta\) depends on the value of \(\theta\) itself. For a finite transformation, we would like to determine the value of \(\theta^{\prime}\) that one obtains by acting with a transformation of (finite) parameter \(\xi\) on a vacuum given by \(\theta\), i.e. \[\widetilde{U}(\xi)|v^{(u)}\rangle_{\theta}=|v^{(u)}\rangle_{\theta^{\prime}( \xi;\theta)}\, \tag{39}\] where we have made explicit that \(\theta^{\prime}\) depends also on the starting point \(\theta\). From above we learn that for small variations of the parameter \(\xi\) we have \[\theta^{\prime}(\xi+\delta\xi;\theta)=\theta^{\prime}(\xi;\theta)+\delta\xi \sin\theta^{\prime}(\xi;\theta)+O(\delta\xi^{2}). 
\tag{40}\] In other words, \[\frac{\partial}{\partial\xi}\theta^{\prime}(\xi;\theta)=\sin\theta^{\prime}( \xi;\theta) \tag{41}\] with \(\theta^{\prime}(0,\theta)=\theta\). Integrating the above differential equation we get \[\theta^{\prime}(\xi;\theta)=2\arctan\left(e^{\xi}\tan\frac{\theta}{2}\right)\,. \tag{42}\] Now note that for \(\theta\in(0,\pi)\) and for \(\xi\in\mathbb{R}\), \(\theta^{\prime}(\xi;\theta)\in(0,\pi)\). More precisely, for a fixed \(\theta\in(0,\pi)\), \(\theta^{\prime}(\xi;\theta)\) is a monotonically increasing function from \(\mathbb{R}\) to \((0,\pi)\). Indeed, one can easily see that for \(\xi\to-\infty\), \(\theta^{\prime}\to 0\), while for \(\xi\to+\infty\), \(\theta^{\prime}\to\pi\). On the other hand, if \(\theta=0\), then \(\theta^{\prime}=0\) for any \(\xi\). Similarly if \(\theta=\pi\), then \(\theta^{\prime}=\pi\) for any \(\xi\). Therefore the operators \(\widetilde{U}(\xi)\) keep the vacua \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\) fixed, while they implement translations between the other vacua. The fusion rule of the operators \(\widetilde{U}(\xi)\) is straightforward: \[\widetilde{U}(\xi)\widetilde{U}(\eta)=e^{i\xi\widetilde{Q}}e^{i\eta \widetilde{Q}}=e^{i(\xi+\eta)\widetilde{Q}}=\widetilde{U}(\xi+\eta). \tag{43}\] One can indeed be easily convinced that \[\theta^{\prime}(\xi;\theta^{\prime}(\eta;\theta))=\theta^{\prime}(\xi+\eta; \theta). \tag{44}\] In particular, the action of \(\widetilde{U}(\xi)\) is perfectly invertible. It reproduces the additive group of the real numbers. The open segment \(\theta\in(0,\pi)\) furnishes a faithful representation, while the endpoints \(\theta=0,\pi\) provide trivial representations. Note that the states \(|v^{(t)}\rangle_{\theta}\) are acted upon in exactly the same way as \(|v^{(u)}\rangle_{\theta}\) for \(\theta\in(0,\pi)\). Let us now show that these operators implementing translations between the regular points in the moduli space of vacua also define isomorphisms between the Hilbert spaces built upon those vacua. To see this, consider the states of the following form which span a basis of the Hilbert space living on the vacuum \(|v^{(u)}\rangle_{\theta}\) (\(\theta\in(0,\pi)\)): \[|\Psi^{(\theta)}_{{\bf k}_{1}\cdots{\bf k}_{n}}\rangle=\widetilde{a}^{(\theta )\dagger}_{{\bf k}_{1}}\cdots\widetilde{a}^{(\theta)\dagger}_{{\bf k}_{n}}|v^ {(u)}\rangle_{\theta}. \tag{45}\] If \(n\) is odd, we further have \[|\Psi^{(\theta)}_{{\bf k}_{1}\cdots{\bf k}_{n}}\rangle=a^{\prime\dagger}_{{ \bf k}_{1}}\cdots a^{\prime\dagger}_{{\bf k}_{n}}|v^{(t)}\rangle_{\theta}\, \tag{46}\] while for \(n\) even, we have \[|\Psi^{(\theta)}_{{\bf k}_{1}\cdots{\bf k}_{n}}\rangle=a^{\prime\dagger}_{{\bf k}_ {1}}\cdots a^{\prime\dagger}_{{\bf k}_{n}}|v^{(u)}\rangle_{\theta}. \tag{47}\] Since \(\widetilde{Q}\), and hence \(\widetilde{U}(\xi)\), commute with \(a^{\prime\dagger}_{\bf k}\), it immediately follows that18 Footnote 18: Here note that we have \(\widetilde{U}(\xi)\widetilde{a}^{(\theta)\dagger}_{\bf k}\widetilde{U}(-\xi)= \widetilde{a}^{(\theta^{\prime})\dagger}_{\bf k}\) only when acting on \(|v^{(u)}\rangle_{\theta^{\prime}}\). On other vacua, the relation does not come out with unit normalization. \[\widetilde{U}(\xi)|\Psi^{(\theta)}_{{\bf k}_{1}\cdots{\bf k}_{n}}\rangle= \widetilde{a}^{(\theta^{\prime})\dagger}_{{\bf k}_{1}}\cdots\widetilde{a}^{( \theta^{\prime})\dagger}_{{\bf k}_{n}}|v^{(u)}\rangle_{\theta^{\prime}}\equiv| \Psi^{(\theta^{\prime})}_{{\bf k}_{1}\cdots{\bf k}_{n}}\rangle. 
\tag{48}\] This evidently defines an isomorphism between the Hilbert spaces built upon \(|v^{(u)}\rangle_{\theta}\) and \(|v^{(u)}\rangle_{\theta^{\prime}}\), for any two \(\theta,\theta^{\prime}\in(0,\pi)\). Now, let us turn our attention to the charge density that appeared in the integral given in (36). This charge density is \[\widetilde{\rho}(t,{\bf x})\equiv-g\partial_{t}\phi^{\prime}(t,{\bf x})\sin( \overline{\phi}^{\prime}). \tag{49}\] By using the mode expansion of \(\phi^{\prime}\) given in (14), we get the following expression for the Fourier transform of the above charge density at the time \(t=0\): \[\int d^{d-1}x\ e^{i{\bf k}\cdot{\bf x}}\widetilde{\rho}(0,{\bf x})=\widetilde{ Q}\delta_{{\bf k},{\bf 0}}+i\sqrt{\frac{g|{\bf k}|}{2}}\Big{[}a^{\prime}_{-{\bf k}}-a^{ \prime\dagger}_{{\bf k}}\Big{]}\sin(\overline{\phi}^{\prime}). \tag{50}\] From this we can easily see that the single excitation states living on the vacuum \(|v^{(u)}\rangle_{\theta}\) (\(\theta\in(0,\pi)\)) are given by \[\widetilde{a}^{\dagger}_{\bf k}|v^{(u)}\rangle_{\theta}=i\sqrt{\frac{2}{g|{\bf k }|}}\frac{1}{\sin(\theta)}\int d^{d-1}x\ e^{i{\bf k}\cdot{\bf x}}\widetilde{ \rho}(0,{\bf x})|v^{(u)}\rangle_{\theta} \tag{51}\] for \({\bf k}\neq{\bf 0}\). Therefore, as mentioned earlier, these single excitation states are obtained from plane wave superpositions of local excitations of the charge density. Note that, just as the charge \(\widetilde{Q}\), the charge density \(\widetilde{\rho}(t,{\bf x})\) has a vanishing action on the singular points in the moduli space of vacua (\(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\)). This results in the absence of single excitation states like the ones given in (51) on these vacua. Let us mention here that the above discussion for the \(\mathbb{Z}_{2}\)-Higgsed vacua in the untwisted sector goes through for their counterparts in the twisted sector. This means that the operators defined in (37) also implement translations in the space of the twisted vacua and define isomorphisms between the Hilbert spaces living on these vacua. The single excitation states in these Hilbert spaces are given by expressions analogous to (51). Finally, let us discuss the action of the operators \(\mathcal{T}_{\alpha}\) implementing the non-invertible symmetry on the different vacua. For the following discussion, we take the surface \(\Sigma\) in their definition given in (19) to a be a constant time slice. Unlike the operators defined in (37), these operators are not unitary. For \(\alpha\in(0,\pi)\), \(\mathcal{T}_{\alpha}\) acting on the different regular points in the moduli space generically produces linear combinations of two vacua as shown below: \[\mathcal{T}_{\alpha}|v^{(u)}\rangle_{\theta}=|v^{(u)}\rangle_{\theta+\alpha}+| v^{(u)}\rangle_{\theta-\alpha}\, \tag{52}\] with the understanding that if \(\theta+\alpha>\pi\), then \(|v^{(u)}\rangle_{\theta+\alpha}\equiv|v^{(u)}\rangle_{2\pi-\theta-\alpha}\), and if \(\theta-\alpha<0\), then \(|v^{(u)}\rangle_{\theta-\alpha}\equiv|v^{(u)}\rangle_{\alpha-\theta}\). 
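As an aside, the flow \(\theta^{\prime}(\xi;\theta)\) written in closed form in (42) and its properties (41) and (44) are easy to verify numerically. The following Python sketch is purely illustrative and is not part of the argument; it only checks that the closed form integrates the differential equation, composes additively, fixes the endpoints \(\theta=0,\pi\), and maps \((0,\pi)\) onto itself monotonically.

```python
import numpy as np
from scipy.integrate import solve_ivp

def theta_prime(xi, theta):
    """Closed-form flow of eq. (42): theta'(xi; theta) = 2 arctan(e^xi tan(theta/2))."""
    return 2.0 * np.arctan(np.exp(xi) * np.tan(theta / 2.0))

theta0 = 0.3 * np.pi

# Eq. (41): the closed form solves d(theta')/d(xi) = sin(theta') with theta'(0) = theta0.
sol = solve_ivp(lambda xi, th: np.sin(th), (0.0, 2.5), [theta0],
                dense_output=True, rtol=1e-10, atol=1e-12)
xi_grid = np.linspace(0.0, 2.5, 11)
assert np.allclose(sol.sol(xi_grid)[0], theta_prime(xi_grid, theta0), atol=1e-6)

# Eq. (44): composition rule theta'(xi; theta'(eta; theta)) = theta'(xi + eta; theta).
xi, eta = 0.7, -1.3
assert np.isclose(theta_prime(xi, theta_prime(eta, theta0)),
                  theta_prime(xi + eta, theta0))

# The singular points theta = 0, pi are fixed for every xi, while the open
# interval (0, pi) is mapped onto itself monotonically.
for x in (-5.0, 0.0, 5.0):
    assert np.isclose(theta_prime(x, 0.0), 0.0)
    assert np.isclose(theta_prime(x, np.pi), np.pi)
thetas = np.linspace(1e-3, np.pi - 1e-3, 200)
assert np.all(np.diff(theta_prime(1.0, thetas)) > 0)

print("flow checks passed")
```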
We also have the special cases \[\begin{split}&\mathcal{T}_{\theta}|v^{(u)}\rangle_{\theta}=\sqrt{2}|v^{ (u)}\rangle_{0}+|v^{(u)}\rangle_{2\theta}\,\\ &\mathcal{T}_{\pi-\theta}|v^{(u)}\rangle_{\theta}=\sqrt{2}|v^{(u) }\rangle_{\pi}+|v^{(u)}\rangle_{2\theta-\pi}\end{split} \tag{53}\] (with the same understanding as above, and the special case \(\mathcal{T}_{\pi/2}|v^{(u)}\rangle_{\pi/2}=\sqrt{2}|v^{(u)}\rangle_{0}+\sqrt{ 2}|v^{(u)}\rangle_{\pi}\)), and finally the (invertible) cases (from now on for simplicity we identify \(\mathcal{T}_{0}\equiv\mathcal{U}_{0}\) and \(\mathcal{T}_{\pi}\equiv\mathcal{U}_{\pi}\)) \[\mathcal{T}_{0}|v^{(u)}\rangle_{\theta}=|v^{(u)}\rangle_{\theta},\qquad \mathcal{T}_{\pi}|v^{(u)}\rangle_{\theta}=|v^{(u)}\rangle_{\pi-\theta}. \tag{54}\] The above expressions completely define the action of \(\mathcal{T}_{\alpha}\) on \(|v^{(u)}\rangle_{\theta}\) (\(\theta\in(0,\pi)\)) for all \(\alpha\in\mathbb{R}\) because of the identities \(\mathcal{T}_{\alpha}=\mathcal{T}_{-\alpha}\) and \(\mathcal{T}_{\alpha}=\mathcal{T}_{\alpha+2\pi}\) which can be verified from the definition of these operators given in (19). From the above action of \(\mathcal{T}_{\theta}\) or \(\mathcal{T}_{\pi-\theta}\) on \(|v^{(u)}\rangle_{\theta}\), we can see that these operators allow one to make a transition from a regular point in the moduli space to a singular point. This may lead the reader to wonder whether, contrary to our previous analysis, the action of these operators on the single excitation states living on \(|v^{(u)}\rangle_{\theta}\) can produce single excitaton states living on the singular points \(|v^{(u)}\rangle_{0,\pi}\). This is indeed not the case as we show below for the action of \(\mathcal{T}_{\theta}\) on a single excitation state built upon \(|v^{(u)}\rangle_{\theta}\): \[\begin{split}\mathcal{T}_{\theta}\widetilde{a}^{(\theta)\dagger }_{\mathbf{k}}|v^{(u)}\rangle_{\theta}&=\frac{1}{\sqrt{2}}( \mathcal{U}^{\prime}_{\theta}+\mathcal{U}^{\prime}_{-\theta})a^{\prime \dagger}_{\mathbf{k}}(|\theta\rangle-|-\theta\rangle)\\ &=\frac{1}{\sqrt{2}}a^{\prime\dagger}_{\mathbf{k}}(|2\theta \rangle-|0\rangle+|0\rangle-|-2\theta\rangle)=\frac{1}{\sqrt{2}}a^{\prime \dagger}_{\mathbf{k}}(|2\theta\rangle-|-2\theta\rangle)\\ &=\text{sgn}(\pi-2\theta)\widetilde{a}^{(2\theta)\dagger}_{ \mathbf{k}}|v^{(u)}\rangle_{2\theta}\,\end{split} \tag{55}\] with a similar understanding as above, i.e. when \(2\theta>\pi\), then \(|v^{(u)}\rangle_{2\theta}\equiv|v^{(u)}\rangle_{2\pi-2\theta}\) and \(\widetilde{a}^{(2\theta)\dagger}_{\mathbf{k}}\equiv\widetilde{a}^{(2\pi-2 \theta)\dagger}_{\mathbf{k}}\). Note that the terms involving states built upon the singular points neatly cancel in the above expression. A similar argument can be presented for the action of \(\mathcal{T}_{\pi-\theta}\) on such a single excitation state. Note that in particular \(\mathcal{T}_{\pi/2}\widetilde{a}^{\dagger}_{\mathbf{k}}|v^{(u)}\rangle_{\pi/ 2}=0\). Let us now consider the action of \(\mathcal{T}_{\alpha}\) on the singular points \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\): \[\mathcal{T}_{\alpha}|v^{(u)}\rangle_{0}=\sqrt{2}|v^{(u)}\rangle_{\alpha},\ \mathcal{T}_{\alpha}|v^{(u)}\rangle_{\pi}=\sqrt{2}|v^{(u)}\rangle_{\pi-\alpha} \tag{56}\] for \(\alpha\in(0,\pi)\). We see that these operators acting on the vacua at the singular points in the moduli space produce the vacua at the regular points (up to a normalization factor). 
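The cancellation of the \(|0\rangle\) terms in the intermediate step of (55) is elementary bookkeeping, and can be mimicked with a toy computation that drops the spectator creation operator \(a^{\prime\dagger}_{\mathbf{k}}\) and represents \(\mathcal{U}^{\prime}_{\pm\theta}\) as pure shifts on formal kets \(|\theta\rangle\). The snippet below is only a schematic illustration of that single step, with all operator content and normalization conventions stripped away.

```python
from collections import defaultdict
import numpy as np

def shift(state, alpha):
    """Toy model of U'_alpha: shift every ket |theta> -> |theta + alpha>."""
    return {round(th + alpha, 12): c for th, c in state.items()}

def add(*states):
    """Formal sum of ket combinations, dropping terms with vanishing coefficients."""
    out = defaultdict(float)
    for s in states:
        for th, c in s.items():
            out[th] += c
    return {th: c for th, c in out.items() if abs(c) > 1e-12}

theta = 0.3 * np.pi
# The combination (|theta> - |-theta>)/sqrt(2) appearing in the middle line of (55).
twisted = {round(theta, 12): 1 / np.sqrt(2), round(-theta, 12): -1 / np.sqrt(2)}

# Acting with U'_theta + U'_{-theta}: the |0> contributions cancel and only
# |2 theta> and |-2 theta> survive, as displayed in (55).
result = add(shift(twisted, theta), shift(twisted, -theta))
assert 0.0 not in result and set(result) == {round(2 * theta, 12), round(-2 * theta, 12)}
print(result)
```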
The operator \(\mathcal{T}_{0}\) acts trivially on \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\), whereas the operator \(\mathcal{T}_{\pi}\) exchanges these two vacua. Let us note here that the invertible symmetry generated by \(\mathcal{T}_{\pi}\) leads to an isomorphism between the Hilbert spaces built upon \(|v^{(u)}\rangle_{0}\) and \(|v^{(u)}\rangle_{\pi}\). Unlike the operators \(\widetilde{U}(\xi)\) implementing translations between the regular points in the moduli space of vacua, the operators \(\mathcal{T}_{\alpha}\) are not generated by a charge. Nevertheless, one can perform an expansion of these operators near \(\alpha=0\) as follows: \[\mathcal{T}_{\alpha}=2\mathbb{I}+\alpha^{2}\int d^{d-1}x\int d^{d-1}y\ \rho_{2}(t,\mathbf{x},\mathbf{y})+O(\alpha^{4})\, \tag{57}\] where \[\rho_{2}(t,\mathbf{x},\mathbf{y})\equiv-g^{2}\partial_{t}\phi^{ \prime}(t,\mathbf{x})\partial_{t}\phi^{\prime}(t,\mathbf{y}). \tag{58}\] Just as the single excitation states on the regular points in the moduli space are obtained from the action of the charge density on those vacua, the double excitations on the singular points \(|v\rangle_{0,\pi}\) are obtained from \(\rho_{2}(0,\mathbf{x},\mathbf{y})\) as follows: \[a^{\prime\dagger}_{\mathbf{k}_{1}}a^{\prime\dagger}_{\mathbf{k}_ {2}}|v^{(u)}\rangle_{0,\pi}=\frac{2}{g}\frac{1}{\sqrt{|\mathbf{k}_{1}|| \mathbf{k}_{2}|}}\int d^{d-1}x\int d^{d-1}y\ e^{i\mathbf{k}_{1}.\mathbf{x}+ \mathbf{k}_{2}.\mathbf{y}}:\rho_{2}(0,\mathbf{x},\mathbf{y}):|v^{(u)}\rangle_{ 0,\pi}. \tag{59}\] Similarly, the other states with even number of excitations can be obtained from the integrands appearing in the higher order terms in the expansion (57). As a final comment in this section, let us mention that the states in the twisted sector with odd number of \(a^{\prime\dagger}_{\mathbf{k}}\)'s acting on \(|v^{(u)}\rangle_{0,\pi}\) can be obtained similarly from the integrands appearing in an expansion of the twisted operator \(\mathcal{Z}_{\alpha}\) defined in (20). ## 3 Conclusion and discussion We conclude with some remarks on generalizations of the model that we considered, and on further investigations. So far, we have considered only free field theories, i.e. theories of NG bosons in the strict IR limit. A natural question is whether the features we have discussed are also present when the spontaneous breaking happens in an interacting theory. We believe the answer is positive, simply because most of the arguments can be phrased in terms of the conserved currents before the gauging of the discrete symmetry. Let us consider for simplicity the case of a single NG mode, modded by reflection symmetry. Because of the broken shift symmetry, the low energy theory will be organized as a derivative expansion. For instance, the first non-trivial interaction is the quartic higher dimensional operator \((\partial_{\mu}\phi\partial^{\mu}\phi)^{2}\), which is automatically invariant under reflections \(\phi\rightarrow-\phi\). The current on the other hand, is odd under reflections. The vacuum structure is exactly the same, by definition. Then, using the current, one can build the operators \(\mathcal{U}_{\alpha}\) in the ungauged theory, and the operators \(\mathcal{T}_{\alpha}\) and \(\widetilde{U}_{\alpha}\) after gauging.19 The single massless excitations on the vacua at the regular points in the orbifold can again be extracted from the action of the charge density associated with \(\widetilde{U}_{\alpha}\)[38]. 
The absence of these states in the Hilbert spaces built upon the singular points also follows from the vanishing action of the charge density on the respective vacua. Footnote 19: In an interacting theory, the expansion (14) of the twisted field \(\phi^{\prime}\) is no longer valid. Nevertheless, one can still define a zero momentum mode \(\overline{\phi}^{\prime}\) by taking an average of this field over a spatial slice. Using this one can construct the operator \(\sin(\overline{\phi}^{\prime})\) which enters in the definition of \(\widetilde{U}_{\alpha}\). It would be interesting to explore how our analysis may be extended to theories with multiple scalars. In particular, it would be nice to generalize to cases where the gauged symmetry forms a finite non-abelian group such as \(S_{N}\). A subtely in this case is that the dual quantum symmetry which arises due to the gauging is non-invertible [39]. This may introduce new complications in defining the twisted operators that played an important role in our analysis. We would like to address these issues in the future. Let us finally return to our model once more and comment on a complementary way to diagnose whether the vacuum is on a singular or a regular point of the moduli space, i.e. whether the \(\mathbb{Z}_{2}\) gauge symmetry is preserved or Higgsed, respectively. Now, we recall the argument that ties the breaking or not of the quantum symmetry to whether the original symmetry was broken or not before gauging [5; 40; 41; 35]. This is because order parameters for the original symmetry in the ungauged theory become disorder parameters (twisted sectors) for the quantum symmetry upon gauging. Moreover, for a given symmetry, a non-vanishing order parameter implies a vanishing disorder parameter, and vice-versa. In other words, we can probe whether a symmetry is broken or not by the vacuum expectation value of its disorder parameter. Then, for the quantum symmetry of our orbifold model, the correlator (33) would imply that the \(\mathbb{Z}_{2}\)\((d-2)\)-form symmetry is preserved in the vacua \(|v^{(u)}\rangle_{\theta}\) for \(\theta\) in \((0,\pi)\). Conversely, at \(\theta=0,\pi\) the symmetry might be broken since the VEV of the disorder operator vanishes. Unfortunately, even if these arguments can be safely applied to the study of massive phases, it is not clear whether they hold in presence of gapless excitations. Therefore, it would be interesting to directly probe the status of the quantum symmetry by evaluating the vacuum expectation value of its order parameter, which is a \((d-2)\)-dimensional surface, namely a "reflection vortex" for \(\phi\). One expects to find an area law in the smooth region of the moduli space, probably after the addition of higher orders in the effective action. In addition, it would be very nice to establish a strong connection between the realization of this emergent symmetry and the structure of the Hilbert space analysed in this work. An alternative path to reach the same conclusion may be the following. Let us first note that the invertible part of the global symmetry in this theory seems to form an interesting higher group structure. 
Indeed, by a simple generalization of the arguments presented for the orbifold theory in \(d=2\)[11], it is easy to check that the ungauged theory contains an anomaly involving all three global symmetries \[S_{anomaly}\supset\pi\int_{d+1}C^{(1)}\cup A^{(1)}\cup B^{(d-1)}, \tag{34}\] where \(C^{(1)}\), \(A^{(1)}\), \(B^{(d-1)}\) are the background fields (normalized as integral co-cycles) for the \(\mathbb{Z}_{2}\) reflection symmetry and for the \(\mathbb{Z}_{2}\) restrictions of both the \(U(1)\) shift symmetry and the \((d-2)\)-form vortex symmetry respectively. We are restricting to the particular subgroups of the continuous symmetries that remain invertible after gauging. One can then verify that, upon gauging the reflection symmetry, _i.e._ making \(C^{(1)}\to c^{(1)}\) dynamical, gauge invariance forces the following correlation between gauge bundles \[\delta\hat{B}^{(d-1)}=A^{(1)}\cup B^{(d-1)}\, \tag{35}\] where \(\hat{B}^{(d-1)}\) is the background field for the quantum symmetry and \(\delta\) denotes the co-boundary operator. Indeed, the correlation (35) is the signature of a higher group structure [42; 43]. Such a structure usually leads to interesting hierarchies on the symmetry breaking scales corresponding to the global symmetries involved. In physical terms, such a constraint applied for the case at hand would imply that a phase preserving both \(\mathbb{Z}_{2}^{(0)}\) and \(\mathbb{Z}_{2}^{(d-2)}\) necessarily confines the \(\hat{\mathbb{Z}}_{2}^{(d-2)}\) reflection symmetry vortices. Now, consider the vacuum at \(\theta=\pi/2\). From (2.54), we see that in this vacuum the \(\mathbb{Z}_{2}^{(0)}\) generated by \(\mathcal{T}_{\pi}\) is unbroken. Furthermore, the generalization of the Coleman theorem states that a continuous \((d-2)\)-form symmetry cannot be broken. Thus the \(U(1)^{(d-2)}\) vortex symmetry is unbroken in the ungauged model. If we assume that gauging the reflection symmetry does not change that status, then it would follow that its \(\mathbb{Z}_{2}^{(d-2)}\) subgroup that survives in the gauged model is unbroken as well. If this holds, then we would deduce that by virtue of the higher group constraint, the \(\hat{\mathbb{Z}}_{2}^{(d-2)}\) reflection symmetry is also unbroken in the \(\theta=\pi/2\) vacuum. Recalling the isomorphism between Hilbert spaces discussed in this paper, it would then seem reasonable to extend this conclusion to all regular points of the orbifold. We hope to come back to these problems in the future. ## Acknowledgements We thank Andrea Antinucci, Giovanni Galati, Inaki Garcia-Etxebarria, Diego Hofman, Zohar Komargodski, Ho Tat Lam, Giovanni Rizi, Luigi Tizzano, Stathis Vitouladitis and Sasha Zhiboedov for helpful discussions. J.A.D. and R.A. are respectively a Postdoctoral Researcher and a Research Director of the F.R.S.-FNRS (Belgium). S.C. is partially supported by funds from the Solvay Family. The research of J.A.D., R.A. and S.C. is supported by IISN-Belgium (convention 4.4503.15) and through an ARC advanced project.
2309.14095
Velocity-resolved high-J CO emission from massive star-forming clumps
(Abridged) Context. Massive star formation is associated with energetic processes, which result in significant gas cooling via far-infrared (IR) lines. Velocity-resolved observations can constrain the kinematics of the gas, allowing the identification of the physical mechanisms responsible for gas heating. Aims. Our aim is to quantify far-infrared CO line emission toward high-mass star-forming regions, identify the high-velocity gas component associated with outflows, and estimate the physical conditions required for the excitation of the observed lines. Methods. Velocity-resolved SOFIA/GREAT spectra of 13 high-mass star forming clumps of various luminosities and evolutionary stages are studied using CO 11-10 and 16-15 lines. Results. All targets show strong high-J CO emission in the far-IR, characterized by broad line wings associated with outflows, thereby significantly increasing the sample of sources with velocity-resolved high-J CO spectra. The contribution of the emission in the line wings does not correlate with the envelope mass or evolutionary stage. Gas rotational temperatures cover a narrow range of 120-220 K for the line wings. The non-LTE radiative transfer models indicate gas densities of 1e5-1e7 cm-3 and N(CO) of 1e17- 1e18 cm-2, similar to physical conditions in deeply-embedded low- and high-mass protostars. The velocity-integrated CO line fluxes correlate with the bolometric luminosity over 7 orders of magnitude including data on the low-mass protostars, suggesting similar processes are responsible for the high-J CO excitation over a significant range of physical scales. Conclusions. Velocity-resolved line profiles allow the detection of outflows toward massive star-forming clumps spanning a broad range of evolutionary stages. The lack of clear evolutionary trends suggest that mass accretion and ejection prevail during the entire lifetime of star-forming clumps.
Hoang Thanh Dat, Agata Karska, Min Young Lee, Friedrich Wyrowski, Le Ngoc Tram, Aiyuan Y. Yang, Karl M. Menten
2023-09-25T12:42:32Z
http://arxiv.org/abs/2309.14095v1
# Velocity-resolved high-\(J\) CO emission from massive star-forming clumps ###### Abstract Context: Massive star formation is associated with energetic processes, which result in significant gas cooling via far-infrared (IR) lines. Velocity-resolved observations can constrain the kinematics of the gas, allowing the identification of the physical mechanisms responsible for gas heating. Aims: Our aim is to quantify far-infrared CO line emission toward high-mass star-forming regions, identify the high-velocity gas component associated with outflows, and estimate the physical conditions required for the excitation of the observed lines. Methods: Velocity-resolved SOFIA/GREAT spectra of 13 high-mass star-forming clumps of various luminosities and evolutionary stages are studied in highly-excited rotational lines of CO. For most targets, the spectra are from frequency intervals covering the CO 11-10 and 16-15 lines. Toward two sources, the CO 13-12 line was also observed with SOFIA/4GREAT. Angular resolutions at the line frequencies range from 14\({}^{\prime\prime}\) to 20\({}^{\prime\prime}\), corresponding to spatial scales of \(\sim\) 0.1-0.8 pc. Radiative transfer models are used to determine the physical conditions giving rise to the emission in the line wings. Results: All targets in our sample show strong high-\(J\) CO emission in the far-IR, characterized by broad line wings associated with outflows, thereby significantly increasing the sample of high-mass objects with velocity-resolved high-\(J\) CO spectra. Twelve sources show emission in the line wings of the CO 11-10 line (\(E_{\rm u}/k_{\rm B}\)=365 K), and 8 sources in the CO 16-15 line (\(E_{\rm u}/k_{\rm B}\)=752 K). The contribution of the emission in the line wings to the total emission ranges from \(\sim\)28% to 76%, and does not correlate with the envelope mass or evolutionary stage. Gas excitation temperatures cover a narrow range of 120-220 K for the line wings, and 110-200 K for the velocity-integrated line emission, assuming Local Thermodynamic Equilibrium (LTE). For the two additional sources with CO 13-12 line (\(E_{\rm u}/k_{\rm B}\)=503 K) data, wing emission rotational temperatures of \(\sim\)130 K and 165 K are obtained using Boltzmann diagrams. The corresponding non-LTE radiative transfer models indicate gas densities of 10\({}^{5}\)-10\({}^{7}\) cm\({}^{-3}\) and CO column densities of 10\({}^{17}\)-10\({}^{18}\) cm\({}^{-2}\) in the line wings, similar to physical conditions in deeply-embedded low- and high-mass protostars. The velocity-integrated CO line fluxes correlate with the bolometric luminosity over 7 orders of magnitude, including data on low-mass protostars from the literature. This suggests that similar processes are responsible for the high-\(J\) CO excitation over a significant range of physical scales. Conclusions: Velocity-resolved line profiles allow the detection of outflows toward massive star-forming clumps spanning a broad range of evolutionary stages. The lack of clear evolutionary trends suggests that mass accretion and ejection prevail during the entire lifetime of star-forming clumps.

## 1 Introduction

High-mass stars have a significant impact on their environments and on galaxy evolution globally through their ionising radiation, stellar winds, and their deaths in supernova explosions (Zinnecker & Yorke, 2007).
Already during the earliest stages of their formation, massive protostars might inject significant amounts of energy and momentum into the interstellar medium (ISM) in the form of outflows, capable of disrupting clumps and cores (Beuther et al., 2002; Bally, 2016). Outflows, a ubiquitous phenomenon in both low and high mass star-forming regions, play an essential role in transporting angular-momentum and regulating the star-forming process across multiple spatial scales (Bally & Lada, 1983; Evans, 1999). Both, the dissipation of the envelope material and mass loss via the outflows lower the core-to-star formation efficiency (Krumholz et al., 2014; Offner & Chaban, 2017). At cluster/clump scales, outflows drive turbulence that provides additional support against gravitational collapse (Frank et al., 2014). Outflows are typically detected using low-\(J\) (\(J\lesssim\)5) velocity-resolved rotational lines of carbon monoxide (CO), which is the second most abundant molecule in the interstellar medium (CO/H\({}_{2}=1.2\times 10^{-4}\), Frerking et al. 1982). The low-lying rotational levels of CO are easily collisionally-excited even at low densities and can readily be observed at millimeter wavelengths. These lines constitute a useful diagnostic of the gas kinetic temperature of outflows (Bally and Lada 1983; Yildiz et al. 2015). An extensive search for outflows traced by such low-\(J\) CO lines toward a total of 2052 massive star-forming clumps that were identified in the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL, Schuller et al. 2009), provided an overall outflow detection rate of 58% (Yang et al. 2018a, 2022). Observations of high-\(J\) CO (\(J\gtrsim 10\)) lines provide an opportunity to study denser and warmer parts of star-forming clumps and the outflows that arise in them. Recent surveys with the _Herschel_ Space Observatory (Pilbratt et al. 2010) found that CO lines account for the bulk far-infrared (IR) gas cooling in both low- and high-mass star-forming regions (Karska et al. 2013, 2014, 2018; van Dishoeck et al. 2021). The velocity-resolved profiles of high-\(J\) CO toward low-mass protostars revealed a significant contribution of high-velocity (\(v\sim 20\)-\(30\) km s\({}^{-1}\)) gas to the total far-IR line emission (San Jose-Garcia et al. 2013; Yildiz et al. 2013), and similarity to the H\({}_{2}\)O emission likely arising from the same gas (San Jose-Garcia et al. 2016; Kristensen et al. 2017). Single-pointing observations toward high-mass sources have also revealed broad, outflow wings in high-\(J\) CO line profiles, but have been limited to just a few sources: W3 IRS5 (San Jose-Garcia et al. 2013), AFGL 2591 (Kazmierczak-Barthel et al. 2014), Orion KL, Orion S, Sgr B\({}^{*}\), and W49N (Indriolo et al. 2017). Complementary observations have been obtained with the German REceiver for Astronomy at Terahertz frequencies1 (GREAT, Rissacher et al. 2018) onboard the Stratospheric Observatory For Infrared Astronomy (SOFIA, Young et al. 2012). High-resolution spectroscopy of far-IR CO lines from an intermediate-mass protostar Cep E revealed an extremely high-velocity gas (EHV; \(v\) up to \(\sim 140\) km s\({}^{-1}\)) tracing shocks associated with the jet and intermediate-to-high velocity gas (\(v\) from 50 to 100 km s\({}^{-1}\)) associated with outflow cavities and a bow shock (Gomez-Ruiz et al. 2012; Lefloch et al. 2015; Gusdorf et al. 2017). 
The line profiles of CO \(16-15\) toward two high-mass sources, however, lacked the EHV component and revealed broad line wings extending up to \(v\sim 50\) km s\({}^{-1}\)(Leurini et al. 2015; Gusdorf et al. 2016). Other surveys, conducted with the PACS and SPIRE instruments aboard the _Herschel_ space telescope (Pilbratt et al. 2010), lacked the high spectral resolution necessary to disentangle the envelope and outflow emission in the spectra (Karska et al. 2014; Goicoechea et al. 2013, 2015). Footnote 1: GREAT is a development by the MPI für Radioastronomie and the KOSMA/Universität zu Köln, in cooperation with the MPI für Sonnensystemforschung and the DLR Institut für Planetenforschung. In this paper, we use SOFIA/GREAT to quantify high-\(J\) CO emission toward 13 high-mass star forming clumps with the aim to isolate the contribution from the outflows and estimate excitation conditions associated with the line wing emission. We also examine how the high-\(J\) CO emission varies as a function of clump properties and evolutionary stages. The paper is organized as follows: Section 2 describes the source sample, observations with SOFIA, and the complementary CO observations with the APEX telescope and _Herschel_. In Section 3, we present line profiles of high-\(J\) CO transitions (Section 3.1) and decompose the emission that belongs to the line wings (Section 3.2). In addition, we study the correlations of velocity-integrated emission with source properties, and those of the fraction of wing emission with source evolutionary stages (Section 3.3). Subsequently, we analyse the excitation of high-\(J\) CO lines using LTE and non-LTE approaches (Sections 3.4 and 3.5). Section 4 consists of the discussion of our results in the context of previous studies and Section 5 presents a summary and our conclusions. ## 2 Observations ### Sample All sources have been selected from the ATLASGAL survey covering 420 deg\({}^{2}\) of the inner Galactic plane in the 870 \(\mu\)m dust continuum (Urquhart et al. 2014; Konig et al. 2017). The latest version of the ATLASGAL source catalog contains 5007 clumps spanning a wide range of masses (\(M_{\rm clump}\)) and luminosities (\(L_{\rm bol}\)), and divided into four evolutionary stages - quiescent, protostellar, young stellar objects, and H II regions (H II), see Urquhart et al. (2022). For this work, we originally selected a representative sample of 20 sources grouped within 4 star-forming regions in the Galactic plane. Among them, 13 sources within 3 regions were successfully observed with SOFIA. Table 1 shows the final list of sources with the overview of their properties and evolutionary stages. The sample consists of 3 protostellar (24d), 7 young stellar object (IRb), and 3 H II regions (HII), with \(L_{\rm bol}\) from \(1.6\times 10^{3}\) to \(4.6\times 10^{5}\) L\({}_{\odot}\) and \(M_{\rm clump}\) from \(1.6\times 10^{2}\) to \(2.3\times 10^{3}\) M\({}_{\odot}\)(Konig et al. 2017; Urquhart et al. 2019, 2022). ### SOFIA observations and data reduction Observations of the CO 11-10 and 16-15 lines were collected using the SOFIA/GREAT (Heyminck et al. 2012; Risacher et al. 2016) and upGREAT receivers (Risacher et al. 2018). Our program "Probing high-\(J\) CO through the evolution of high-mass star forming clumps" (project IDs 02_0102 & 03_0103; PI: F. Wyrowski) run during Cycle 2 (2014 May) and Cycle 3 (2016 May). GREAT was a high resolution, dual-color spectrometer (\(R\geq 10^{7}\)) initially designed for single-beam observations. 
In 2014, we used its L1 and L2 channels to obtain simultaneous coverage of bands in the 1.25-1.52 THz and 1.80-1.90 THz windows, respectively. In 2016, we combined the GREAT's L1 channel with the upGREAT Low Frequency Array (LFA) which covered the 1.83-2.07 THz window in two polarizations. The 7-pixel hexagonal setup of the LFA provided spatial information about the line emission whereas each pixel had an FWHM beam size of \(14.8^{\prime\prime}\) on the sky2. The corresponding beam size in the L1 channel was \(19.9^{\prime\prime}\) in 2014 and \(19.1^{\prime\prime}\) in 2016. The higher frequency L2 channel provided a FoV of \(14.1^{\prime\prime}\). The adopted main beam efficiencies (\(\eta_{\rm MB}\)) are 0.7 (in 2014) and 0.66 (in 2016) for the L1 channel, 0.65 for L2 channel, and 0.65 for the central spaxel of LFA. Data are processed and reduced by SOFIA/GREAT staff and released at product level 3 where first order baselines have been subtracted. Most of the data are ready to use, except for CO 16-15 spectra of G13.66\(-\)0.6 where an additional third order baseline was subtracted. Spectral resolutions are presented in Table 2. To perform the analyses without any spectral resolution bias, all spectra were smoothed to a common resolution of \(1.0\) km s\({}^{-1}\). Footnote 2: Observer’s Handbook for Cycle 3: [https://www.sofia.usra.edu/sites/default/files/ObsHandbook-Cy3.pdf](https://www.sofia.usra.edu/sites/default/files/ObsHandbook-Cy3.pdf) For G12.81\(-\)0.2 and G351.25\(+\)0.7, additional line observations were collected using the SOFIA/4GREAT receiver (Duran et al. 2021). The observations were done in 2019 under project "high-\(J\) CO observations towards high-mass star forming clumps" (project ID 83\({}_{-}\)0711; PI: H. T. Dat). 4GREAT was a single-beam system with four sub-receivers (4G-1 to 4G-4) and could observe four spectral windows simultaneously. The 4G-3 and 4G-4 modules, which cover the 1.24\(-\)1.52 THz and 2.49\(-\)2.69 THz windows, were tuned to map the CO 13-12 and 22-21 transitions. The maps were scanned in 5 \(\times\) 5 grids with centers 6\({}^{\prime\prime}\) away from each other. The typical beam sizes for the 4G-3 and 4G-4 modules are 20\({}^{\prime\prime}\) and 10.5\({}^{\prime\prime}\), respectively (Duran et al., 2021). Main beam efficiencies are 0.7 for 4G-3 and 0.57 for 4G-4. Observations of the CO 22-21 line are affected by instrumental standing waves that make it difficult to confidently detect line emission. The noise levels of averaged spectra range from 0.40 K to 0.88 K at \(\Delta v\) of 0.6 km s\({}^{-1}\). Data reduction for the CO 13-12 line was performed with the CLASS program, which is part of the GILDAS3 software developed by the Institut de Radioastronomie Millimetrique (IRAM). A second order baseline was subtracted, and the spectra were also smoothed to an adequate 1.0 km s\({}^{-1}\). For this study, we extracted averaged spectra within a beam of 20\({}^{\prime\prime}\). Footnote 3: [https://www.iram.fr/IRAMFR/GILDAS/](https://www.iram.fr/IRAMFR/GILDAS/) ### Additional observations and ancillary data Additional single pointing observations of \({}^{13}\)CO 10-9 and C\({}^{18}\)O 9-8 were conducted with the Herschel-Heterodyne Instrument for the Far-Infrared (HIFI, de Graauw et al., 2010) onboard of the _Herschel_ space telescope. Observations for 10 sources (Appendix D) were obtained as part of project 'A Water survey of massive star forming clumps in the inner Galaxy' (project ID OT2\({}_{-}\)fwryrowsk\({}_{-}\)3, PI: F. Wyrovski). 
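The reduction steps described above (conversion to the \(T_{\rm MB}\) scale with the adopted efficiencies, low-order baseline subtraction, and smoothing to a common 1.0 km s\({}^{-1}\) grid) were carried out with the observatory pipelines and the CLASS/GILDAS software. Purely as an illustration of the operations involved, a minimal NumPy sketch applied to a synthetic spectrum could look as follows; the efficiency value, polynomial order, line window, and all numbers are placeholders, not the exact settings or data used in this work.

```python
import numpy as np

def subtract_baseline(velocity, t_mb, line_window, order=3):
    """Fit a low-order polynomial to the line-free channels and subtract it."""
    line_free = (velocity < line_window[0]) | (velocity > line_window[1])
    coeffs = np.polyfit(velocity[line_free], t_mb[line_free], order)
    return t_mb - np.polyval(coeffs, velocity)

def rebin_spectrum(velocity, t_mb, dv_new=1.0):
    """Average the spectrum onto a coarser, regular velocity grid (km/s)."""
    edges = np.arange(velocity.min(), velocity.max() + dv_new, dv_new)
    idx = np.digitize(velocity, edges)
    t_new = np.array([t_mb[idx == i].mean() for i in range(1, len(edges))])
    return 0.5 * (edges[1:] + edges[:-1]), t_new

# Synthetic antenna-temperature spectrum: a Gaussian line on a curved baseline.
rng = np.random.default_rng(1)
v = np.arange(-100.0, 100.0, 0.25)                  # km/s, fine native grid
t_a = 4.0 * np.exp(-0.5 * ((v + 3.8) / 4.0) ** 2)   # toy line near V_lsr ~ -3.8 km/s
t_a += 0.02 * v + 1.0e-4 * v**2 + rng.normal(0.0, 0.3, v.size)

t_mb = t_a / 0.66                                   # main-beam efficiency correction, T_MB = T_A*/eta_MB
t_clean = subtract_baseline(v, t_mb, line_window=(-40.0, 30.0), order=3)
v_1kms, t_1kms = rebin_spectrum(v, t_clean, dv_new=1.0)
```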
In addition, archival data for G351.44\(+\)0.7 using _Herschel_/HIFI were taken from the "Water in star forming region with Herschel" program (San Jose-Garcia et al., 2013; van Dishoeck et al., 2021). Data from the H and V polarizations of the wide-band spectrometer were averaged. Baselines lower than third order were also subtracted. The spectra were converted to a \(T_{\rm MB}\) scale using a forward efficiency \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline No. & Source & ATLASGAL name\({}^{a}\) & RA & Dec & \(V_{\rm{H}^{a}}\) & \(D^{c}\) & \(L_{\rm{bol}^{a}}\) & \(M_{\rm{clump}^{a}}\) & \(D_{\rm{OC}}\) & Type\({}^{d}\) \\ & & (J2000) & (J2000) & (km s\({}^{-1}\)) & (kpc) & (\(L_{\odot}\)) & (\(M_{\odot}\)) & (kpc) & \\ \hline \hline 1 & G351.16\(+\)0.7 & AGAL351.161\(+\)00.697 & 17:19:56.69 & -35:57:53.0 & -6.0 & 1.3 & \(8.8\times 10^{3}\) & \(1.2\times 10^{3}\) & 6.7 & IRb \\ 2 & G351.25\(+\)0.7 & AGAL351.244\(+\)00.669 & 17:20:18.86 & -35:54:42.5 & -2.8 & 1.3 & \(4.9\times 10^{4}\) & \(3.7\times 10^{2}\) & 6.7 & IRb \\ 3 & G351.44\(+\)0.7 & AGAL351.444\(+\)00.659 & 17:20:55.20 & -35:45:08.0 & -3.8 & 1.3 & \(2.0\times 10^{4}\) & \(1.0\times 10^{3}\) & 6.7 & 24d \\ 4 & G351.58\(-\)0.4 & AGAL351.581\(-\)00.352 & 17:25:25.03 & -36:12:45.4 & -95.6 & 8.0 & \(4.6\times 10^{5}\) & \(2.3\times 10^{3}\) & 2.0 & IRb \\ 5 & G351.77\(-\)0.5 & AGAL351.774\(-\)00.537 & 17:26:42.54 & -36:09:20.1 & -2.8 & 1.3 & \(3.7\times 10^{4}\) & \(3.3\times 10^{2}\) & 7.8 & IRb \\ \hline 6 & G12.81\(-\)0.2 & AGAL012.804\(-\)00.199 & 18:14:13.54 & -17:55:32.0 & 34.6 & 2.6 & \(2.5\times 10^{3}\) & \(1.9\times 10^{3}\) & 6.2 & HII \\ 7 & G14.19\(-\)0.2 & AGAL014.194\(-\)00.194 & 18:16:58.63 & -16:42:16.4 & 39.7 & 3.1 & \(3.7\times 10^{4}\) & \(5.1\times 10^{4}\) & 4.8 & 24d \\ 8 & G13.66\(-\)0.6 & AGAL013.658\(-\)00.599 & 18:17:24.09 & -17:22:10.3 & 48.5 & 4.5 & \(2.4\times 10^{4}\) & \(2.7\times 10^{2}\) & 4.3 & IRb \\ 9 & G14.63\(-\)0.6 & AGAL014.632\(-\)00.577 & 18:19:14.65 & -16:30:02.7 & 18.5 & 1.5 & \(1.6\times 10^{3}\) & \(1.6\times 10^{2}\) & 6.3 & 24d \\ \hline 10 & G34.41\(+\)0.2 & AGAL034.411\(+\)00.234 & 18:53:18.13 & +01:25:23.7 & 57.9 & 2.9 & \(3.1\times 10^{3}\) & \(4.4\times 10^{3}\) & 7.2 & IRb \\ 11 & G34.26\(+\)0.15 & AGAL034.258\(+\)00.154 & 18:53:18.51 & +01:14:57.6 & 58.0 & 2.9 & \(6.1\times 10^{4}\) & \(1.7\times 10^{2}\) & 6.9 & HII \\ 12 & G34.40\(-\)0.2 & AGAL034.401\(+\)00.226 & 18:53:18.63 & +01:24:40.4 & 57.1 & 2.9 & \(3.2\times 10^{3}\) & \(7.9\times 10^{2}\) & 7.2 & HII \\ 13 & G35.20\(-\)0.7 & AGAL035.197\(-\)00.742 & 18:58:12.94 & +01:40:40.6 & 33.5 & 2.2 & \(2.4\times 10^{4}\) & \(4.6\times 10^{2}\) & 6.8 & IRb \\ \hline \hline \end{tabular} 1 [FOOTOTNOTE:1]Footnote 1: Source classification using the criteria from Konig et al. (2017), and refers to IR-bright sources (IRb), IR-weak sources (24d), and HII regions (HII).[ENDFOOTNOTE] \end{table} Table 2: Overview of the observations \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Molecule & Trans. & Freq. 
& \(E_{u}/k_{\rm B}\) & \(A_{u}\) & \(g_{u}\) & Receiver & Beam & \(\Delta v\) \\ & \(J_{u}-J_{I}\) & (GHz) & (K) & (s\({}^{-1}\)) & & & (\({}^{a}\)) & (km s\({}^{-1}\)) \\ \hline CO & 6–5 & 691.5 & 116.16 & 2.1(-5) & 13 & APEX/CHAMP\({}^{+}\) & 9 & 0.318 \\ CO & 11–10 & 1267.0 & 364.97 & 1.3(-4) & 23 & SOFIA/GREAT & 20 & (0.361, 0.578) \\ CO & 13–12 & 1496.9 & 503.13 & 2.2(-4) & 27 & SOFIA/GREAT & 20 & 0.978 \\ CO & 16–15 & 1841.4 & 751.27 & 4.1(-4) & 33 & SOFIA/GREAT & 14 & (0.248, 0.795) \\ \hline \({}^{13}\)CO & 6–5 & 661.1 & 111.05 & 1.9(-5) & 13 & APEX/CHAMP\({}^{+}\) & 9 & 0.32 \\ \({}^{13}\)CO & 10–9 & 1101.4 & 290.79 & 8.8(-5) & 21 & Herschel/HIFI & 19 & 0.136 \\ \hline C\({}^{18}\)O & 6–5 & 658.6 & 110.63 & 1.9(-5) & 13 of 0.96 and a main beam efficiency of 0.64 for the \({}^{13}\)CO 10-9 line and 0.74 for the C\({}^{18}\)O 9-8 lines, respectively. Finally, the spectra were smoothed to 1.0 km s\({}^{-1}\). Angular and original spectral resolutions are listed in Table. 2. We also use high spectral resolution \({}^{13}\)CO 6-5 and C\({}^{18}\)O 6-5 data from Dat et al. (in preparation) and CO 6-5 from Navarete et al. (2019). All three transitions were observed with the CHAMP\({}^{+}\) receiver (Kasemann et al. 2006; Gusten et al. 2008) at the Atacama Pathfinder Experiment 12 m submillimeter telescope (APEX) (Gusten et al. 2006). The on-the-fly (OTF) scans resulted in datacubes of \(80\arcsec\times 80\arcsec\) with angular resolution of \(\sim\)9\(\arcsec\). For comparisons with the higher-\(J\) CO observations, averaged spectra with an effective beam size of 20\(\arcsec\) around the Figure 1: SOFIA line profiles of CO \(J\)=11–10 (black), 16–15 (blue), 13–12 (bottom right) transitions. All spectra are resampled to a common spectral resolution of 1.0 km s\({}^{-1}\). Black vertical lines show values of \(V_{\rm lsr}\) (see Table 1). Green horizontal lines show baselines. sources were extracted and then smoothed to 1.0 km s\({}^{-1}\) for all three lines. ## 3 Results and analysis ### Line detections In this section, we examine detection rates of high-\(J\) CO lines toward high-mass clumps from ATLASGAL, present their line profiles, and quantify the correlations of integrated intensities with the sources' properties. Figure 1 shows the spectra of high-\(J\) CO lines toward the central position of high-mass clumps from our sample (see also Table 1). The pattern of emission is generally compact, based on additional observations offset from the clump centers toward four sources, see Appendix A. The CO 11-10 line is detected at 3\(\sigma\) or higher levels toward all sources, which span a broad range of evolutionary stages and have diverse properties. The CO 16-15 line, however, is firmly detected toward 10 out of 13 clumps; G13.66\(-\)0.6 and G34.41\(+\)0.2 show only 2\(\sigma\) peaks and G14.19\(-\)0.2 shows a non-detection. In addition, the CO 13-12 line was successfully observed and detected toward G12.81\(-\)0.2 and G351.25\(+\)0.7. In Appendix B, the peak and integrated intensity of the detected lines are given. The line profiles of clump central positions exhibit a broad line wing emission, suggesting the presence of outflows (Fig. 1). The median full width at zero power4 (FWZP) of 45 km s\({}^{-1}\) is measured for the CO 11-10 line and 33 km s\({}^{-1}\) for the CO 16-15 line (Appendix C). The broadest profile, with FWZP of 165 km s\({}^{-1}\), is seen toward G351.77\(-\)0.5 where high-velocity gas has been detected in CO 2 - 1 and 6 - 5 lines (Leurini et al. 2009). 
However, multiple pointing observations show a lack of the EHV gas component toward the central source; it is only detected at offset outflow positions (Leurini et al. 2009), consistent with the analysis of the outflow emission from the intermediate-mass protostar Cep E (Gomez-Ruiz et al. 2012; Lefloch et al. 2015; Gusdorf et al. 2017). The lack of clear evidence of EHV gas toward our sources may also result from beam dilution, and could only be addressed using high-angular resolution observations (e.g., Cheng et al. 2019). Footnote 4: The FWZP is calculated following a procedure described in San José-García et al. (2016). We first resample the spectra to 3 km s\({}^{-1}\), and subsequently check the velocity of the channel where the line emission drops below 1\(\sigma\).

Figure 2: SOFIA/GREAT line profiles of the CO 11–10 and 16–15 lines as well as the CO 6–5 lines. Source velocities (\(V_{\rm lsr}\)) are shown with vertical lines. The lines are smoothed to a common bin of 1.0 km s\({}^{-1}\).

The velocity ranges of the high-\(J\) CO lines resemble those detected in CO 6-5 toward the same sources (Fig. 2). Self-absorption features are seen in the CO 11-10 line profiles toward G351.25\(+\)0.7 and G351.77\(-\)0.5. In addition, G12.81\(-\)0.2 and G35.20\(-\)0.7 have tilted peaks which could be an indication of self-absorption. The latter source also shows a sign of self-absorption in the CO 16-15 line. Other profile asymmetries, in particular the triangular blue-wing shape of G351.16\(+\)0.7, resemble those of high-\(J\) CO emission from a photodissociation region in M17 SW (Perez-Beaupuits et al. 2015). For G34.26\(+\)0.15, an additional narrow peak is seen at \(\sim\)38 km s\({}^{-1}\) in both the CO 11-10 and CO 16-15 spectra. This feature is an artefact due to over-corrected mesospheric CO, which shows the limitations of the adopted atmospheric model (see also Gusdorf et al. 2016). For G34.40\(-\)0.2, the line profiles of the high-\(J\) CO lines seem to be shifted by \(\sim\)1 km s\({}^{-1}\) from the source velocity obtained from the C\({}^{18}\)O 9-8 line (Fig. 1). The uncertainty of the Gaussian fit to the C\({}^{18}\)O line is smaller than 0.25 km s\({}^{-1}\), and thus cannot account for the observed shift, suggesting that it may be caused by self-absorption. Small velocity shifts are also present in the line profiles of other objects, e.g. G34.26\(+\)0.15 and G351.25\(+\)0.7.

We calculate CO line luminosities, \(L_{\rm CO}\), as \(4\pi D^{2}F_{\lambda}^{\rm CO}\), where \(D\) is the distance to the source (Table 1) and \(F_{\lambda}^{\rm CO}\) is the velocity-integrated flux in W m\({}^{-2}\). The flux conversion from K km s\({}^{-1}\) to W m\({}^{-2}\) follows Equation 1 in Indriolo et al. (2017). Figure 3 shows the correlations between \(L_{\rm CO}\) and source properties (Table 1). The significance of the correlations is quantified by the Pearson correlation coefficient \(r\), which depends also on the number of data points \(N\) (Marseille et al. 2010). Both CO 11-10 and CO 16-15 line luminosities show weak correlations (\(r\) of 0.63\(-\)0.66) with the clump mass, \(M_{\rm clump}\), tracing primarily a cold gas and dust reservoir (Konig et al. 2017). Stronger correlations (\(r\) of 0.85\(-\)0.95) are found for the high-\(J\) CO line luminosities and clump bolometric luminosities, \(L_{\rm bol}\), in line with previous studies using CO 10-9 (see Section 4).
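For orientation, the conversion from a velocity-integrated main-beam temperature to a line flux and luminosity can be sketched as follows. This is only an illustrative Rayleigh–Jeans estimate for a Gaussian beam (the analysis above follows Equation 1 of Indriolo et al. 2017); the integrated intensity, beam size, and distance used below are invented placeholder values, not measurements from this work.

```python
import numpy as np

k_B, c, pc, L_sun = 1.381e-23, 2.998e8, 3.086e16, 3.828e26  # SI units

def line_flux(int_Tmb_dv_K_kms, nu_Hz, beam_fwhm_arcsec):
    """Convert a velocity-integrated main-beam temperature (K km/s) to a line
    flux in W m^-2, assuming the Rayleigh-Jeans limit and a Gaussian beam."""
    omega = np.pi / (4.0 * np.log(2.0)) * (beam_fwhm_arcsec / 206265.0) ** 2  # beam solid angle [sr]
    return 2.0 * k_B * nu_Hz**3 / c**3 * omega * (int_Tmb_dv_K_kms * 1.0e3)   # K km/s -> K m/s

# Hypothetical example: a CO 11-10 line of 50 K km/s in a 20" beam at 2.9 kpc.
F = line_flux(50.0, 1267.0e9, 20.0)
L_CO = 4.0 * np.pi * (2.9e3 * pc) ** 2 * F
print(f"F = {F:.2e} W m^-2,  L_CO = {L_CO / L_sun:.2f} L_sun")
```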
Notably, clumps at different evolutionary stages do not show any clear trend in Figure 3, suggesting that similar underlying physical processes are responsible for the high-\(J\) CO emission from all sources in the sample. In summary, high-\(J\) CO emission is detected in high-mass clumps and correlates most strongly with the clump bolometric luminosity. The line shapes show that the high-velocity gas is most likely associated with the outflows.

Figure 3: Line luminosities of CO 11–10 and 16–15, as a function of \(M_{\rm clump}\) and \(L_{\rm bol}\). IR-weak (24d) sources are shown in blue circles, IR-bright (IRb) sources in red triangles, and HII regions (HII) in green squares. The linear regression fit with Markov chain Monte Carlo is shown in dashed black lines and yellow shades. The linear log-log and Pearson correlation coefficients, \(r\), are presented on each plot. Objects with self-absorption are shown with an upward arrow, indicating the lower limit for the calculated luminosities.

### Profile decomposition

We use mid-\(J\) (\(6\leq J\lesssim 10\)) CO rare isotopologue lines to subtract the envelope component from the line profiles of CO 11-10 and CO 16-15. This way, we isolate the high-velocity emission associated with the line wings. The emission in the line wings is characterised using a decomposition method which is described in detail in Appendix D. Briefly, the decomposition procedure aims to subtract the contribution from the envelope, as traced by rare isotopologue emission, resulting in a residual outflow component (Codella et al. 2004; van der Walt et al. 2007; de Villiers et al. 2014; Yang et al. 2018). This method was initially used for kinematical studies of methanol masers, and subsequently adopted for low-\(J\) CO line profiles. Here, we use the version described in Yang et al. (2018), which does not account for opacity broadening, because the high-\(J\) CO lines are likely optically thin. Rare isotopologue lines are used as a proxy for the envelope emission; here, depending on data availability and detection, we used the emission of the C\({}^{18}\)O 9-8 line for eight sources, the \({}^{13}\)CO 10-9 line for three sources, the C\({}^{18}\)O 6-5 line for one source, and the \({}^{13}\)CO 6-5 line for one source (see Appendix D).

We identify line wing emission in the CO 11-10 line toward all sources except G13.66\(-\)0.6 (Table 3). The wings in the CO 16-15 line are seen only toward 8 out of the 10 sources with a 3\(\sigma\) line detection. Properties and profiles of all wing emission are shown in Appendix D. The ubiquity of line wings is consistent with previous detections of the outflows toward the same sources using lower-\(J\) lines of CO and SiO (Table 3). In particular, all sources from our sample show line wings in the CO 6-5 line (Navarete et al. 2019). The non-detection of the CO 11-10 line wing in G13.66\(-\)0.6 could be either due to the low S/N (Fig. 1) or a lack of recent heating of the outflow gas due to shocks (Karska et al. 2013; Kristensen et al. 2017). The \({}^{13}\)CO 2-1 wings have only been seen toward G351.77\(-\)0.5, G12.81\(-\)0.2, and G14.19\(-\)0.2 (Yang et al. 2022) due to limited sensitivity, illustrating the difficulty in detecting line wings in rare CO isotopologues (see also Stephens et al. 2018, 2019). Finally, SiO 2-1 has been observed toward our sources (Urquhart et al. 2019; Csengeri et al. 2016) and line wings are detected in six of them. All the non-detections, in fact, show line wings in the high-\(J\) CO lines (Table 3), indicating that additional factors play a role in the excitation of the SiO and CO lines.
Table 3: Overview of line wing detections toward the sample in low-\(J\) and high-\(J\) CO and SiO transitions, compiled from this work and the literature. Footnote 1: Based on the identification of line wings toward SEDIGISM sources (Yang et al. 2022).

Detecting outflows toward distant star-forming clumps is often hampered by confusion. Background and foreground galactic sources along the line-of-sight might contribute to the wing emission, which may result in false outflow detections. We note, however, that the high detection statistics (\(>60\%\)) of line wings and the wings' smooth shapes in our source sample are very unlikely to be explained by source confusion. The high-\(J\) CO emission is typically well-confined to the regions with active star formation. In conclusion, our decomposition method results in the estimate of line wing emission toward 12 and 8 sources in the CO 11-10 and CO 16-15 lines, respectively.

### CO line wing emission

Decomposition of the line profiles allows us to quantify the amount of high-\(J\) CO emission in the line wings, and its contribution to the entire line profiles. Furthermore, the ratio of the two CO transitions can be studied as a function of gas velocity. The fraction of emission in the line wings of the CO 11-10 transition ranges from \(\sim\)29 to 73%, whereas the mean fraction for each evolutionary stage is \(\sim\)50%, suggesting that there is no dependence on the source evolution (Fig. 4). The fraction of emission in the CO 16-15 line wings is similar to that in the CO 11-10 transition, and ranges from \(\sim\)28 to 76%. These results are consistent with the fraction of line wing emission measured toward two High Mass Protostellar Objects: AFGL 2591 in both CO 11-10 (\(\sim\)37%) and CO 16-15 (\(\sim\)34%) from van der Wiel et al. (2013),
The fraction of wing emission increases from \(\sim\)42% (CO 11-10) to 50% (CO 16-15) for G34.26+0.15, from 57% to 68% for G351.16+0.7, from 69% to 76% for G351.44+0.7, and from 62% to 70% for G351.58-0.4. The increase is therefore not as sharp as for the CO 15-14 and CO 22-21 lines of NGC 7538 IRS1, but consistent with a rising contribution of wing emission in higher-\(J\) lines. The amount of emission in the wings of higher-\(J\) CO lines allows us to study the gas excitation conditions in the outflowing gas. Assuming that emission in the line wings is optically thin and thermalized, higher CO line ratios would correspond to higher gas kinetic temperatures, \(T_{\rm kin}\) (see Sections 3.4 and 3.5). Figure 5 shows the observed ratio of the CO 16-15 and 11-10 lines in the red and blue wings as a function of absolute offset from the source velocity. The ratio is calculated in steps of 1.0 km s\({}^{-1}\), avoiding the line centers (\(\pm\)5 km s\({}^{-1}\)), and presented for channels where the signal-to-noise ratio is above 2. The ratio of CO 16-15 and CO 11-10 increases as a function of velocity for at least a few sources, e.g., the red wing of G12.81-0.2, G351.16+0.7, and G351.77\(-\)0.5, and the blue wing of G35.20-0.7 (Fig. 5). In most of those cases, the highest-velocity emission is stronger in CO 16-15 than in CO 11-10. Such trends are consistent with similar studies using CO 3-2, 10-9, and 16-15 toward a sample of low- to high-mass protostars (see Section 4.2).

Figure 5: The ratio of line wing emission in the CO 16–15 and 11–10 transitions as a function of absolute velocity offset from the source velocity. The red-shifted emission is shown in red squares, and the blue-shifted in blue circles. The dashed horizontal line presents the level above which CO 16–15 is greater than CO 11–10.

In summary, we find a lack of correlation between the fraction of high-\(J\) CO integrated emission in the line wings and the clump evolutionary stage. Yet, the fraction increases with the CO rotational level in half of the sources. The ratio of the wing emission in the CO 16-15 and CO 11-10 lines increases with velocity in several sources.

### Molecular excitation in LTE (full profile + wings)

Detection of at least two CO lines allows us to determine the rotational temperature of the outflowing gas detected in the line wings under the assumption of LTE. For comparisons with previous studies with _Herschel_/PACS, the calculations are also performed for the velocity-integrated line profiles. Emission line fluxes of CO 11-10 and CO 16-15 are used to calculate the number of emitting molecules, \(\mathcal{N}_{\rm u}\), for each molecular transition as: \[\mathcal{N}_{u}=\frac{L_{\rm CO}\lambda}{hcA}, \tag{1}\] where \(L_{\rm CO}\) refers to the line luminosity of the CO line at wavelength \(\lambda\), \(A\) to the Einstein coefficient, \(c\) to the speed of light, and \(h\) to Planck's constant. Note that for two sources, G12.81\(-\)0.2 and G351.25\(+\)0.7, additional observations of CO 13-12 are included. The number of emitting molecules, \(\mathcal{N}_{\rm u}\), is used instead of column densities, because the size of the emitting region is unresolved.
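As a sketch of how Eq. (1) is applied in practice, the snippet below converts line luminosities into numbers of emitting molecules and then forms a two-point rotational temperature via the Boltzmann relation introduced just below (Eqs. 2 and 3). The spectroscopic constants are approximate values from standard line catalogues, and the input luminosities are placeholders.

```python
import numpy as np

H = 6.626e-27          # Planck constant [erg s]
C = 2.998e10           # speed of light [cm/s]

# Approximate constants: wavelength [cm], Einstein A [1/s], E_u/k [K], g_u = 2J+1
LINES = {
    "CO 11-10": (236.6e-4, 1.34e-4, 365.0, 23.0),
    "CO 16-15": (162.8e-4, 4.05e-4, 752.0, 33.0),
}

def n_upper(l_co_erg_s, line):
    """Number of emitting molecules in the upper level, Eq. (1): N_u = L_CO * lambda / (h c A)."""
    wav, a_einstein, _, _ = LINES[line]
    return l_co_erg_s * wav / (H * C * a_einstein)

def t_rot_two_lines(l_1110, l_1615):
    """Two-point rotational temperature from the slope of ln(N_u/g_u) versus E_u/k."""
    points = []
    for l_co, line in ((l_1110, "CO 11-10"), (l_1615, "CO 16-15")):
        _, _, e_u, g_u = LINES[line]
        points.append((e_u, np.log(n_upper(l_co, line) / g_u)))
    (e1, y1), (e2, y2) = points
    slope = (y2 - y1) / (e2 - e1)
    return -1.0 / slope          # T_rot = -1/a, as in the fit described below
```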
The relation between \(\mathcal{N}_{\rm u}\) and the total number of emitting molecules, \(\mathcal{N}_{\rm tot}\), follows the equation: \[\ln\left(\frac{\mathcal{N}_{\rm u}}{g_{\rm u}}\right)=-\frac{E_{\rm u}}{T_{\rm rot}k_{\rm b}}+\ln\left(\frac{\mathcal{N}_{\rm tot}}{Q(T_{\rm rot})}\right), \tag{2}\] where \(g_{\rm u}\) is the statistical weight of the upper level, \(E_{\rm u}\) the energy of the upper level, \(k_{\rm b}\) the Boltzmann constant, \(T_{\rm rot}\) the rotational temperature, and \(Q(T_{\rm rot})\) the partition function at the temperature \(T_{\rm rot}\). The rotational temperature is calculated from the slope \(a\) of the linear fit (\(y=ax+b\)) to the data in natural logarithm units, \(T_{\rm rot}=-1/a\). The total number of emitting molecules, \(\mathcal{N}_{\rm tot}\), is determined from the fit intercept \(b\) as: \[\mathcal{N}_{\rm tot}=Q(T_{\rm rot})\cdot\exp(b). \tag{3}\]

Figure 6 shows example Boltzmann diagrams for G351.25\(+\)0.7 and G12.81\(-\)0.2, constructed using the velocity-integrated emission of CO (full profile). Table 4 shows \(T_{\rm rot}\) and \(\mathcal{N}_{\rm tot}\) for all sources with at least two CO line detections, separately for the integrated-profile emission and the line wings (see Section 3.2). The two sources with three CO line detections are characterized by \(T_{\rm rot}\) of \(\sim\)170 K using the integrated line emission. The remaining sources show \(T_{\rm rot}\) in the range from \(\sim\)110 K to 200 K, with a mean of 152 K. Similar temperatures are obtained for the wing emission tracing outflow gas, with a mean \(T_{\rm rot}\) of 167 K. While the wing emission is often responsible for the bulk of the total emission, hot core emission, with typical temperatures of \(\sim\)100\(-\)200 K (Fontani et al. 2007; Taniguchi et al. 2023), might also contribute to the far-IR emission at source velocity. For G34.26\(+\)0.15, \(T_{\rm rot}\) of \(\sim\)150 K is significantly lower than \(365\pm 15\) K obtained from _Herschel_/PACS (Karska et al. 2014). We note, however, that the latter temperature was obtained using CO lines with \(J_{\rm u}\) from 14 to 30, sensitive to both "warm" and "hot" gas components (Karska et al. 2018). If CO transitions with \(J_{\rm u}\) from 14 to 16 are used instead, \(T_{\rm rot}\) of \(244\pm 45\) K is obtained (adopting values from Table C.1 in Karska et al. 2014). An even lower \(T_{\rm rot}\) is expected when CO 11-10, tracing a colder gas component, is used in the calculation, in line with the results obtained for G34.26\(+\)0.15. Notably, many CO transitions are essential to determine all the underlying physical conditions. Finally, we note that the ratio of the total number of emitting molecules (\(\mathcal{N}_{\rm tot}\)) in the line wings and the total line profile ranges from 40% to 79% (Table 4), consistent with the overall fraction of wing emission (Section 3.2). In absolute terms, \(\log_{10}\mathcal{N}_{\rm tot}\) ranges from 51.7 to 53.6, consistent with the average \(52.4(0.1)\pm 0.5\) measured for high-mass protostars with _Herschel_/PACS (Karska et al. 2014).

Figure 6: Rotational diagrams of CO for G12.81\(-\)0.2 and G351.25\(+\)0.7, which are based on the observations of full-profile CO transitions with \(J_{\rm u}\) of 11, 13, and 16. The natural logarithm of the number of emitting molecules from a level u, \(\mathcal{N}_{\rm u}\) (dimensionless), divided by the degeneracy of the level, \(g_{\rm u}\), is shown as a function of the upper level energy, \(E_{\rm u}\)/\(k_{\rm B}\), in Kelvins. Detections are shown as blue circles. Dashed orange lines show linear regression fits to the data; the resulting rotational temperatures are provided on the plots with the associated errors from the fit.

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{Source} & \multicolumn{2}{c}{Integrated profile} & \multicolumn{2}{c}{Line wings} \\ \cline{2-5} & \(T_{\rm rot}\)(K) & \(\log_{10}\mathcal{N}_{\rm tot}\) & \(T_{\rm rot}\)(K) & \(\log_{10}\mathcal{N}_{\rm tot}\) \\ \hline G351.16\(+\)0.7 & 199 & 51.7 & 219 & 51.4 \\ G351.25\(+\)0.7 & 172(67) & 52.2(0.6) & 165(66) & 51.8(0.6) \\ G351.44\(+\)0.7 & 151 & 52.2 & 157 & 52 \\ G351.58\(-\)0.4 & 158 & 53.6 & 166 & 53.3 \\ G351.77\(-\)0.5 & 177 & 52.6 & 177 & 52.5 \\ \hline G12.81\(-\)0.2 & 168(47) & 52.9(0.4) & 131(31) & 52.8(0.4) \\ G14.63\(-\)0.6 & 125 & 51.7 & – & – \\ G34.26\(+\)0.15 & 152 & 53.0 & 162 & 52.6 \\ G34.40\(-\)0.2 & 111 & 52.7 & – & – \\ G35.20\(-\)0.7 & 141 & 52.7 & 120 & 52.5 \\ \hline \end{tabular} \end{table} Table 4: CO rotational excitation assuming LTE, for both the integrated line profiles and the line wings only

### Molecular excitation in non-LTE (wings)

Due to the relatively low densities in the regions of the ISM where outflows propagate, the LTE assumption may not hold. Non-LTE modelling is therefore necessary to determine the physical conditions responsible for the observed line emission. Here, we use the well-established code RADEX (van der Tak et al.
2007) to estimate gas temperatures, densities, and CO column densities, which reproduce the observed line wing emission of three mid- and high-\(J\) CO lines: CO 6-5, 11-10, and 16-15. We calculated model grids for a range of kinetic temperatures, \(T_{\rm kin}\), from 150 to 3000 K, H\({}_{2}\) number densities, \(n_{\rm H_{2}}\), from 10\({}^{3}\) to 10\({}^{7}\) cm\({}^{-3}\), and CO column densities, \(N\)(CO), of 10\({}^{16}\), 10\({}^{17}\) \begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{Source} & \multicolumn{2}{c}{Integrated profile} & \multicolumn{2}{c}{Line wings} \\ \cline{2-5} & \(T_{\rm rot}\)(K) & \(\log_{10}\mathcal{N}_{\rm tot}\) & \(T_{\rm rot}\)(K) & \(\log_{10}\mathcal{N}_{\rm tot}\) \\ \hline G351.16\(+\)0.7 & 199 & 51.7 & 219 & 51.4 \\ G351.25\(+\)0.7 & 172(67) & 52.2(0.6) & 165(66) & 51.8(0.6) \\ G351.44\(+\)0.7 & 151 & 52.2 & 157 & 52 \\ G351.58\(-\)0.4 & 158 & 53.6 & 166 & 53.3 \\ G351.77\(-\)0.5 & 177 & 52.6 & 177 & 52.5 \\ \hline G12.81\(-\)0.2 & 168(47) & 52.9(0.4) & 131(31) & 52.8(0.4) \\ G14.63\(-\)0.6 & 125 & 51.7 & – & – \\ G34.26\(+\)0.15 & 152 & 53.0 & 162 & 52.6 \\ G34.40\(-\)0.2 & 111 & 52.7 & – & – \\ G35.20\(-\)0.7 & 141 & 52.7 & 120 & 52.5 \\ \hline \end{tabular} \end{table} Table 4: CO rotational excitation for both the integrated line profiles and line wings only assuming LTE Figure 6: Rotational diagrams of CO for G12.81\(-\)0.2 and G351.25\(+\)0.7, which are based on the observations of full profile CO transitions with \(J_{\rm u}\) of 11, 13, and 16. The natural logarithm of the number of emitting molecules from a level u, \(\mathcal{N}_{\rm u}\) (dimensionless), divided by the degeneracy of the level, \(g_{\rm u}\), is shown as a function of the upper level energy, \(E_{\rm u}\)/\(k_{\rm B}\), in Kelvins. Detections are shown as blue circles. Dashed orange lines show linear regression fits to the data; the resulting rotational temperatures are provided on the plots with the associated errors from the fit. and \(10^{18}\) cm\({}^{-2}\). We assumed H\({}_{2}\) as the only collision partner, and a background temperature of 2.73 K. The linewidths of all lines were fixed at 19 km s\({}^{-1}\), based on the observations of CO 6-5 (Appendix B and (Navarete et al., 2019)). For comparisons of models with observations, we used peak intensities obtained from RADEX, since the wing emission does not follow a simple Gaussian; we have also converted the observations from \(T_{\rm MB}\) to \(T_{t}\) through \(T_{t}=T_{\rm MB}/\eta\). Because we do not spatially resolve the line emitting regions, we considered two cases during the computation of the beam filling factor: (i) the source that fills the entire beam (\(\eta=1\)); (ii) the source size of 2\({}^{\prime\prime}\) or \(\eta\sim 8\times 10^{-3}\)-\(5\times 10^{-2}\), consistent with size of a source in our sample, G34.26+0.15, which was measured from the Spitzer/IRAC 3.6 \(\mu\)m image. The spatial extent of CO 6-5 emission is \(\sim\)4 times larger than the APEX/CHAMP\({}^{+}\) beam, according to previous observations (Navarete et al., 2019). Figure 7 shows the comparison of non-LTE radiative transfer models with line wing observations of high-\(J\) CO lines5 (Section 3.2). The ratio of CO 16-15 and CO 11-10 depends on both \(T_{\rm kin}\) and \(n_{\rm H_{2}}\), and shows a spread of 3 orders of magnitude. 
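A sketch of how such a grid comparison can be organised is given below. The table of model peak temperatures is assumed to have been produced beforehand by running RADEX over the grid described above; the file name, column names, and tolerances are placeholders.

```python
import numpy as np
import pandas as pd

def matching_models(grid, obs_ratio, obs_t65, eta=1.0, tol=0.3):
    """Select (T_kin, n_H2, N_CO) grid points consistent with the observations.

    grid      : DataFrame with columns T_kin, n_H2, N_CO, T_co65, T_co1110, T_co1615
                (peak radiation temperatures predicted for each grid point)
    obs_ratio : observed CO 16-15 / CO 11-10 peak ratio in the line wings
    obs_t65   : observed CO 6-5 peak main-beam temperature in the wings [K]
    eta       : assumed beam filling factor used to convert T_MB to source-frame intensity
    """
    t65 = obs_t65 / eta
    ratio = grid["T_co1615"] / grid["T_co1110"]
    ok = (np.abs(ratio / obs_ratio - 1.0) < tol) & (np.abs(grid["T_co65"] / t65 - 1.0) < tol)
    return grid[ok]

# e.g. models = matching_models(pd.read_csv("radex_grid.csv"), obs_ratio=1.2, obs_t65=0.8, eta=0.02)
```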
On the other hand, the intensity of CO 6-5 is most sensitive to the assumed \(N\)(CO), and increases by 2-3 orders of magnitude between \(10^{16}\) and \(10^{18}\) cm\({}^{-2}\). The impact of the assumed beam filling factor is almost negligible to the CO 16-15 / CO 11-10 ratio. Footnote 5: Observations of G351.44+0.7 are not included because we could not obtain its CO 6–5 line wing due to the lack of \({}^{13}\)CO 6–5 opacity (Appendix D). The models match the observations best for the assumed CO column densities of \(10^{17}\) and \(10^{18}\) cm\({}^{-2}\) (Fig. 7). The solutions for temperature and density are degenerate and can be split into two regimes: (i) lower-density with \(n_{\rm H_{2}}\) of \(10^{3}\)-\(10^{4}\) cm\({}^{-3}\) and \(T_{\rm kin}\) of at least 1000 K, and (ii) high-density, moderate-temperature scenario with \(n_{\rm H_{2}}\) of \(10^{5}\)-\(10^{7}\) cm\({}^{-3}\) and \(T_{\rm kin}\) between 150 and 500 K. The ratio of high-\(J\) CO lines can be well-reproduced for both considered filling factors in the scenario (ii); the best-matching source size is likely larger than 2\({}^{\prime\prime}\) but depends on the assumed column density. In scenario (i), the ratio of high-\(J\) CO lines can be reproduced for a small fraction of our sample assuming \(T_{\rm kin}\) of 1000 K. Much higher temperatures would be required to match observations of the majority of targets. In general, the CO 6-5 peak intensity increases with gas density: for example, for \(n_{\rm H_{2}}\) of \(10^{3}\) cm\({}^{-3}\) and \(N\)(CO) of \(10^{18}\) cm\({}^{-2}\), models would match the observations assuming the filling factor of 1, whereas \(n_{\rm H_{2}}\) of \(10^{4}\) cm\({}^{-3}\) and \(N\)(CO) of \(10^{17}\)-\(10^{18}\) cm\({}^{-2}\), point at smaller filling factors. In conclusion, only scenario (ii) can explain the observations of all targets. The \(T_{\rm kin}\) range in this scenario is also in better agreement with \(T_{\rm rot}\) estimated under the LTE condition in Section 3.4, and consistent with detections of molecular species excited exclusively in high-density environments toward other high-mass clumps (e.g., van der Tak et al., 2013, 2019). On the other hand, scenario (i) requires temperatures in excess of 3000 K to explain the observed CO lines (\(E_{\rm up}<800\) K) at more than half of our targets; such temperatures are too high even for the outflows from high-mass stars. Therefore, we prefer the high-density, moderate-temperature scenario to describe the physical conditions toward our source sample. We note, however, that our models constrain only the ranges of temperature and density, as we cannot fully break the degeneracy between different models. To summarize, non-LTE radiative transfer models provide support to the LTE excitation of high-\(J\) CO emission in the high-mass clumps. The best match with observations is obtained for gas densities of \(10^{5}\)-\(10^{7}\) cm\({}^{-3}\), \(T_{\rm kin}\) between 150 and 500 K, and CO column densities of \(10^{17}\) and \(10^{18}\) cm\({}^{-2}\). Such conditions are consistent with CO excitation in outflows and will be discussed further in Section 4.2. ## 4 Discussion High spectral resolution observations from SOFIA/GREAT allow us to disentangle dynamical properties of high-\(J\) CO emission toward high-mass star forming clumps. 
The excitation conditions have been studied in the high-velocity gas component assuming both LTE and non-LTE regimes, supporting the origin in moderate-temperature, high-density gas associated with the outflows (Section 3.4-3.5). Here, we discuss our results in the context of previous observations of high-mass protostars with _Herschel_ and SOFIA. Figure 7: CO excitation conditions from RADEX models versus observations. The plots present models at different \(N\)(CO) of \(10^{17}\) and \(10^{18}\) cm\({}^{-2}\). The models are presented in empty circles, and their colors correspond to different hydrogen volume density, \(n_{\rm H_{2}}\), between \(10^{3}\) and \(10^{7}\) cm\({}^{-3}\). At each volume density level, four temperatures: 150, 250, 500, 1000, and 3000 K are sampled. Observations assuming a beam filling factor of 1 are shown in crosses, while observations assuming a tiny source of 2\({}^{\prime\prime}\), which correspond to an extreme case of small beam filling factor, are shown in triangles. ### High-\(J\) CO emission in high-mass clumps The high-\(J\) (\(J\gtrsim 10\)) CO emission in high-mass star-forming regions has been attributed to gas cooling of several physical components, including (i) a warm, dense envelope of central protostars (Ceccarelli et al., 1996; Doty & Neufeld, 1997), (ii) UV-irradiated outflow cavity walls (Bruderer et al., 2009; San Jose-Garcia et al., 2016), (iii) currently-shocked gas in the outflows (van der Wiel et al., 2013; Karska et al., 2014), (iv) photodissociation regions (Lane et al., 1990; Ossenkopf et al., 2010, 2015; Stock et al., 2015). A similarity of CO to H\({}_{2}\)O, both in spatial extent and line shapes, supported the scenario of shock excitation in similar layers composing the outflow cavity walls (see e.g., San Jose-Garcia et al., 2016; Kristensen et al., 2017; van Dishoeck et al., 2021). Broad line profiles of high-\(J\) CO lines provide a solid evidence of the outflow origin of a part of CO emission in high-mass clumps from the ATLASGAL survey (Section 3.1, see also San Jose-Garcia et al., 2013; Indriolo et al., 2017). Noteworthy, the fraction of CO emission in the line wings with respect to the total line emission is not sensitive to the evolutionary stage of the clumps (Section 3.3). In fact, a significant fraction of CO 11-10 is detected in the line wings of clumps at very early evolutionary stages (up to 76%, Section 3.3). The signposts of outflow activity in the IR-weak clumps are in agreement with the ubiquitous detection of broad line wings in the SiO 2-1 line toward ATLASGAL sources spanning all evolutionary stages, including 25% of infrared-quiet clumps (Csengeri et al., 2016). Indeed, molecular outflows are also commonly detected toward 70 \(\mu\)m dark clumps using other tracers (Urquhart et al., 2022; Yang et al., 2022). The integrated high-\(J\) CO emission shows a strong correlation with the clump bolometric luminosity (Section 3.1). The correlation extends even to low- and intermediate-mass protostars (Figure 8), suggesting a similar physical mechanism operating over a few orders of magnitude different spatial scales. In deeply-embedded low-mass objects, \(L_{\rm bol}\) is dominated by accretion luminosity, which in turn is closely related to the amount of mass ejected in the outflows (Frank et al., 2014). Thus, the tight correlation of high-\(J\) CO emission with \(L_{\rm bol}\) for ATLASGAL clumps suggests an equally high contribution of accretion luminosity in high-mass regions. 
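The strength of such a correlation can be quantified with a simple log-log regression. The sketch below uses an ordinary least-squares fit and the Pearson coefficient as a simplified stand-in for the MCMC regression used for the figures; the input luminosity arrays are placeholders.

```python
import numpy as np
from scipy import stats

def loglog_correlation(l_bol, l_co):
    """Fit log10(L_CO) = a * log10(L_bol) + b and return slope, intercept and Pearson r."""
    x, y = np.log10(np.asarray(l_bol)), np.log10(np.asarray(l_co))
    fit = stats.linregress(x, y)
    return fit.slope, fit.intercept, fit.rvalue
```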
The velocity-resolved SOFIA spectra provide strong support for an origin of the bulk high-\(J\) CO emission in outflows, during all evolutionary stages of high-mass clumps. The correlation of CO line fluxes with bolometric luminosity suggest common physical conditions and processes leading to high-\(J\) CO emission from low- to high-mass star forming regions. ### Excitation conditions Observations of multiple CO lines allow to study gas excitation across various source properties and evolutionary stages. In combination with other far-IR lines, they also constrain the properties of shocks responsible for the emission in broad line wings of high-\(J\) CO lines. The rotational temperatures in the high-velocity gas in the ATLASGAL clumps range from \(\sim\)120 K to 219 K, and are similar to the temperatures obtained from the full line profiles (Section 3.4). Five IR-bright clumps show mean \(T_{\rm rot}\) of \(169\pm 30\) K, whereas two H ii clumps are characterized by \(T_{\rm rot}\) of \(147\pm 16\) K. Thus, a possible decrease of gas temperature in the outflows as the clumps evolve might be present, but for the sources in our sample the difference is not significant. Rotational temperatures of \(\sim\)200-210 K, consistent with our measurements, have been estimated in the line wings of the high-mass protostar DR21(OH) assuming LTE (Leurini et al., 2015). Non-LTE modeling of multiple CO lines indicated \(T_{\rm kin}\) of 60-200 K in the outflow gas component of another high-mass source, AFGL 2591 (van der Wiel et al., 2013). In W3 IRS5, excitation temperatures of \(\sim\) 100-210 K were measured in the CO 10-9 and 3-2 lines' velocities range covered by an outflow in this region (from 5 to 20 km s\({}^{-1}\)), with the highest temperatures corresponding to the highest velocities (San Jose-Garcia et al., 2013). A similar trend of increasing gas temperature with velocity is also clearly detected in the outflow wing emission from ATLASGAL clumps observed with SOFIA/GREAT (Section 3.3), and in the SiO survey of \(\sim\)430 clumps observed with the IRAM 30 m telescope (Csengeri et al., 2016). Comparisons of gas excitation using high-\(J\) CO lines can be extended to a larger number of sources once the emission in the full line profiles is considered. The velocity-integrated _Herschel_/PACS detections of CO transitions from \(J_{\rm u}\) of 14 to Figure 8: Velocity-integrated CO line luminosity of 11–10 and 16–15 transitions versus source bolometric luminosity from low- to high-mass star-forming regions. The dashed lines show a linear fit obtained using only the sources from our study, which are shown in blue empty circles. Blue stars show observations of other high-mass protostars from Karska et al. (2014); Indriolo et al. (2017); Kázmierczak-Barthel et al. (2014), orange squares present emission from intermediate-mass objects (Matuszak et al., 2015), and red triangles show data for Class 0 protostars (Kristensen et al., 2017). \begin{table} \begin{tabular}{l l l} \hline \hline Source & \(T_{\rm rot}\)(K) & Reference \\ \hline NGC 7538 IRS1\({}^{a}\) & 160(10) & Karska et al. (2014) \\ AFGL 2591 & 1305.8 & Azmierczak-Barthel et al. (2014) \\ W498N\({}^{b}\) & 220(20) & Indriolo et al. (2017) \\ Orion S & 145(5) & Indriolo et al. (2017) \\ Orion KL & 180(25) & Indriolo et al. (2017) \\ Sgr B2(M) & 140(20) & Indriolo et al. 
(2017) \\ \hline G12.81–0.2 & 168(47) & this work \\ G351.25–0.7 & 172(67) & this work \\ ATLASGAL (all\({}^{c}\))\({}^{c}\) & 111–199 & this work \\ \hline \end{tabular} \end{table} Table 5: CO rotational excitation determined from integrated line profiles toward high-mass objects 30 toward 10 high-mass protostars provided an average \(T_{\rm rot}\) of \(\sim\)300(23)\(\pm\)60 K (Karska et al., 2014). Protostars with detections of higher-\(J\) lines were generally characterized by higher-\(T_{\rm rot}\), suggesting the possible presence of an additional \(T_{\rm rot}\gtrsim 700\) K gas component detected toward low-mass protostars that appears to be "hidden" in their high-mass counterparts, possibly due to the a small beam filling factor of such emission and/or optically thick continuum emission (Manoj et al., 2013; Green et al., 2013; Karska et al., 2013, 2018). Clearly, any comparisons of \(T_{\rm rot}\) should consider the similar \(J-\)levels for their calculation (Section 3.4, and e.g., Neufeld, 2012; Jimenez-Donaire et al., 2017; Yang et al., 2018). Table 5 compares \(T_{\rm rot}\) measurements for several high-mass protostars with the data of the same or similar CO transitions to our SOFIA/GREAT survey (Section 3.4). All sources with at least 3 observed transitions show \(T_{\rm rot}\) from 130 K to 220 K, consistent with the values determined for ATLASGAL clumps and hot cores. The relatively narrow range of \(T_{\rm rot}\) is qualitatively similar to that of the universal "warm", \(\sim\)300 K gas component based on CO 14-13 to 25-24 transitions toward low-, intermediate-, and high-mass protostars (Karska et al., 2014; Matsuzak et al., 2015; Karska et al., 2018; van Dishoeck et al., 2021). The CO 11-10 transition in low-mass protostars is typically associated with a "cool" gas component with \(T_{\rm rot}\sim\)100 K (e.g., Yang et al., 2018), and its inclusion in the fit causes the lower values of \(T_{\rm rot}\) (\(<300\) K). The CO rotational temperature depends on the gas density and kinetic temperature, and can be characterised with both (i) low-density, high-temperature (Neufeld, 2012; Manoj et al., 2013; Yang et al., 2018), and (ii) high-density, low-temperature regimes (Karska et al., 2013, 2018; Green et al., 2013; Kristensen et al., 2017). The first scenario requires \(n_{\rm H_{2}}\sim\)\(10^{3}\) cm\({}^{-3}\) and \(T_{\rm kin}\gtrsim 2000\) K, and has the advantage of reproducing the positive curvature of the CO diagrams over a broad range of energy levels (Neufeld, 2012). In contrast, the second scenario accounts for the similarity of \(J\gtrsim 14\) CO and H\({}_{2}\)O emission most evidently seen in low-mass protostars, with high-densities required for H\({}_{2}\)O excitation (Karska et al., 2013; van Dishoeck et al., 2021). Non-LTE modeling of massive clumps provides strong support for a high-density scenario of CO excitation (Section 3.5). In the regime of moderate gas temperatures (\(T_{\rm kin}\) from 150 to 500 K), gas densities of \(10^{5}\)-\(10^{7}\) cm\({}^{-3}\) match the data best. Such physical conditions are fully-consistent with the modeling of high-\(J\) CO and H\({}_{2}\)O emission toward high-mass protostars (San Jose-Garcia et al., 2016; van Dishoeck et al., 2021). They are also comparable to the physical conditions determined in the jet, terminal shock and cavities of the intermediate-mass protostar Cep E (Lefloch et al., 2015). 
The underlying mechanism behind the highly-excited CO gas has been investigated for both Cep E and its high-mass counterpart Cep A. Detailed comparisons of CO, in combination with [O I] and OH, suggests the origin in dissociative or UV-irradiated shock models with pre-shock densities above \(10^{5}\) cm\({}^{-3}\)(Gusdorf et al., 2016, 2017). Assuming a compression factor of \(\sim\)100, typical for dissociative shocks (Karska et al., 2013), such models would be also in agreement with radiative-transfer modeling for high-mass clumps (Section 3.5). However, a fraction of high-\(J\) CO emission detected at source velocity could also originate from the central hot core. ## 5 Conclusions We have characterized the SOFIA/GREAT line profiles observed toward 13 high-mass protostars selected from the ATLASGAL survey, which significantly increases the number of high-mass objects that have velocity-resolved high-\(J\) CO lines. The velocity information enables to quantify the line components and the properties of their emitting sources. We summarise and draw the following conclusions: * CO 11-10 emission is detected toward all the sources, as early as the 24d stage. 10 out of 13 clumps also show a clear detection of CO 16-15. Additionally, CO 13-12 is detected toward two sources. The lines exhibit broad line wing emission typical for outflows from YSOs. * We detect wing emission in the CO 11-10 line from 12 clumps and in the CO 16-15 line from 8 clumps, implying that the highly excited CO lines originate in outflows. The wing fraction is similar for all clump evolutionary stages. On the other hand, we find no signatures of high-velocity gas (i.e., bullets) in the far-IR spectra. * Under the LTE assumption, we find \(T_{\rm rot}\) of 110-200 K for the entire line profiles and 120-220 K for the wing component. Such temperatures are in agreement with gas densities of \(10^{5}\)-\(10^{7}\) cm\({}^{-3}\), moderate temperatures of 150 K and 500 K, and CO column densities of \(10^{17}\) and \(10^{18}\) cm\({}^{-2}\) obtained from the non-LTE models. * Significant correlations between high-\(J\) CO emission and bolometric luminosities suggest similar underlying physical processes and conditions across all evolutionary stages of high-mass clumps. The correlation extends also to low-mass protostars, where high-\(J\) CO originate in outflow shocks, consistent with our study. High angular-resolution maps of high-mass clumps would be necessary to better characterise the physical structure of the regions with strong high-\(J\) CO emission and spatially disentangle outflows and hot cores (Goicoechea et al., 2015). The MIRI instrument on board the James Webb Space Telescope could pinpoint the spatial extent of shocked gas in high-mass star-forming clumps. ###### Acknowledgements. The authors thank the anonymous referee for detailed comments that have helped us improve this paper. We thank SOFIA/GREAT staff for collecting and reducing the data. We also thank Dr. Helmut Wesemeyer for his support in reducing data from the SOFIA/AGREAT observations. AK acknowledges support from the Polish National Agency for Academic Exchange grant No. BPN/BEK/2021/1/0039/DEC/L. AY acknowledges support from the National Natural Science Foundation of China grants No. 1198811. This work is based on observations made with the NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA). 
SOFIA is jointly operated by the Universities Space Research Association, Inc. (USRA), under NASA contract NSAI/BFF53C, and the Deutsches SOFIA Institut (DSI) under DLR contract 50 OK 2002 to the University of Stuttgart. _Herschel_ was an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX). APEX is a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory.
2309.04286
Polar ionospheric currents and high temporal resolution geomagnetic field models
Estimating high resolution models of the Earth's core magnetic field and its time variation in the polar regions requires that one can adequately account for magnetic signals produced by polar ionospheric currents, which vary on a wide range of time and length scales. Limitations of existing ionospheric field models in the challenging polar regions can adversely affect core field models, which in turn has important implications for studies of the core flow dynamics in those regions. Here we implement a new approach to co-estimate a climatological model of the ionospheric field together with a model of the internal and magnetospheric fields within the CHAOS geomagnetic field modelling framework. The parametrization of the ionospheric field exploits non-orthogonal magnetic coordinates and scales linearly with external driving parameters related to the solar wind and the interplanetary magnetic field. Using this approach we derive a new geomagnetic field model from measurements of the magnetic field collected by low Earth orbit satellites, which in addition to the internal field provides estimates of the typical current system in the polar ionosphere. We find that the time derivative of the estimated internal field is less contaminated by the polar currents, which is mostly visible in the zonal and near-zonal terms at high spherical harmonic degrees. Distinctive patches of strong secular variation at the core-mantle boundary, which have important implications for core dynamics, persist. Relaxing the temporal regularisation reveals annual oscillations, which could indicate remaining ionospheric field or related induced signals in the internal field model. Using principal component analysis we find that the annual oscillations mostly affect the zonal low-degree spherical harmonics of the internal field.
Clemens Kloss, Christopher C. Finlay, Karl M. Laundal, Nils Olsen
2023-09-08T12:17:07Z
http://arxiv.org/abs/2309.04286v1
# Polar ionospheric currents and high temporal resolution geomagnetic field models ###### Abstract Estimating high resolution models of the Earth's core magnetic field and its time variation in the polar regions requires that one can adequately account for magnetic signals produced by polar ionospheric currents, which vary on a wide range of time and length scales. Limitations of existing ionospheric field models in the challenging polar regions can adversely affect core field models, which in turn has important implications for studies of the core flow dynamics in those regions. Here we implement a new approach to co-estimate a climatological model of the ionospheric field together with a model of the internal and magnetospheric fields within the CHAOS geomagnetic field modelling framework. The parametrization of the ionospheric field exploits non-orthogonal magnetic coordinates to efficiently account for the geometry of the Earth's magnetic field and scales linearly with external driving parameters related to the solar wind and the interplanetary magnetic field. Using this approach we derive a new geomagnetic field model from measurements of the magnetic field collected by low Earth orbit satellites, which in addition to the internal field provides estimates of the typical current system in the polar ionosphere and successfully accounts for previously unmodelled ionospheric signals in field model residuals. To resolve the ambiguity between the internal and ionospheric fields when using satellite data alone, we impose regularisation. We find that the time derivative of the estimated internal field is less contaminated by the polar currents, which is mostly visible in the zonal and near-zonal terms at high spherical harmonic degrees. Distinctive patches of strong secular variation at the core-mantle boundary, which have important implications for core dynamics, persist. Relaxing the temporal regularisation reveals annual oscillations, which could indicate remaining ionospheric field or related induced signals in the internal field model. Using principal component analysis we find that the annual oscillations mostly affect the zonal low-degree spherical harmonics of the internal field. keywords: Core, Magnetic field variations through time, Satellite magnetics, Inverse theory, Polar ionospheric currents + Footnote †: journal: _J. Int._ ## 1 Introduction The ionospheric magnetic field is generated by electrical currents that circulate in the Earth's ionosphere, the electrically conducting layer of the atmosphere from about \(90\,\mathrm{km}\) to \(1000\,\mathrm{km}\) altitude. The ionospheric field undergoes daily, seasonal and solar cycle variations, which depend on solar activity and illumination (Yamazaki et al., 2016). In particular in the polar regions, where the ionospheric field is highly dynamic and very sensitive to changes in the solar wind and the Interplanetary Magnetic Field (IMF) thanks to field-aligned currents that facilitate the coupling to the magnetosphere, it is a difficult task to estimate accurate ionospheric field models. Imperfect modelling and the fact that the involved time and length scales overlap with those of the large-scale time-varying internal field, which originates in the Earth's core, makes the separation between the two fields a major challenge in geomagnetic field modelling (Finlay et al., 2016). New strategies to deal with the ionospheric field are therefore crucial for studies of the core field and its time variation. 
Such core field models are used to infer flow patterns in Earth's core and to study the geodynamo process and its related waves and oscillations. The high latitude regions of the outer core shell, within what is known as the inner core tangent cylinder (a cylinder aligned with the rotation axis just touching the inner core in the equatorial plane), play a special role in core processes because they are dynamically separated from the remainder of the shell and so can be an important source of equatorial symmetry breaking. Recent studies indicate strong jet like flows near to the tangent cylinder (Livermore et al., 2017), while highly time-dependent turbulent polar vortices are expected within the tangent cylinder (Aurnou et al., 2003; Schaeffer et al., 2017; Sheyko et al., 2018). Detailed study of such processes requires that ionospheric signals in the polar regions are adequately separated. Earlier studies have developed different techniques to model the ionospheric field or have tried to reduce its effect on the recovered internal field during geomagnetic field modelling. One common technique to minimise the ionospheric disturbance field in geomagnetic field modelling is data selection. By focusing on data under geomagnetic quiet conditions and by choosing suitable magnetic components, one seeks to reduce ionospheric signals that are not well parametrized in the model. For example, the CHAOS model (Finlay et al., 2020; Olsen et al., 2009, 2010, 2006, 2014), which is a model of the recent geomagnetic field and provides estimates of the time-dependent and static internal fields and the quiet-time magnetospheric field, is derived from magnetic vector and total intensity observations using, among other criteria, a dark-time selection criterion based on the sun elevation angle to remove the strong ionospheric disturbances that are present under sunlit conditions. At polar latitudes only the scalar magnitude of the field is used in an effort to minimise the effect of field-aligned currents, which mainly disturb the field direction but not its scalar magnitude. Similarly, in the sequential approach of Ropp et al. 2020 for modelling the internal, quiet-time magnetospheric and associated internally-induced fields, the magnetic vector data at mid and low latitudes are selected according to local night-time and dark conditions, but in the polar regions vector data from all local times are used. Although data selection is effective, significant ionospheric signals often remain in the data, especially at polar latitudes, where strong horizontal currents in the E-layer of the ionosphere continue to disturb the scalar magnitude of the field at all local times including during dark conditions (e.g. Friis-Christensen et al., 2017). In the comprehensive modelling approach (Sabaka et al., 2002, 2004, 2015, 2018, 2020), all major sources of the geomagnetic field are parametrized and co-estimated in a single step, including the magnetic field produced by ionospheric currents. In the CM6 model (Sabaka et al., 2020), the latest in the series of Comprehensive Models (CM), the magnetic field due to the currents in the E-layer of the ionosphere are parametrized in space using special basis functions that involve projecting spherical harmonics in the quasi-dipole coordinate system (Richmond, 1995), which is believed appropriate for describing these currents, whose geometry is organised by the main magnetic field. 
Temporal variations are expressed in terms of specific daily and sub-daily harmonics with periods of \(24\,\mathrm{h}\), \(12\,\mathrm{h}\), \(8\,\mathrm{h}\), and \(6\,\mathrm{h}\), which are further modulated with annual and semiannual harmonics and scaled by a three-monthly average of \(F_{10.7}\) solar radiation index, which tracks long term variations in solar activity. In addition, the model takes into account the Earth-induced field using a model of the electrical conductivity of the Earth's surface. Thanks to the sophisticated parametrization, the CM models are very successful at describing the slowly varying averaged ionospheric magnetic field at mid and low latitudes. This is why the same parametrization has been adopted in the dedicated ionospheric field inversion chain (Chulliat et al., 2013) to produce spherical harmonic models of the ionospheric magnetic field at low-to-mid latitudes using the magnetic data collected by the satellites of the European Space Agency's (ESA) _Swarm_ mission (Friis-Christensen et al., 2006). However, the approach of using a finite set of specific harmonics may not be as suitable for describing the polar ionospheric field, which varies on a much wider range of frequencies in response to changes in the solar wind speed and the IMF. In addition, it is not clear whether the basis functions for parametrizing the ionospheric magnetic field are also appropriate at high latitudes. In the Kalmag geomagnetic field models (Baerenzung et al., 2022, 2020) the magnetic field associated with ionospheric currents including field-aligned currents are also co-estimated. These models are sequentially derived using a Kalman filter approach after applying data selection to reduce the dayside ionospheric field signal in the input magnetic data. For the parametrization of the ionospheric sources they use poloidal and toroidal potentials in magnetic coordinate systems and represent the evolution in time through random processes based on a-priori spatio-temporal statistics. Lesur et al. 2008 estimate ionospheric currents in the polar ionosphere as part of the first generation of the GFZ Reference Internal Magnetic Models (GRIMM). However, they did not co-estimate the ionospheric field but had to use a two-step procedure whereby they first derived the model part corresponding to the internal, large-scale external and associated induced fields from the data and then used its residuals to build the ionospheric part of the model. Apart from geomagnetic field models, there are dedicated ionospheric field models such as the Average Magnetic field and Polar current System (AMPS) model (Laundal et al., 2018) that seek to better model the fields and currents in the polar regions. Instead of using specific periodicities to model the variability of the ionospheric disturbance field explicitly in time as in the CM models, the AMPS model focuses on its climatological aspects, i.e. it seeks to model the long term average of the field as a function of external driving parameters. The AMPS model expresses the ionospheric field in terms of poloidal and toroidal potentials, which are expanded into a global basis of spherical harmonics. It exploits magnetic apex coordinates (Richmond, 1995) to efficiently take into account the geometry of the main magnetic field, which organises the large-scale spatial structure of the ionospheric field. 
To express the variability of the average ionospheric field in time, the model uses a combination of external driving parameters related to the solar wind speed and IMF components. It is, however, derived using vector residuals, i.e. observations of the magnetic vector taken by the CHAllenging Ministatellite Payload (CHAMP) and _Swarm_ satellites after the removal of estimates of the internal and magnetospheric fields given by the CHAOS model. In this study we combine the climatological approach of the AMPS model for modelling the ionospheric field with the CHAOS framework for modelling the internal and magnetospheric fields. More specifically, we implement a co-estimation approach, where an AMPS-type ionospheric field model is derived at the same time as a geomagnetic field model similar to the CHAOS model. Making use of satellite magnetic observations made by the CHAMP, CryoSat-2 and _Swarm_ satellites during geomagnetic quiet conditions, we derive a new model of the geomagnetic field. Using this model, we study the quiet-time ionospheric field and the associated electrical currents in the polar regions and go on to investigate the effect on the time-variation of the internal field at polar latitudes when ionospheric currents are co-estimated. In addition, we explore cases when the temporal smoothness imposed on the internal field model is considerably relaxed. Note that the goal is not to derive an all-purpose model of the ionospheric field but rather to improve the time-dependent internal field model in the CHAOS modelling framework for geomagnetic quiet conditions in the challenging polar regions. The paper is organised as follows. In Sect. 2 we describe the satellite magnetic data used and the applied data selection. In Sect. 3 we provide details about the model parametrization, giving special attention to the ionospheric part taken from the AMPS model. There, we also give the equations for the model estimation and the applied regularisation, and list the chosen regularisation parameters. In Sect. 4 we evaluate the performance of the estimated model in terms of the fit to the magnetic data, study the polar ionospheric currents during geomagnetic quiet conditions and investigate the recovered core field and its time variations at polar latitudes. In the last part of that section we study the variations in time of the internal field as given by a test model where we apply weaker temporal smoothing. We finish with a discussion of the obtained results in Sect. 5 and the conclusions in Sect. 6. ## 2 Magnetic observations and data selection We used vector observations of the magnetic field made by the CHAMP and CryoSat2 satellites, and the three satellites of the _Swarm_ constellation, Swarm-A, Swarm-B and Swarm-C, from 2001 to the end of 2021. From the CHAMP mission, we used the Level \(3\,1\,\mathrm{Hz}\) magnetic data, product CH-ME-3-MAG (Rother et al., 2019), between January 2001 and August 2010, which we downsampled to \(1\,\mathrm{min}\) values. We selected data according to the recommended quality flags that are provided in the distributed CHAMP data product files (GFZ Section 2.3 2019). However, we did not require that both star camera heads on the boom close to the vector magnetometer were active and provided attitude information at the time of measurement since this created gaps in the global distribution of magnetic data at low and mid latitudes during dusk and dawn. More specifically, we allowed data if at least one of the two star camera heads was available. 
To account for the corresponding increase in the uncertainty of the attitude information, we chose larger a-priori attitude errors compared to when both star camera heads were active (see Sect. 3.4.1 for details). Concerning CryoSat2, we used fully calibrated \(4\,\mathrm{s}\) magnetic vector data from the onboard fluxgate magnetometer FGM1 (CryoSat2-1), version 0103, from August 2010 to the end of 2013. These data have been calibrated as described in Olsen et al. 2020. We reduced the dataset to 1 min values through the following steps. First, we used estimates of the time-dependent internal field and the CryoSat2-1 Euler angles from CHAOS-7.9 to compute residuals in the calibrated magnetometer frame. Then, we performed a Huber-weighted linear regression of the residuals within \(20\,\mathrm{s}\) intervals and kept one fit value from each interval. Finally, we added back the previously subtracted model estimates but retained only every third value to obtain a reduced time series of approximately \(1\,\mathrm{min}\) resolution. By using \(20\,\mathrm{s}\) intervals for the linear fit, we followed Olsen et al. 2020, who recommends averaging over five successive values to reduce the intrinsic noise. In addition, we removed data if the attitude uncertainty \(q_{\mathrm{error}}\), which is provided in the CryoSat2 data product files, was larger than \(40\,\mathrm{arcseconds}\). From the _Swarm_ mission, we made use of the Level \(1\,\mathrm{b}\)\(1\,\mathrm{Hz}\) magnetic vector data from all three satellites (Swarm-A, Swarm-B and Swarm-C), versions 0505-0508 as available, from November 2013 to the end of 2021. We downsampled the magnetic data from each satellite to \(3\,\mathrm{min}\) values to have a similar amount of data per time interval as for CHAMP and CryoSat2. On the entire dataset of magnetic observations, we applied several selection criteria to focus on geomagnetic quiet-time conditions. First, we removed gross outliers for which vector residuals with respect to the CHAOS-7.9 field model were greater than \(1000\,\mathrm{nT}\). We note that this approach also removed magnetic signals in the data associated with field-aligned currents, which can reach several thousands of \(\mathrm{nT}\) in the polar regions also during geomagnetic quiet-time conditions. Similarly, the averaging of the CryoSat-2 data, as described above, removed high-frequency ionospheric magnetic signals, i.e., signals that varied along the satellite orbit on timescales much shorter than the 20-second interval used for averaging. Nevertheless, since we do not expect that our approach of modelling the average ionospheric field is able to capture intermittent high-amplitude events, we preferred to remove these data and to process the CryoSat-2 data in this way to improve the overall quality of the model. Next, to focus on geomagnetically quiet conditions, we selected data for which the \(Kp\) index (Matzka et al., 2021; Matzka et al., 2021) was below \(20\,\mathrm{a}\) and the absolute rate of change of the \(RC\) index (Olsen et al., 2014), a quantitative measure of the magnetic disturbance at equatorial and mid-latitudes similar to the \(Dst\) index (Sugiura et al., 1991), was below \(2\,\mathrm{nT}\,\mathrm{h}^{-1}\). Furthermore, we selected data if, on average over \(2\,\mathrm{h}\) prior to the time of measurement, the Newell coupling function (Newell et al., 2007, for the exact definition used in this study, see Eq. 
(17), measuring the rate of magnetic flux opened at the magnetopause, was below \(2.4\) and the IMF at the magnetopause was pointing northward, i.e. having a positive z-component in the Geocentric Solar Magnetic (GSM) frame. The data processing and selection resulted in \(N_{d}=2,\!472,\!746\) vector observations, which we used for estimating models of the geomagnetic field. To illustrate the data distribution in time, we show in Fig. 1 a stacked histogram of the amount of data in 3-month intervals for each satellite dataset. We did not treat data differently depending on dark and sunlit conditions during the model estimation to avoid seasonal variations in the data distribution. Otherwise using, for example, only dark data for the estimation of the internal field would adversely affect the time-dependence of the associated model parameters, unless sufficiently smoothed in time through regularisation. This is due to an annual variation in the data distribution, which is created by the periodic exclusion of the data in the polar region on the summer hemisphere. Similarly, we did not select data based on magnetic local time for the estimation of the internal field to uniformly sample the polar electrojets. Note that we did not use ground-based magnetic field observations as input data for the modelling both to allow comparisons of the model predictions with independent data and to be sure not to bias the geographical distribution of the input data. ## 3 Model parametrization and estimation In this paper, we largely follow the modelling approach of the CHAOS geomagnetic field model series (Olsen et al., 2009, 2010, 2006, 2014), version CHAOS-7.9 (Finlay et al., 2020). However, a significant difference to the CHAOS models is that we also co-estimate a model of the ionospheric currents based on the AMPS model (Laundal et al., 2018). The following summarises the parametrization of the geomagnetic field sources that are represented in our model and gives the equations used for the model parameter estimation. ### Internal magnetic field Satellites in low Earth orbit take magnetic measurements in a region that is free of electrical currents associated with the internal sources. In the quasi-static approximation, the internal magnetic field can therefore be represented by an internal scalar potential, \(V^{\rm int}\), such that \({\bf B}^{\rm int}=-\nabla V^{\rm int}\). In spherical coordinates \(V^{\rm int}\) is given by \[V^{\rm int}(t,r,\theta,\phi)=a\sum_{n=1}^{N^{\rm int}}\sum_{m=-n}^{n}\left( \frac{a}{r}\right)^{n+1}g_{n}^{m}(t)Y_{n}^{m}(\theta,\phi), \tag{1}\] where \(a=6371.2\,{\rm km}\) is the mean surface radius of the Earth, \(g_{n}^{m}\) are the spherical harmonic coefficients of degree \(n\) and order \(m\), \(Y_{n}^{m}\) are the spherical harmonic functions, and \(N^{\rm int}=55\) is the chosen truncation degree to limit the spatial resolution of the model. The spherical harmonic functions are defined as \[Y_{n}^{m}(\theta,\phi)\equiv\cases{\cos(m\phi)P_{n}^{m}(\cos\theta),\,m\geq 0 \cr\sin(|m|\phi)P_{n}^{|m|}(\cos\theta),\,m<0,\cr} \tag{2}\] where \(\theta\) and \(\phi\) are respectively the geocentric colatitude and longitude, and \(P_{n}^{m}\) are the associated Legendre functions using the Schmidt semi-normalization. 
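For illustration, the radial component of the internal field corresponding to Eqs. (1)-(2) can be synthesised from a set of Gauss coefficients as sketched below. This is a minimal stand-alone sketch (Schmidt semi-normalised Legendre functions computed by recursion) using the conventional \(g_n^m\)/\(h_n^m\) split, which is equivalent to the signed-order convention of Eq. (2); it is not the modelling code used for CHAOS, and the coefficient arrays are placeholders.

```python
import numpy as np
from math import factorial

A = 6371.2  # reference radius [km]

def schmidt_legendre(nmax, theta):
    """Schmidt semi-normalised P_n^m(cos theta), returned as array P[n, m]."""
    x, s = np.cos(theta), np.sin(theta)
    p = np.zeros((nmax + 1, nmax + 1))
    p[0, 0] = 1.0
    for m in range(1, nmax + 1):                       # diagonal terms P_m^m
        p[m, m] = (2 * m - 1) * s * p[m - 1, m - 1]
    for m in range(0, nmax):                           # first off-diagonal P_{m+1}^m
        p[m + 1, m] = (2 * m + 1) * x * p[m, m]
    for m in range(0, nmax + 1):                       # remaining degrees by recursion
        for n in range(m + 2, nmax + 1):
            p[n, m] = ((2 * n - 1) * x * p[n - 1, m] - (n + m - 1) * p[n - 2, m]) / (n - m)
    for n in range(nmax + 1):                          # apply the Schmidt normalisation
        for m in range(1, n + 1):
            p[n, m] *= np.sqrt(2.0 * factorial(n - m) / factorial(n + m))
    return p

def b_radial(g, h, r, theta, phi, nmax):
    """B_r = -dV/dr of Eq. (1) at radius r [km], geocentric colatitude theta and longitude phi [rad].
    g[n, m] and h[n, m] are Gauss coefficients in nT (h[:, 0] unused)."""
    p = schmidt_legendre(nmax, theta)
    br = 0.0
    for n in range(1, nmax + 1):
        for m in range(0, n + 1):
            br += ((n + 1) * (A / r) ** (n + 2)
                   * (g[n, m] * np.cos(m * phi) + h[n, m] * np.sin(m * phi)) * p[n, m])
    return br
```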
We allow the spherical harmonic coefficients for \(n\leq 20\) to be time-dependent using a basis of 6th-order B-splines to account for the slow time changes of the internal field \[g_{n}^{m}(t)=\sum_{k=1}^{K}g_{n,k}^{m}\mathcal{B}_{\phi,k}(t), \tag{3}\] where \(\mathcal{B}_{6,k}\) (\(k=1,\ldots,K\)) are the B-spline basis functions defined on the model interval using a sequence of knots with a \(0.5\,{\rm yr}\) knot spacing and a 6-fold knot multiplicity at the model endpoints. The coefficients for \(21\leq n\leq N^{\rm int}\) are kept constant to represent the high-degree part of the assumed static lithospheric field. ### External magnetic field The sources of the external field are located in the space above the Earth's surface. In our model we distinguish between the magnetospheric field and the ionospheric field. The parametrization of the magnetospheric field is identical to the CHAOS model, whereas the one for the ionospheric field is basically taken from the AMPS model. In this section, we will also introduce magnetic apex coordinate systems, which are important for an efficient parametrization of the ionospheric magnetic field. Figure 1: Number of selected vector data every 3 months for each satellite shown as stacked histogram. #### 3.2.1 Ionospheric field Following the approach of Laundal et al. 2016 we write the ionospheric magnetic field as \[\mathbf{B}^{\mathrm{ion}}=\mathbf{B}^{\mathrm{pol}}+\mathbf{B}^{\mathrm{tor}}=- \nabla V^{\mathrm{ion}}+\hat{\mathbf{r}}\times\nabla T^{\mathrm{ion}}, \tag{4}\] where the poloidal magnetic field \(\mathbf{B}^{\mathrm{pol}}\), written in terms of the scalar potential \(V^{\mathrm{ion}}\), is associated with the currents in the ionospheric E-layer, which flow entirely below the measurement shell of the satellites, whereas the toroidal magnetic field \(\mathbf{B}^{\mathrm{tor}}\), written in terms of the potential \(T^{\mathrm{ion}}\), is associated with the field-aligned currents that couple the polar ionosphere to the magnetosphere (Birkeland currents). To take advantage of the fact that the currents in the ionosphere are highly organised with respect to the geomagnetic field, we specify the potentials in magnetic apex coordinate systems defined by Richmond 1995. There are two of these systems: Quasi-Dipole (QD) and Modified-Apex (MA). In QD coordinates the latitude is defined as \[\lambda_{\mathrm{QD}}=\pm\arccos\sqrt{\frac{a+h}{a+h_{\mathrm{A}}}}, \tag{5}\] where positive (negative) values refer to the northern (southern) magnetic hemisphere, \(h\) is the geodetic height of the point of interest, and \(h_{\mathrm{A}}\) is the geodetic height of the apex, which is the highest point above the Earth's ellipsoidal surface along the magnetic field line, as given by a geomagnetic field model, that passes through the point of interest. The longitude of the QD coordinate system is defined as the longitude of the apex in centred dipole coordinates, a coordinate system where the z-axis points along Earth's magnetic dipole axis towards the northern hemisphere, the y-axis is perpendicular to both the dipole axis and the rotation axis, and the x-axis completes the right-handed system (Laundal et al., 2017). In Modified-Apex (MA) coordinates the latitude is defined as \[\lambda_{\mathrm{MA}}=\pm\arccos\sqrt{\frac{a+h_{\mathrm{R}}}{a+h_{\mathrm{A} }}}, \tag{6}\] where \(h_{\mathrm{R}}\) is a chosen reference height for the mapping, which we set to \(h_{\mathrm{R}}=110\,\mathrm{km}\). 
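The defining relations in Eqs. (5)-(6) are straightforward to evaluate once the apex height of a field line is known; in practice the apex height follows from field-line tracing with a package such as Apexpy, mentioned below. The small sketch that follows only implements the two formulas, with the apex height given as input.

```python
import numpy as np

A = 6371.2  # mean Earth radius used in the apex definitions [km]

def qd_latitude(h, h_apex, hemisphere=1):
    """Quasi-Dipole latitude, Eq. (5): geodetic height h and apex height h_apex in km."""
    return hemisphere * np.degrees(np.arccos(np.sqrt((A + h) / (A + h_apex))))

def ma_latitude(h_apex, h_ref=110.0, hemisphere=1):
    """Modified-Apex latitude, Eq. (6), for the reference height h_ref = 110 km."""
    return hemisphere * np.degrees(np.arccos(np.sqrt((A + h_ref) / (A + h_apex))))

# A field line with its apex at 2000 km, sampled at 450 km satellite altitude:
print(qd_latitude(450.0, 2000.0), ma_latitude(2000.0))   # QD and MA latitudes in degrees
```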
The MA latitude is positive for points that map to the northern magnetic hemisphere and negative otherwise. The longitude of the MA coordinate system is identical to the QD longitude. Since both are equal, they can be used interchangeably. Coordinates and base vectors of the two magnetic apex coordinate systems can be conveniently computed with the Python software package Apexpy (Meeren et al., 2021), which is a wrapper of the Fortran library by Emmert et al. 2010. As the reference model for the field line tracing, we used the 13th generation of the International Geomagnetic Reference Field (IGRF; Alken et al., 2021) at epoch 2015.0 throughout the entire model time interval. Using a combination of the apex coordinate systems, again following Laundal et al. 2016, we express the ionospheric potentials in terms of spherical harmonic functions \[V^{\mathrm{ion}}(h,\theta_{\mathrm{QD}},\phi_{\mathrm{MLT}}) =a\sum_{n=1}^{N^{\mathrm{ion}}}\sum_{\begin{subarray}{c}m=-n\\ |m|\leq M\end{subarray}}^{n}\left(\frac{a}{a+h}\right)^{n+1}g_{n}^{\mathrm{m, ion}}Y_{n}^{m}(\theta_{\mathrm{QD}},\phi_{\mathrm{MLT}}) \tag{7a}\] \[T^{\mathrm{ion}}(\theta_{\mathrm{MA}},\phi_{\mathrm{MLT}}) =(a+h_{\mathrm{R}})\sum_{n=1}^{N^{\mathrm{tor}}}\sum_{\begin{subarray} {c}m=-n\\ |m|\leq M\end{subarray}}^{n}T_{n}^{m,\mathrm{ion}}Y_{n}^{m}(\theta_{\mathrm{MA}},\phi_{\mathrm{MLT}}), \tag{7b}\] where \(\theta_{\mathrm{QD}}=\frac{\pi}{2}-\lambda_{\mathrm{QD}}\) and \(\theta_{\mathrm{MA}}=\frac{\pi}{2}-\lambda_{\mathrm{MA}}\) are the QD and MA colatitudes, respectively. We chose to truncate the spherical harmonic representations at \(N^{\mathrm{ion}}=45\) and \(N^{\mathrm{tor}}=65\). In addition, we used a maximum spherical harmonic order of \(M=3\) for both potentials in agreement with Laundal et al. 2018. Instead of the QD and MA longitudes, we used the Magnetic Local Time (MLT) \[\phi_{\mathrm{MLT}}=\phi_{\mathrm{QD}}-\phi_{\mathrm{noon}}+\pi, \tag{8}\] where \(\phi_{\mathrm{noon}}\) is the QD longitude of the subsolar point, computed on a sphere with radius \(r\gg a\), in practice \(r=50a\). Using MLT takes account of the fact that the ionospheric field stays fixed with respect to the sun. By writing \(T^{\mathrm{ion}}\) only in dependence of the MA latitude and MLT, we assume the potential to be constant along the IGRF magnetic field lines; we do not however impose north-south symmetry. By inserting QD colatitude and MLT into the spherical harmonic functions in Eq. 7a, we assume that \(V^{\mathrm{ion}}\) defines a harmonic potential in the source-free region. To test this assumption, we performed numerical computations and found that \(-\nabla V^{\mathrm{ion}}\) is approximately but not strictly divergence-free. The deviations from zero, which are largest in the auroral regions and along the magnetic dip equator, are usually smaller in absolute value than \(1\,\mathrm{nT}\). This is smaller than typical errors due to other unmodelled sources, which remain larger in the polar regions despite co-estimating a climatological model of the ionospheric magnetic field. Our conclusions from these tests is therefore that \(V^{\mathrm{ion}}\), organised in magnetic apex coordinates and magnetic local time, satisfactorily approximates a potential field and is useful for parametrizing the geometry of the ionospheric magnetic field and current densities. Inserting the expressions for the potentials into Eq. 
(4) and evaluating the gradients yields \[\mathbf{B}^{\mathrm{pol}} =-\frac{1}{(a+h)\sin\theta_{\mathrm{QD}}}\frac{\partial V^{ \mathrm{ion}}}{\partial\phi_{\mathrm{MLT}}}\mathbf{f}_{2}\times\hat{\mathbf{k}} -\frac{1}{a+h}\frac{\partial V^{\mathrm{ion}}}{\partial\theta_{\mathrm{QD}}} \mathbf{f}_{1}\times\hat{\mathbf{k}}-\sqrt{|\mathbf{f}_{1}\times\mathbf{f}_{2}|} \frac{\partial V^{\mathrm{ion}}}{\partial h}\hat{\mathbf{k}} \tag{9a}\] \[\mathbf{B}^{\mathrm{tor}} =\frac{1}{(a+h_{\mathrm{R}})\sin\theta_{\mathrm{MA}}}\frac{ \partial T^{\mathrm{ion}}}{\partial\phi_{\mathrm{MLT}}}\hat{\mathbf{k}}\times \mathbf{d}_{1}+\frac{\sqrt{4-3\sin^{2}\theta_{\mathrm{MA}}}}{2(a+h_{\mathrm{R }})\cos\theta_{\mathrm{MA}}}\frac{\partial T^{\mathrm{ion}}}{\partial\theta_{ \mathrm{MA}}}\hat{\mathbf{k}}\times\mathbf{d}_{2} \tag{9b}\] where \(\{\mathbf{d}_{1},\mathbf{d}_{2},\mathbf{f}_{1},\mathbf{f}_{2}\}\) are the non-orthogonal base vectors for the magnetic apex coordinate systems (Laundal et al., 2017), and \(\hat{\mathbf{k}}\) is a unit vector in the geodetic upward direction. Here, we used that \(\hat{\mathbf{r}}\approx\hat{\mathbf{k}}\), which means that the expressions are best suited for describing the ionospheric field in the polar regions. Nevertheless we assume that they also approximate the ionospheric field at low latitudes well. Note that the last term in Eq. (9a) is multiplied with \(\sqrt{|\mathbf{f}_{1}\times\mathbf{f}_{2}|}\) based on the assumption that the vertical component of the ionospheric field scales with the linear dimension of the horizontal current system (Richmond, 1995). The ionospheric magnetic field can be related to an electric sheet current density (in units of \(\mathrm{A}\,\mathrm{m}^{-1}\)) that flows at a fixed height, chosen to be \(h_{\mathrm{R}}\), written in the form of \[\mathbf{J}^{\mathrm{th}}=\mathbf{J}^{\mathrm{df}}+\mathbf{J}^{\mathrm{cf}}= \hat{\mathbf{k}}\times\nabla\psi^{\mathrm{df}}+\nabla\psi^{\mathrm{cf}}, \tag{10}\] where \(\mathbf{J}^{\mathrm{df}}\) is the divergence-free part of the sheet current density associated with \(\mathbf{B}^{\mathrm{pol}}\) and \(\mathbf{J}^{\mathrm{cf}}\) is the curl-free part associated with \(\mathbf{B}^{\mathrm{tor}}\). The potentials of the sheet current density parts are \[\psi^{\mathrm{df}}(t,\theta_{\mathrm{QD}},\phi_{\mathrm{MLT}}) =-\frac{a}{\mu_{0}}\sum_{n=1}^{N^{\mathrm{low}}}\sum_{\begin{subarray} {c}m=-n\\ |m|\leq M\end{subarray}}^{n}\frac{2n+1}{n}\left(\frac{a}{a+h_{\mathrm{R}}} \right)^{n+1}g_{n}^{m,\mathrm{ion}}(t)Y_{n}^{m}(\theta_{\mathrm{QD}},\phi_{ \mathrm{MLT}}) \tag{11a}\] \[\psi^{\mathrm{cf}}(t,\theta_{\mathrm{MA}},\phi_{\mathrm{MLT}}) =-\frac{a+h_{\mathrm{R}}}{\mu_{0}}\sum_{n=1}^{N^{\mathrm{low}}} \sum_{\begin{subarray}{c}m=-n\\ |m|\leq M\end{subarray}}^{n}T_{n}^{m,\mathrm{ion}}(t)Y_{n}^{m}(\theta_{\mathrm{ MA}},\phi_{\mathrm{MLT}}), \tag{11b}\] which were derived by treating the apex coordinates as if they were orthogonal, following the approach of Laundal et al. (2018). 
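As mentioned above, Apexpy provides the apex coordinates and base vectors that enter Eqs. (9a) and (9b). The following minimal sketch assumes the Apexpy interface (the `Apex` class with its `geo2qd`, `geo2apex`, `mlon2mlt` and `basevectors_apex` methods); the location and time are arbitrary examples.

```python
import datetime as dt
from apexpy import Apex

# Field-line tracing with the IGRF at epoch 2015.0 and the reference height used in the text
apex = Apex(date=2015.0, refh=110)

glat, glon, height = 77.0, 15.5, 450.0            # geodetic position (deg, deg, km)
qlat, qlon = apex.geo2qd(glat, glon, height)      # Quasi-Dipole coordinates, Eq. (5)
mlat, mlon = apex.geo2apex(glat, glon, height)    # Modified-Apex latitude, Eq. (6)

# Magnetic local time (in hours) for a given universal time; phi_MLT = MLT * pi / 12
mlt = apex.mlon2mlt(mlon, dt.datetime(2018, 3, 21, 12, 0))

# Non-orthogonal base vectors, of which d1, d2, f1 and f2 enter Eqs. (9a)-(9b)
f1, f2, f3, g1, g2, g3, d1, d2, d3, e1, e2, e3 = apex.basevectors_apex(
    glat, glon, height, coords='geo')
```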
The curl-free part of the sheet current density can be furthermore related to an upward current density \(J_{u}\) (in units of \(\mathrm{A}\,\mathrm{m}^{-2}\)) through \(J_{u}=-\nabla\cdot\mathbf{J}^{\mathrm{cf}}\), a statement of current continuity, which yields at the reference height \[J_{u}(t,\theta_{\mathrm{MA}},\phi_{\mathrm{MLT}})=-\frac{1}{\mu_{0}(a+h_{ \mathrm{R}})}\sum_{n=1}^{N^{\mathrm{low}}}\sum_{\begin{subarray}{c}m=-n\\ |m|\leq M\end{subarray}}^{n}n(n+1)T_{n}^{m,\mathrm{ion}}(t)Y_{n}^{m}(\theta_{ \mathrm{MA}},\phi_{\mathrm{MLT}}). \tag{12}\] At polar latitudes, where the magnetic field lines are close to vertical, \(J_{u}\) can be interpreted as field-aligned currents and \(\mathbf{J}^{\mathrm{cf}}\) as the horizontal closure of these currents in the form of a sheet current. Instead of parametrizing the expansion coefficients \(g_{n}^{m,\mathrm{ion}}\) and \(T_{n}^{m,\mathrm{ion}}\) in time explicitly, we followed the climatological approach of the AMPS model (Laundal et al., 2018) and wrote these coefficients as linear combinations of external driving parameters \(X_{i}\) (\(i=1,\ldots,19\)) so that \[g_{n}^{m,\mathrm{ion}}(t)=g_{n,0}^{m,\mathrm{ion}}+\sum_{i=1}^{19}g_{n,i}^{m, \mathrm{ion}}X_{i}(t) \tag{13}\] and similarly for \(T_{n}^{m,\mathrm{ion}}\). The \(X_{i}\) are combinations of solar wind parameters and IMF components that have been found suitable for characterising the external driving of the ionospheric current system (Laundal et al., 2018; Weimer, 2013) \[\begin{array}{llll}X_{1}=\sin\theta_{\mathrm{c}}&X_{2}=\cos\theta_{ \mathrm{c}}&X_{3}=\epsilon&X_{4}=\epsilon\sin\theta_{\mathrm{c}}\\ X_{5}=\epsilon\cos\theta_{\mathrm{c}}&X_{6}=\beta_{\mathrm{tilt}}&X_{7}= \beta_{\mathrm{tilt}}\sin\theta_{\mathrm{c}}&X_{8}=\beta_{\mathrm{tilt}}\cos \theta_{\mathrm{c}}\\ X_{9}=\epsilon\beta_{\mathrm{tilt}}&X_{10}=\epsilon\beta_{\mathrm{tilt}} \sin\theta_{\mathrm{c}}&X_{11}=\epsilon\beta_{\mathrm{tilt}}\cos\theta_{ \mathrm{c}}&X_{12}=\tau\\ X_{13}=\tau\sin\theta_{\mathrm{c}}&X_{14}=\tau\cos\theta_{\mathrm{c}}&X_{15}= \tau\beta_{\mathrm{tilt}}&X_{16}=\tau\beta_{\mathrm{tilt}}\sin\theta_{ \mathrm{c}}\\ X_{17}=\tau\beta_{\mathrm{tilt}}\cos\theta_{\mathrm{c}}&X_{18}=F_{10.7}&X_{19 }=SML,\end{array} \tag{14}\] which are all functions of time. The terms in Eq. 
(14) involve the clock angle \[\theta_{\mathrm{c}}=\arctan 2(B_{\mathrm{IMF},y},B_{\mathrm{IMF},z}), \tag{15}\] where the components of the IMF, \(B_{\mathrm{IMF},y}\) and \(B_{\mathrm{IMF},z}\), are with respect to the GSM frame; the dipole tilt angle \[\beta_{\mathrm{tilt}}=\arcsin(\hat{\mathbf{s}}\cdot\hat{\mathbf{m}}_{\mathrm{ dip}}), \tag{16}\] where \(\hat{\mathbf{s}}\) is a unit vector in the direction of the sun and \(\hat{\mathbf{m}}_{\mathrm{dip}}\) is the dipole moment of the IGRF magnetic field, parametrizes seasonal effects; the solar wind-magnetospheric coupling function (Newell et al., 2007) \[\epsilon=10^{-3}|v_{\mathrm{sw}}|^{4/3}{B_{t}}^{2/3}\sin^{8/3}\frac{|\theta_{ \mathrm{c}}|}{2}, \tag{17}\] where \(B_{t}=\sqrt{B_{\mathrm{IMF},y}^{2}+B_{\mathrm{IMF},z}^{2}}\) (given in \(\mathrm{nT}\)) and \(v_{\mathrm{sw}}\) (given in \(\mathrm{km}\,\mathrm{s}^{-1}\)) is the solar wind velocity component antiparallel to the x-axis of the GSM frame, maximises for southward IMF and measures the rate of reconnection on the dayside magnetopause; and the coupling function \[\tau=10^{-3}|v_{\mathrm{sw}}|^{4/3}{B_{t}}^{2/3}\cos^{8/3}\frac{\theta_{ \mathrm{c}}}{2} \tag{18}\] maximises for northward IMF and measures the rate of lobe reconnection in the magnetotail. To approximate the delay of the near-Earth space environment to adjust to changes in the external driving, we used \(20\,\mathrm{min}\) moving averages of \(\epsilon\), \(\tau\) and the clock angle, based on \(1\,\mathrm{min}\) values propagated to the magnetopause as provided by the OMNI database (King et al., 2005). The solar radiation index \(F_{10.7}\) in units of solar flux, \(\text{sfu}\equiv 10^{-22}\,\text{W}\,\text{m}^{-2}\,\text{Hz}^{-1}\), parametrizes solar cycle variations. Expanding on the original parametrization of AMPS, we included as \(X_{19}\) the \(\mathit{SML}\) index (Newell et al., 2011), developed by the SuperMAG initiative (Gjerloev, 2009, 2012), to parametrize indirectly driven currents in the polar ionosphere. With typically more than 100 contributing ground-based magnetometer stations, the \(\mathit{SML}\) index can be considered an extension of the traditionally used \(\mathit{AL}\) index, which is based on only 12 stations, to monitor nightside auroral activity. Fig. 2 shows stacked histograms of the number of selected magnetic vector data in dependence of the external driving parameters at the time of measurement. #### 3.2.2 Magnetospheric field The magnetic field \(\mathbf{B}^{\text{mag}}\) produced by electric currents in the magnetosphere can be separated into contributions due to the ring current in the near-Earth magnetosphere, \(\mathbf{B}^{\text{near}}\), and the currents in the remote magnetosphere, \(\mathbf{B}^{\text{far}}\), so that \(\mathbf{B}^{\text{mag}}=\mathbf{B}^{\text{near}}+\mathbf{B}^{\text{far}}\). We present the parametrization of each contribution in detail below. The magnetic field produced by the ring current in the near-Earth magnetosphere is written as \(\mathbf{B}^{\text{near}}=-\nabla V^{\text{near}}\) using a scalar potential in Solar Magnetic (SM) coordinates, where the z-axis is anti-parallel to the dipole axis of the Earth's magnetic field, the x-axis is in the plane spanned by the dipole axis and the Earth-Sun line, and the y-axis completes the right-handed system (Laundal et al., 2017). 
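Returning briefly to the ionospheric parametrization, the external driving parameters of Eqs. (14)-(18) are simple algebraic combinations of the IMF components, the solar wind speed, the dipole tilt angle, \(F_{10.7}\) and \(SML\). The sketch below uses single illustrative input values rather than the \(20\,\mathrm{min}\) moving averages employed in the model, and the expansion coefficients in the last lines are placeholders.

```python
import numpy as np

def driving_parameters(By, Bz, v_sw, tilt, F107, SML):
    """The 19 external driving parameters of Eq. (14).
    By, Bz: IMF GSM components (nT); v_sw: solar wind speed (km/s);
    tilt: dipole tilt angle (rad); F107 (sfu); SML (nT)."""
    theta_c = np.arctan2(By, Bz)                                                 # Eq. (15)
    Bt = np.hypot(By, Bz)
    eps = 1e-3 * abs(v_sw)**(4/3) * Bt**(2/3) * np.sin(abs(theta_c)/2)**(8/3)    # Eq. (17)
    tau = 1e-3 * abs(v_sw)**(4/3) * Bt**(2/3) * np.cos(theta_c/2)**(8/3)         # Eq. (18)
    s, c = np.sin(theta_c), np.cos(theta_c)
    return np.array([s, c, eps, eps*s, eps*c, tilt, tilt*s, tilt*c,
                     eps*tilt, eps*tilt*s, eps*tilt*c, tau, tau*s, tau*c,
                     tau*tilt, tau*tilt*s, tau*tilt*c, F107, SML])

X = driving_parameters(By=2.0, Bz=-1.5, v_sw=400.0, tilt=np.radians(15.0), F107=80.0, SML=-40.0)

# Eq. (13): an expansion coefficient is a constant plus a linear combination of the X_i;
# g0 and the length-19 coefficient vector are placeholders for the estimated model values
g0, g_coeffs = 0.0, np.zeros(19)
g_ion = g0 + g_coeffs @ X
```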
The scalar potential is given by \[\begin{split} V^{\text{near}}(t,r,\theta_{\text{SM}},\phi_{ \text{SM}})&=a\sum_{n=1}^{N^{\text{near}}}\sum_{m=-n}^{n}\left( \frac{r}{a}\right)^{n}q_{n}^{m,\text{SM}}(t)Y_{n}^{m}(\theta_{\text{SM}},\phi_ {\text{SM}})\\ &+a\sum_{m=-1}^{1}\hat{q}_{1}^{m,\text{SM}}\bigg{[}\mathit{RC}_{ i}(t)\left(\frac{a}{r}\right)^{2}+\mathit{RC}_{e}(t)\left(\frac{r}{a}\right) \bigg{]}Y_{1}^{m}(\theta_{\text{SM}},\phi_{\text{SM}})\\ &+\text{Earth-induced counterpart}\end{split} \tag{19}\] where \(N^{\text{near}}=2\) is the chosen truncation degree, \(\hat{q}_{1}^{m,\text{SM}}\) are constant regression parameters multiplying the \(\mathit{RC}\) index, which consists of an internal part, \(\mathit{RC}_{i}\), and an external part, \(\mathit{RC}_{e}\), so that \(\mathit{RC}=\mathit{RC}_{i}+\mathit{RC}_{e}\). We estimated the spherical harmonic coefficients \(q_{n}^{m,\text{SM}}\) with \(n=1\), called \(\mathit{RC}\)-baseline corrections, in bins of 30 days, except for a single bin covering the period from August 2010 to January 2014, when only platform magnetometer data of CryoSat2 was available. The coefficients for \(n=2\) were treated as constants over the entire model time interval. The potential of the internally induced field was not estimated separately but coupled to the external potential by means of Q-responses, which are based on models of Earth's electrical conductivity. These Q-responses have also been used for the decomposition of the \(\mathit{RC}\) index into internal and external parts. The reader is referred to Finlay et al. 2020 for details concerning the treatment of induced fields in CHAOS-7, which is also the approach used here. The magnetic field produced by the remote sources in the magnetosphere, assumed to primarily be the magnetopause and magnetotail currents, is written as \(\mathbf{B}^{\text{far}}=-\nabla V^{\text{far}}\) using an axisymmetric, static scalar potential in GSM coordinates, where the x-axis points sunward along the Earth-Sun line, the z-axis is contained within the dipole axis and the Earth-Sun line, while the y-axis completes the right-handed Figure 2: Stacked histograms showing the number of selected magnetic vector data in dependence of the driving parameters for the ionospheric field at the time of measurement. The colours indicate the different satellite data sets. system (Laundal et al., 2017). The potential is given by \[V^{\rm far}(r,\theta_{\rm GSM},\phi_{\rm GSM}) =a\sum_{n=1}^{N^{\rm drag}}\left(\frac{r}{a}\right)^{n}q_{n}^{0,\rm GSM }Y_{n}^{0}(\theta_{\rm GSM},\phi_{\rm GSM}) \tag{20}\] \[+\mbox{Earth-induced counterpart}\] where \(N^{\rm mag}=2\) is the chosen truncation degree. The treatment of the internally induced part is similar to the approach used for \(\mathbf{B}^{\rm near}\). ### Alignment parameters We estimate alignment parameters in the form of three Euler angles \(\alpha\), \(\beta\) and \(\gamma\) for each satellite to rotate the magnetic vector components from the frame of the Vector Field Magnetometer (VFM) to the Common Reference Frame (CRF), which is defined by the orientation of the onboard star cameras. 
The alignment can be written in matrix notation as \[\mathbf{B}_{\rm CRF}=\mathbf{R}_{3}(\gamma)\mathbf{R}_{2}(\beta)\mathbf{R}_{1} (\alpha)\mathbf{B}_{\rm VFM}, \tag{21}\] where \(\mathbf{B}_{\rm CRF}\) and \(\mathbf{B}_{\rm VFM}\) are column vectors that contain the magnetic field components with respect to the CRF and the VFM frame, respectively, and \(\mathbf{R}_{1}\), \(\mathbf{R}_{2}\) and \(\mathbf{R}_{3}\) are rotation matrices given by \[\mathbf{R}_{1}(\alpha)=\begin{pmatrix}1&0&0\\ 0&\cos\alpha&-\sin\alpha\\ 0&\sin\alpha&\cos\alpha\end{pmatrix},\quad\mathbf{R}_{2}(\beta)=\begin{pmatrix} \cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{pmatrix},\quad\mathbf{R}_{3}(\gamma)=\begin{pmatrix} \cos\gamma&-\sin\gamma&0\\ \sin\gamma&\cos\gamma&0\\ 0&0&1\end{pmatrix}. \tag{22}\] Another rotation based on the quaternions provided in the data product files, which describe the rotation from CRF to an Earth-fixed frame in dependence on satellite position and orientation, was then performed to obtain the vector magnetic field in terms of geocentric spherical components. We estimated the above Euler angles in bins of 30 days to allow for time variations. ### Model estimation The model parameters are arranged into a column vector \(\mathbf{m}=[\mathbf{p}^{\rm T},\mathbf{q}^{\rm T}]^{\rm T}\), where the column vector \(\mathbf{p}\) contains the parameters of the geomagnetic field model and the column vector \(\mathbf{q}\) contains the Euler angles for the alignment of the magnetic vector observations. We solved for the model parameter vector by iteratively minimising the following cost function using a quasi-Newton scheme \[\Phi(\mathbf{m})=[\mathbf{g}(\mathbf{p})-\mathbf{d}(\mathbf{q})]^{\rm T} \mathbf{C}_{d}^{-1}[\mathbf{g}(\mathbf{p})-\mathbf{d}(\mathbf{q})]+\mathbf{m} ^{\rm T}\mathbf{\Lambda}\mathbf{m} \tag{23}\] where \(\mathbf{g}(\mathbf{p})\) is a column vector containing the model estimates of the magnetic vector components, \(\mathbf{d}(\mathbf{q})\) is a column vector containing the aligned magnetic vector observations expressed in terms of spherical geocentric components, \(\mathbf{C}_{d}^{-1}\) is the inverse of the data error covariance matrix, and \(\mathbf{\Lambda}\) is the model regularisation matrix. At each iteration \(k\) the model parameter vector is updated through \[\mathbf{m}_{k+1}=\mathbf{m}_{k}+(\mathbf{G}_{k}^{\rm T}\mathbf{C}_{d}^{-1} \mathbf{G}_{k}+\mathbf{\Lambda})^{-1}[\mathbf{G}_{k}^{\rm T}\mathbf{C}_{d}^{- 1}(\mathbf{d}_{k}-\mathbf{g}_{k})-\mathbf{\Lambda}\mathbf{m}_{k}], \tag{24}\] where \(\mathbf{g}_{k}=\mathbf{g}(\mathbf{p}_{k})\), \(\mathbf{d}_{k}=\mathbf{d}(\mathbf{q}_{k})\) and \(\mathbf{G}_{k}\) is the matrix of partial derivatives of the residuals with respect to the model parameter vector \[\left(\mathbf{G}_{k}\right)_{ij}=\frac{\partial[\mathbf{g}(\mathbf{p})-\mathbf{ d}(\mathbf{q})]_{i}}{\partial(\mathbf{m})_{j}}\Big{|}_{\mathbf{m}=\mathbf{m}_{k}}. \tag{25}\] #### 3.4.1 Data error covariances In the data error covariance matrix we account for the instrument error and the uncertainty in the attitude information provided by the star trackers. The error contributions are most conveniently described in the B23 frame, which is defined by unit base vectors in the direction of \(\mathbf{B}\), \(\hat{\mathbf{n}}\times\mathbf{B}\) and \(\mathbf{B}\times(\hat{\mathbf{n}}\times\mathbf{B})\), where \(\hat{\mathbf{n}}\) is the star camera bore sight assumed not parallel to \(\mathbf{B}\). 
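For completeness, the B23 basis can be constructed directly from the field vector and the bore sight direction; the following is a minimal sketch with illustrative input vectors, both given in the same frame.

```python
import numpy as np

def b23_basis(B, n):
    """Orthonormal basis of the B23 frame: along B, along n x B, and along B x (n x B)."""
    e1 = B / np.linalg.norm(B)
    e2 = np.cross(n, B)
    e2 = e2 / np.linalg.norm(e2)
    e3 = np.cross(e1, e2)          # unit vector along B x (n x B)
    return np.vstack([e1, e2, e3])

B = np.array([20000.0, -3000.0, 45000.0])   # illustrative field vector (nT)
n = np.array([0.0, 0.0, 1.0])               # illustrative bore sight direction
B23 = b23_basis(B, n) @ B                   # components of B in the B23 frame
```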
In this reference frame the data error covariance matrix is diagonal and given by Holme et al. 1996 \[\mathbf{C}_{\rm B23}=\mathrm{diag}[\sigma^{2},\sigma^{2}+B^{2}\chi^{2}-(\chi^{ 2}-\psi^{2})(\hat{\mathbf{n}}\cdot\mathbf{B})^{2},\sigma^{2}+B^{2}\psi^{2}] \tag{26}\] where \(\sigma\) (in \(\mathrm{nT}\)) is an isotropic instrument error in the vector component magnitudes, \(\chi\) (in radians) is an error in the attitude about \(\hat{\mathbf{n}}\), \(\psi\) (in radians) is an error in the attitude about the two axes perpendicular to \(\hat{\mathbf{n}}\). In the B23 frame, we multiplied the inverse of the data error covariance matrix, which is diagonal, by the Huber weights (Constable, 1988; Huber, 2004), which we recomputed from the residuals at each iteration. The use of Huber weights allows the robust estimation of model parameters in the presence of long-tailed error distributions due to sources of error besides the instrument and attitude errors. We also applied a \(\sin\theta\) weighting to compensate for the larger amount of data near the poles due to the high-inclination orbits of the satellites. Tab. 1 gives an overview of the used a-priori instrument and attitude errors, which are based on results of previous modelling efforts, most notably the CHAOS model series. The star camera bore sight \(\hat{\mathbf{n}}\), which is aligned with the z-axis of the CRF frame, was taken from the data product files. Note that star cameras often consist of several head units, in which case the bore sight direction is a weighted average of the directions of the individual head units that were active at the time of measurement. Also note that in our case, the value of \(\hat{\mathbf{n}}\) is in fact arbitrary since we assume \(\psi\) and \(\chi\) to be equal. However, \(\hat{\mathbf{n}}\) is important in the case of CHAMP when only one of the two head units was active at a time. #### 3.4.2 Model regularisation The model regularisation matrix \(\mathbf{\Lambda}\) aids the convergence of the model estimation by applying smoothing penalties on the model parameters. It is a block-diagonal matrix, where each block corresponds to a penalty measure scaled with an adjustable parameter, the regularisation parameter. To reduce the temporal variation of the internal field, we used a regularisation term based on the squared value of the third time-derivative of the radial internal field at the Core-Mantle Boundary (CMB) (\(r=3485\,\mathrm{km}\)), averaged over both the entire model time interval and the CMB, and another regularisation term based on the squared value of the second time-derivative of the radial internal field at the CMB, evaluated at the model endpoints, \(t_{\mathrm{s}}=2001.0\) and \(t_{\mathrm{e}}=2022.0\) in units of decimal years, and averaged over the CMB. The corresponding regularisation parameters are \(\lambda_{t}\), \(\lambda_{t_{\mathrm{s}}}\) and \(\lambda_{t_{\mathrm{e}}}\), respectively. The time variation of each \(RC\)-baseline correction, \(\{q_{1}^{-1,\mathrm{SM}},q_{1}^{0,\mathrm{SM}},q_{1}^{1,\mathrm{SM}}\}\), was minimised using a quadratic norm of the bin-to-bin differences, which is scaled by the regularisation parameter \(\lambda_{\mathrm{mag}}\). For the ionospheric field, we implemented two regularisation terms. For the first term, instead of directly applying a regularisation on the poloidal ionospheric magnetic field \(\mathbf{B}^{\mathrm{pol}}\), we designed a quadratic norm based on the associated divergence-free sheet currents in the ionospheric E-layer. 
More specifically, this regularisation term is based on the squared magnitude of the average divergence-free sheet currents as seen by an Earth-fixed observer, integrated over the spherical surface, which can be written as a quadratic form \[\mathbf{m}^{\mathrm{T}}\mathbf{\Lambda}^{\mathrm{pol}}\mathbf{m}=\sum_{s\in\{r, \theta,\phi\}}\frac{1}{4\pi}\int_{S(r_{0})}\left[\frac{1}{N_{d}}\sum_{i=1}^{N_ {d}}J_{s}^{\mathrm{d}t}(t_{i},r_{0},\theta,\phi)\right]^{2}\sin\theta\mathrm{d }\theta\mathrm{d}\phi, \tag{27}\] where \(S(r_{0})\) is the spherical surface of radius \(r=r_{0}\equiv a+h_{\mathrm{R}}\), \(J_{s}^{\mathrm{df}}\) with \(s\in\{r,\theta,\phi\}\) are the geocentric spherical components of the divergence-free sheet current density [see Eqs. (10) and (11a)] as given by the model. The surface integral was implemented by, first, computing the components of the divergence-free sheet current density on a Gauss-Legendre grid in spherical geocentric coordinates given the external driving parameter values at the times \(t_{i}\) in the input dataset, then, forming the arithmetic mean of each component, and, finally, integrating the sum of the squared component means over the sphere using the integration weights. When strongly enforced by choosing a large value of the associated regularisation parameter \(\lambda_{\mathrm{pol}}\), the regularisation pushes to zero the component means of the divergence-free sheet current density with respect to an Earth-fixed frame. Note that the currents can still change in time as required by the magnetic data but only to the extent that the time average remains small. We found that this form of regularisation helps to resolve the ambiguity between the internal field and the poloidal ionospheric field, which is caused by the fact that both fields have sources that are internal with respect to the satellites. Without the regularisation, the internal field showed artefacts in the form of near-zonal patterns that were almost time-invariant and parallel to lines of constant QD latitude and the divergence-free sheet currents were organised into a single cell of current encircling the magnetic poles, very different from the expected configuration consisting of two cells of current separated by the noon-midnight meridian. Regarding the second regularisation term, for the toroidal ionospheric field, we followed the AMPS model by using a regularisation term based on the spatial power spectrum of the toroidal field to prevent large amplitudes close to the magnetic dip equator, where the mapping of points at satellite altitude to MA coordinates leaves a gap in \(\theta_{\mathrm{MA}}\). The associated regularisation matrix \(\mathbf{\Lambda}^{\mathrm{tor}}\) is diagonal with entries \(\frac{n(n+1)}{2n+1}\), which depend on the degree of the expansion coefficients \(T_{n}^{n,\mathrm{ion}}\). The associated regularisation parameter is \(\lambda_{\mathrm{tor}}\). We derived three geomagnetic field models: the first model, referred to as _Model-A_, accounts for the ionospheric field and is our preferred model, whereas the second model, denoted _Reference_, is identical except for omitting the ionospheric part. The third model, _Model-B_ is identical to _Model-A_, but we reduced the temporal regularisation of the internal field part of the model. Tab. 2 summarises the parametrization of the three models and gives the numerical values of the regularisation parameters used in this study. 
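For reference, the surface average in Eq. (27) can be evaluated with Gauss-Legendre quadrature in colatitude and a uniform grid in longitude. The sketch below assumes a hypothetical function `jdf_components` returning the three spherical components of \(\mathbf{J}^{\mathrm{df}}\) predicted by the model at a given time on the grid.

```python
import numpy as np

def average_jdf_penalty(jdf_components, times, n_theta=90, n_phi=180):
    """Numerical evaluation of the quadratic form in Eq. (27).
    jdf_components(t, theta, phi) must return an array of shape (3, n_theta, n_phi)."""
    x, w = np.polynomial.legendre.leggauss(n_theta)     # nodes and weights in x = cos(theta)
    theta = np.arccos(x)
    phi = np.linspace(0.0, 2*np.pi, n_phi, endpoint=False)
    dphi = 2*np.pi / n_phi
    th, ph = np.meshgrid(theta, phi, indexing='ij')
    # time average of each component as seen by an Earth-fixed observer
    mean_j = sum(jdf_components(t, th, ph) for t in times) / len(times)
    # the sin(theta) dtheta factor is absorbed in the Gauss-Legendre weights
    integrand = np.sum(mean_j**2, axis=0)
    return (integrand * w[:, None] * dphi).sum() / (4*np.pi)

# Illustration with a dummy, uniform eastward sheet current of 10 mA/m
penalty = average_jdf_penalty(lambda t, th, ph: np.stack([0*th, 0*th, 10.0 + 0*th]), times=[0.0])
```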
The iterative minimisation for the model estimation was initialised with a starting model \(\mathbf{m}_{0}\), which we chose to be CHAOS-7.9 for the internal and magnetospheric model parts and, if included, zero-valued parameters for the ionospheric model part. Concerning the Euler angles, we used the initial values that have been determined during the pre-flight calibration on ground for CHAMP (Schwintzer et al., 2002) and Swarm (Toffner-Clausen et al., 2019), and during in-flight calibration for CryoSat2-1 (Olsen et al., 2020). Convergence was typically reached after 15 iterations. \begin{table} \begin{tabular}{l r r r} \hline \hline & \(\sigma\) (\(\mathrm{nT}\)) & \(\psi\) (arcsec) & \(\chi\) (arcsec) \\ Dataset & & & \\ \hline CHAMP & 2.5 & 10 & \(10^{\dagger}\) \\ CryoSat2-1 & 6.0 & 30 & 30 \\ Swarm-A & 2.2 & 5 & 5 \\ Swarm-B & 2.2 & 5 & 5 \\ Swarm-C & 2.2 & 5 & 5 \\ \hline \hline \end{tabular} \({}^{\dagger}\) When both head units of the star camera are active, otherwise \(60\,\mathrm{arcseconds}\). \end{table} Table 1: Adopted instrument and attitude errors for the satellite datasets. ## 4 Results In the following, we report on the achieved misfit of _Reference_ and _Model-A_, present the estimated ionospheric field of _Model-A_, and compare the estimated internal fields of _Reference_ and _Model-A_. _Model-B_ is presented in the second half of this section, where we study the effect of relaxing the temporal regularisation of the internal field model when an ionospheric field is co-estimated. \begin{table} \begin{tabular}{l l l} \hline \hline **Internal field** & & \\ Time-dependent field & S: Spherical harmonics in geographic coordinates (\(n\leq 20\)) & \\ & T: 6th-order B-splines, \(0.5\,\mathrm{yr}\) knot spacing, 6-fold endpoint multiplicity & \\ Static field & S: Spherical harmonics in geographic coordinates (\(21\leq n\leq 55\)) & \\ & T: Static in geographic coordinates & \\ \hline **Ionospheric field** & & \\ & Reference & Model-A/Model-B \\ \hline Poloidal field & n/a & S: Spherical harmonics in QD/MLT, \\ & & \(n\leq 45\), \(m\leq 3\) \\ & & T: 19 external driving parameters + constant \\ Toroidal field & n/a & S: Spherical harmonics in MA/MLT, \\ & & \(n\leq 65\), \(m\leq 3\) \\ & & T: 19 external driving parameters + constant \\ \hline **Magnetospheric field** & & \\ Near-magnetospheric field & S: Spherical harmonics in SM, \(n\leq 2\) & \\ & T: Degree-1 spherical harmonic coefficients scaled by hourly \(RC\) index, \\ & degree-2 coefficients static in SM, \(RC\)-baseline corrections estimated & \\ & in bins of 30 days, except for a single bin between 2010-08/2014-01 & \\ Far-magnetospheric field & S: Spherical harmonics in GSM, \(n\leq 2\), \(m=0\) & \\ & T: Static in GSM & \\ \hline **Alignment** & & \\ CHAMP & 3 Euler angles estimated in 118 bins of 30 days length & \\ Swarm-A & 3 Euler angles estimated in 99 bins of 30 days length & \\ Swarm-B & 3 Euler angles estimated in 99 bins of 30 days length & \\ Swarm-C & 3 Euler angles estimated in 99 bins of 30 days length & \\ CryoSat2-1 & 3 Euler angles estimated in 42 bins of 30 days length & \\ \hline **Regularisation** & & \\ & Reference & Model-A & Model-B \\ \hline \(\lambda_{t}\) (\(\mathrm{[nT/yr^{3}]^{-2}}\)) & \(1.0\) & \(1.0\) & \(0.0125\) \\ \(\lambda_{t_{x}}\) (\(\mathrm{[nT/yr^{2}]^{-2}}\)) & \(10^{-2}\) & \(10^{-2}\) & \(1.25\times 10^{-4}\) \\ \(\lambda_{t_{x}}\) (\(\mathrm{[nT/yr^{2}]^{-2}}\)) & \(10^{-2}\) & \(10^{-2}\) & \(1.25\times 10^{-4}\) \\ \(\lambda_{\mathrm{mag}}\) (\(\mathrm{[nT/yr]^{-2}}\)) & 
\(5\times 10^{3}\) & \(5\times 10^{3}\) & \(5\times 10^{3}\) \\ \(\lambda_{\mathrm{pol}}\) (\(\mathrm{[mA/m]^{-2}}\)) & n/a & \(10^{6}\) & \(10^{6}\) \\ \(\lambda_{\mathrm{tor}}\) (\(\mathrm{nT^{-2}}\)) & n/a & \(10^{5}\) & \(10^{5}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the model parametrization and the chosen numerical values of the regularisation parameters for the three estimated models. The description of the parametrization is further divided into spatial (S) and temporal (T), when applicable. ### Fit to the magnetic data To illustrate how well the magnetic field estimates of _Model-A_ fit the magnetic data, we computed vector and scalar residuals, i.e. the component-wise differences \(\Delta B_{r}\), \(\Delta B_{\theta}\) and \(\Delta B_{\phi}\), and the difference in scalar magnitude, \(\Delta F\), between the vector estimates of the magnetic field from _Model-A_ and the magnetic observations in the dataset used for the model estimation. Note that although the scalar component was not used in constructing the models, it is a useful diagnostic and included here. Fig. 3 presents histograms of the vector and scalar residuals for each satellite in bins of \(0.5\,\mathrm{nT}\) width. Irrespective of the residual component and the satellite, the histograms are fairly symmetric and have a single maximum close to zero. The peaks of the histograms for the three _Swarm_ satellites are narrow and very similar in appearance to the extent that they practically overlap. The peaks for CHAMP are slightly broader with correspondingly lower maxima, especially regarding the radial and southward components. The histograms for CryoSat2-1 are even broader with maxima that are at approximately half the value of the _Swarm_ and CHAMP satellites, reflecting the generally higher noise level of these data. _Model-A_ clearly fits the radial and scalar components better than the \(\theta\) and \(\phi\) components, which are more influenced by rapidly varying field-aligned currents, which our parametrization does not capture. Tab. 3 presents Huber-weighted mean and Root-Mean-Square (RMS) values for each satellite and field component, distinguishing between polar (\(|\lambda_{\mathrm{QD}}|>55^{\circ}\)) and non-polar (\(|\lambda_{\mathrm{QD}}|\leq 55^{\circ}\)) latitudes. The RMS values of _Model-A_ are generally lower than those of _Reference_, especially in the case of horizontal and scalar residuals at polar latitudes. For example, the RMS values of \(\Delta B_{\theta}\), \(\Delta B_{\phi}\), and \(\Delta F\) for CHAMP in the polar regions are \(17.35\,\mathrm{nT}\), \(18.89\,\mathrm{nT}\), and \(6.20\,\mathrm{nT}\) for _Model-A_ and \(21.07\,\mathrm{nT}\), \(23.14\,\mathrm{nT}\), and \(8.52\,\mathrm{nT}\) for the reference model, respectively, which corresponds to a reduction of approximately \(20\,\mathrm{\char 37}\). This suggests that co-estimating an ionospheric field improves the fit to the data, in particular, at high latitudes. To further characterise the improvement in the data fit, we investigated median scalar residuals as a function of QD latitude and MLT, which we computed in cells of approximately equal area using a HEALPix (Gorski et al., 2005) grid in QD/MLT coordinates over the entire globe. In Fig. 4 we compare these maps of median scalar residuals for _Model-A_ and the reference model. 
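The equal-area binning of the residuals can be sketched with the healpy package, assuming its `ang2pix` routine and treating the QD colatitude and the MLT (converted from hours to an angle) as polar coordinates; the resolution parameter and the placeholder data are illustrative.

```python
import numpy as np
import healpy as hp

def median_residual_map(qd_colat_deg, mlt_hours, residuals, nside=16):
    """Median of the scalar residuals in approximately equal-area HEALPix cells in QD/MLT."""
    theta = np.radians(qd_colat_deg)               # QD colatitude as the polar angle
    phi = np.asarray(mlt_hours) * 2*np.pi / 24.0   # MLT in hours converted to an angle
    pix = hp.ang2pix(nside, theta, phi)
    med = np.full(hp.nside2npix(nside), np.nan)
    for p in np.unique(pix):
        med[p] = np.median(residuals[pix == p])
    return med

# Example with random placeholder data
rng = np.random.default_rng(0)
m = median_residual_map(rng.uniform(0, 180, 1000), rng.uniform(0, 24, 1000), rng.normal(0, 5, 1000))
```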
Looking at the polar views for the reference model, median scalar residuals are organised into a pattern of two crescent-shaped cells of positive and negative values around each magnetic pole, which reflects the well-known two-cell current system in the polar ionosphere (e.g. Dungey, 1961). Likewise, in the global view for the reference model, the strongly negative values of the median scalar residuals on the dayside at the dip-equator and slightly less negative values at mid-latitudes are associated with the solar quiet current system and the equatorial electrojet (e.g. Yamazaki et al., 2016). In _Model-A_, the median values of the scalar residuals in QD/MLT coordinates are dramatically reduced not only at polar latitudes, where the AMPS approach is expected to work best, but also at low and mid-latitudes, in particular, close to the dip equator on the dayside. The remaining patterns are relatively weak and could possibly be captured by using a higher truncation degree of the ionospheric field model. The fact that the patterns found for the reference model are largely absent for _Model-A_, in the bottom panel of Fig. 4, shows that our approach accounts for previously unmodelled signals associated with ionospheric current systems in the residuals. Since the models in this study are derived only from satellite magnetic data, we can test how well time variations are modelled in comparison to independent observations made by ground-based magnetic observatories of the International Real-time Magnetic Observatory Figure 3: Histograms of vector and scalar residuals for each satellite dataset with respect to _Model-A_. The bin width is \(0.5\,\mathrm{nT}\) and the histograms are normalised to integrate to unit area. The residuals outside the range of \(\pm 30\,\mathrm{nT}\) are not shown but taken into account for the normalisation. Network (INTERMAGNET). These data are available as hourly mean values at the World Data Centres for Geomagnetism in Edinburgh and have been quality checked as explained in Macmillan et al. 2013. In Fig. 5, for example, we compare monthly means of the ionospheric field given by _Model-A_ with those recorded at the ground-based magnetic observatory in Hornsund (HRN), located at \(77.0^{\circ}\)N, \(15.5^{\circ}\)E on Svalbard (Norway) near the northern edge of the auroral oval. For this comparison, we extracted monthly means of the ionospheric field from the timeseries of hourly means at HRN by applying the quiet-time selection in Sect. 2, removing the internal and magnetospheric field estimates given by _Model-A_, and centring the corrected timeseries with the component-wise average to remove the remaining crustal field.
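The observatory processing described above amounts to selecting quiet hours, removing the model predictions, centring and averaging, which can be sketched with pandas as follows; the data frames, the quiet-time mask and the model predictions are placeholders.

```python
import numpy as np
import pandas as pd

def quiet_time_monthly_means(obs_hourly, model_hourly, quiet_mask):
    """Quiet-time monthly means: select quiet hours, remove the internal and magnetospheric
    predictions, centre each component and form monthly averages."""
    corrected = (obs_hourly - model_hourly)[quiet_mask]
    corrected = corrected - corrected.mean()        # remove the remaining crustal bias
    return corrected.resample('1M').mean()

# Illustration with synthetic hourly values
idx = pd.date_range('2014-01-01', '2014-12-31 23:00', freq='1H')
cols = ['Br', 'Btheta', 'Bphi']
obs = pd.DataFrame(np.random.randn(len(idx), 3), index=idx, columns=cols)
mod = pd.DataFrame(np.zeros((len(idx), 3)), index=idx, columns=cols)
monthly = quiet_time_monthly_means(obs, mod, pd.Series(True, index=idx))
```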
\begin{table} \begin{tabular}{c c c c c c c} \hline & & & \(N\) & Model-A & \multicolumn{2}{c}{Reference} \\ \cline{3-6} & & & Mean (nT) & RMS (nT) & Mean (nT) & RMS (nT) \\ \cline{2-6} Dataset & QD latitude & Residual & & & & \\ \hline \multirow{6}{*}{CHAMP} & \multirow{3}{*}{Non-polar} & \(\Delta B_{r}\) & 661357 & 0.43 & 4.18 & 0.38 & 5.27 \\ & & \(\Delta B_{\theta}\) & 661357 & -0.13 & 4.62 & -0.05 & 4.94 \\ & & \(\Delta B_{\phi}\) & 661357 & 0.05 & 5.92 & 0.07 & 6.48 \\ & & \(\Delta F\) & 661357 & -0.03 & 2.97 & -0.23 & 3.96 \\ \cline{2-6} & & \(\Delta B_{r}\) & 410810 & 0.14 & 6.60 & 0.25 & 8.70 \\ & & \(\Delta B_{\theta}\) & 410810 & -0.01 & 17.35 & 0.15 & 21.07 \\ & & \(\Delta B_{\phi}\) & 410810 & 0.03 & 18.89 & 0.31 & 23.14 \\ & & \(\Delta F\) & 410810 & -0.11 & 6.20 & -0.75 & 8.52 \\ \hline \multirow{6}{*}{CryoSat2-1} & \multirow{3}{*}{Non-polar} & \(\Delta B_{r}\) & 285278 & 0.02 & 4.97 & 0.03 & 5.46 \\ & & \(\Delta B_{\theta}\) & 285278 & -0.40 & 6.16 & -0.43 & 6.23 \\ & & \(\Delta B_{\phi}\) & 285278 & -0.02 & 6.32 & 0.11 & 6.66 \\ & & \(\Delta F\) & 285278 & 0.39 & 4.82 & 0.35 & 4.80 \\ \cline{2-6} & & \(\Delta B_{r}\) & 178099 & -0.61 & 7.02 & -0.57 & 7.67 \\ & & \(\Delta B_{\theta}\) & 178099 & 1.01 & 18.18 & 0.94 & 20.75 \\ & & \(\Delta B_{\phi}\) & 178099 & -0.15 & 19.45 & 0.08 & 23.52 \\ & & \(\Delta F\) & 178099 & 0.37 & 6.67 & 0.14 & 7.36 \\ \hline \multirow{6}{*}{Swarm-A} & \multirow{3}{*}{Non-polar} & \(\Delta B_{r}\) & 190878 & 0.09 & 2.78 & 0.04 & 3.96 \\ & & \(\Delta B_{\theta}\) & 190878 & -0.05 & 3.53 & -0.06 & 3.89 \\ & & \(\Delta B_{\phi}\) & 190878 & 0.03 & 5.32 & 0.06 & 5.84 \\ & & \(\Delta F\) & 190878 & -0.09 & 2.62 & -0.26 & 3.56 \\ \cline{2-6} & & \(\Delta B_{r}\) & 118640 & -0.04 & 5.58 & -0.08 & 7.42 \\ & & \(\Delta B_{\theta}\) & 118640 & 0.11 & 15.89 & 0.11 & 19.65 \\ & & \(\Delta B_{\phi}\) & 118640 & 0.05 & 18.83 & 0.39 & 23.28 \\ & & \(\Delta F\) & 118640 & -0.22 & 5.18 & -0.75 & 7.27 \\ \hline \multirow{6}{*}{Swarm-B} & \multirow{3}{*}{Non-polar} & \(\Delta B_{r}\) & 192185 & -0.02 & 2.71 & -0.08 & 3.89 \\ & & \(\Delta B_{\theta}\) & 192185 & -0.12 & 3.46 & -0.10 & 3.79 \\ & & \(\Delta B_{\phi}\) & 192185 & -0.01 & 5.27 & 0.04 & 5.78 \\ & & \(\Delta F\) & 192185 & -0.00 & 2.52 & -0.21 & 3.44 \\ \cline{2-6} & & \(\Delta B_{r}\) & 119224 & -0.12 & 5.18 & -0.13 & 6.83 \\ & & \(\Delta B_{\theta}\) & 119224 & 0.29 & 15.85 & 0.23 & 19.54 \\ & & \(\Delta B_{\phi}\) & 119224 & 0.11 & 18.61 & 0.44 & 22.99 \\ & & \(\Delta F\) & 119224 & 0.05 & 4.70 & -0.48 & 6.62 \\ \hline \multirow{6}{*}{Swarm-C} & \multirow{3}{*}{Non-polar} & \(\Delta B_{r}\) & 194920 & 0.06 & 2.79 & -0.00 & 3.93 \\ & & \(\Delta B_{\theta}\) & 194920 & -0.11 & 3.54 & -0.13 & 3.90 \\ \cline{1-1} & & \(\Delta B_{\phi}\) & 194920 & 0.03 & 5.29 & 0.07 & 5.82 \\ \cline{1-1} & & \(\Delta F\) & 194920 & -0.01 & 2.61 & -0.17 & 3.54 \\ \cline{1-1} \cline{2-6} & & \(\Delta B_{r}\) & 121355 & 0.01 & 5.54 & -0.02 & 7.38 \\ \cline{1-1} \cline{2-6} & & \(\Delta B_{\theta}\) & 121355 & 0.10 & 15.82 & 0.09 & 19.62 \\ \cline{1-1} & & \(\Delta B_{\phi}\) & 121355 & 0.05 & 18.77 & 0.41 & 23.19 \\ \cline{1-1} \cline{2-6} & & \(\Delta F\) & 121355 & -0.05 & 5.15 & -0.58 & 7.24 \\ \hline \end{tabular} \end{table} Table 3: Statistics of vector and scalar residuals with respect to _Model-A_ and _Reference_ for each satellite dataset. In the table \(N\) is the number of data, and Mean and RMS refer to the Huber-weighted mean and the Huber-weighted root-mean-square of the deviation from the mean, respectively. 
Polar QD latitude refers to \(|\lambda_{\rm QD}|>55^{\circ}\) and non-polar QD latitude to the opposite. Concerning the modelled ionospheric field, we produced monthly averages using the hourly estimates of the poloidal ionospheric field of _Model-A_ at the times of the quiet-time HRN timeseries after downward continuing this part of the model below the reference height to the Earth's surface. For this, we assumed that the poloidal ionospheric field below the reference height can be represented through an external potential whose radial field component is continuous across the reference height. Note that the toroidal ionospheric part of the model does not contribute to the ionospheric field on ground since it does not exist inside the non-conducting part of the atmosphere. Fig. 5 shows that _Model-A_ closely follows yearly and slower variations of the HRN timeseries, especially the fit to the southward component is encouraging. However, _Model-A_ certainly underestimates the peak values seen in the observatory timeseries, for example, for those in 2003, and cannot reproduce the more dynamic time periods between 2001 and 2005, and between 2012 and 2016, which are associated with solar maximum conditions. This shows that our approach sensibly models the typical variations of the ionospheric field but is not able to reproduce the dynamic ionospheric field produced by local currents above the observatory. To document our estimated ionospheric field at mid and low latitudes, we show in Fig. 6 the radial component of the ionospheric magnetic field from _Model-A_ and from CM6 (Sabaka et al., 2020), the sum of primary and secondary parts, at satellite altitude at \(450\,\mathrm{km}\) during noon in Greenwich on March 21, 2018. The overall pattern of the ionospheric field from _Model-A_ at low and mid latitudes is similar to the CM6 model. The radial field is stronger around local noon north and south of the magnetic dip equator and the geometry broadly follows that of the internal field. However, there are important differences between _Model-A_ and CM6. At mid and low latitudes the amplitude of the radial field from _Model-A_ is much weaker than for CM6 on the dayside, while it is comparable on the nightside but with different signs in the northern and southern hemispheres. At high latitudes the two cell pattern around the magnetic poles is clearly visible for _Model-A_, while the Figure 4: Median scalar residuals with respect to the reference model (top) and _Model-A_ (bottom) in QD/MLT coordinates using an equal-area pixelation. The global maps on the left are Mollweide projections, where the central vertical line corresponds to midnight (\(\phi_{\mathrm{MLT}}=0^{\circ}\)) and the central horizontal line to the magnetic dip equator (\(\lambda_{\mathrm{QD}}=0^{\circ}\)). The maps on the right are orthographic projections of the northern magnetic hemisphere (North) and, as if looking down on the Earth from the north pole, the southern magnetic hemisphere (South). The labels indicate noon (12), midnight (00), dawn (06) and dusk (18). The dashed lines show parallels and meridians at \(30^{\circ}\) intervals. pattern is weaker and smeared out for CM6, in particular in the southern hemisphere. This shows that our ionospheric field model captures the basic Solar-quiet pattern but has limitations on the dayside at non-polar latitudes. 
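The downward continuation used for this comparison can be made explicit: assuming, as stated above, that the poloidal ionospheric field below the reference height is an external-type potential field whose radial component is continuous across \(r_{0}=a+h_{\mathrm{R}}\), the coefficients of Eq. (7a) map to ground-level coefficients as in the following sketch, which treats the QD/MLT expansion as if it were an ordinary potential.

```python
import numpy as np

a = 6371.2            # reference radius in km (assumed)
r0 = a + 110.0        # reference height of the sheet current

def ground_coefficients(g_ion, n):
    """Map an internal-type coefficient g_n^{m,ion} of Eq. (7a), valid above the E-layer,
    to the coefficient of an external-type potential valid below it, by matching the
    radial field component at r0."""
    return -(n + 1) / n * (a / r0)**(2*n + 1) * g_ion

# Example: a degree-1 coefficient of 10 nT seen from above maps to the ground value
q1 = ground_coefficients(10.0, n=1)
```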
Note that the ionospheric field from _Model-A_ becomes more similar to CM6 at low and mid latitudes by relaxing the regularisation imposed on the divergence-free sheet currents associated with this part of the model. ### Ionospheric currents during geomagnetic quiet-time conditions In this section we report the spatial structure of the estimated currents from _Model-A_ and their response to changes in the external driving. In Fig. 7 we show polar views of the divergence-free and field-aligned current densities in QD/MLT coordinates as a function of clock angle for the quiet solar wind conditions represented in the dataset during winter in the northern hemisphere. For all clock angles, the divergence-free sheet currents circulate in two cells, roughly separated by the non-midnight meridian, whereas the field-aligned currents form concentric patterns with the centre slightly offset from the magnetic pole towards midnight, known as R1 and R2 currents (Iijima et al., 1978; Iijima et al., 1976). For northward IMF (\(\theta_{c}=0^{\circ}\)) the cell of the divergence-free sheet currents in the dawn sector is slightly more pronounced than the one in the dusk sector, and the maxima of the field-aligned currents are located close to noon. When the IMF rotates southward, the currents generally gain in strength. However, a clockwise rotation to \(\theta_{c}=90^{\circ}\) leads to stronger currents than a counter-clockwise rotation to \(\theta_{c}=-90^{\circ}\), which corresponds to an asymmetry in the currents that depends on the y-component of the IMF. The currents are strongest when the IMF is southward (\(\theta_{c}=180^{\circ}\)) with a maximum of downward field-aligned currents in the noon-dawn Figure 5: Monthly mean values of the quiet-time ionospheric field extracted from the records of the magnetic observatory in Hornsund, Svalbard in Norway (red) and those given by _Model-A_ at the same location (black). Note that we ignored the effect of induction during the downward continuation of our ionospheric field model. Figure 6: Radial component of the ionospheric magnetic field from _Model-A_ (left) and CM6 (right) at \(450\,\mathrm{km}\) altitude during noon in Greenwich on March 21, 2018. Note that we show the sum of inducing and induced parts of the ionospheric field from CM6. sector and a maximum of upward field-aligned currents in the midnight-dusk sector. The case of southward oriented IMF, however, should be interpreted with caution since it is poorly represented in the data due to the chosen data selection (see panel for the clock angle in the top centre of Fig. 2). Fig. 8 shows the divergence-free and field-aligned current densities for the same conditions as in Fig. 7 but in dependence of the \(SML\) index and only for purely northward IMF. With decreasing \(SML\) index, corresponding to an increase in auroral activity, the divergence-free sheet currents and the field-aligned currents grow in strength. The locations where the field-aligned currents are strongest move from near noon (\(SML=$-40\,\mathrm{nT}$\)) to the midnight sector (\(SML=$-160\,\mathrm{nT}$\)), reaching a configuration in which an upward directed current dominates the pre-midnight and a downward current the post-midnight sectors. This pair of strong downward and upward field-aligned currents centred on midnight agrees well with the current wedge that is thought to exist during substorms (Kepko et al., 2015; McPherron et al., 1973). 
Again, however, the case for \(SML=$-160\,\mathrm{nT}$\) should be considered an extrapolation given that there was a relatively small amount of data with large negative values of the \(SML\) index available during the modelling thanks to the quiet-time data selection (for the distribution of the \(SML\) see lower right panel in Fig. 2). For interpreting the variations of the patterns in Figs. 7 and 8 in terms of physical processes, it is worth mentioning that the terms in the ionospheric magnetic field parametrization related to the \(SML\) index and the \(\epsilon\) coupling function probably compete to some extent for the same signal. So a part of the structure which may go into the \(\epsilon\) terms, if the \(SML\) index was not included in the parametrization, may now be contained in the \(SML\) terms. For example in Fig. 7, the changes due to variations in the clock angle may be less than in the AMPS model, which did not include the \(SML\) index, because some of the signal for southward IMF conditions is now contained in the \(SML\) terms. Finally, to illustrate the ability of the climatological approach to allow solar cycle changes in the ionospheric currents, we produced a sequence of plots, similar to Figs. 7 and 8, that show the current densities in the northern polar region averaged in time over successive 2.5 year intervals between 2008.0 and 2020.5. Specifically, we averaged the currents over the intervals over the intervals 2008.0-2010.5 (solar minimum), 2010.5-2013.0 (ascending solar cycle), 2013.0-2015.5 (solar maximum), 2015.5-2018.0 (descending solar cycle), and 2018.0-2020.5 (again solar minimum) to illustrate different phases of the solar cycle. To further emphasise the changes in the currents, we removed the average current density for the entire solar cycle shown here. Fig. 9 shows this sequence of plots and the removed average current density along with the \(F_{10.7}\) and \(SML\) indices, averaged using the same 2.5 year intervals. The \(F_{10.7}\) index reaches a maximum around 2014, indicating solar maximum, whereas the \(SML\) index has a minimum later, close to 2017, coinciding with the descending phase of the solar cycle. Turning to the field Figure 7: Divergence-free and field-aligned current densities in the north polar region for different clock angles. Each panel shows the current densities in QD/MLT coordinates above \(60^{\circ}\) QD latitude with noon at the top, dawn on the right, midnight at the bottom and dusk on the left. The contours show the potential of the divergence-free sheet current density in steps of \(5\,\mathrm{kA}\) (solid for positive and dashed for negative values) and the colours indicate the field-aligned current density. The location of the largest (\(\triangle\)) and smallest (\(\bigtriangledown\)) field-aligned current densities are indicated with the coloured triangles, and their strength is given in the lower right corner of each panel. The total field-aligned current (\(\|\)) and the divergence-free part of the sheet current (\(\bot\)) flowing between the maximum and minimum of \(\psi^{\mathrm{dif}}\), poleward of \(60^{\circ}\) QD latitude, are given in the lower left corner. The dotted lines indicate QD parallels at \(10^{\circ}\) and MLT meridians at \(2\,\mathrm{h}\) intervals. This figure is similar in form to those originally presented by Laundal et al. 2018 for the AMPS model. 
aligned currents, differences with respect to the solar cycle average mostly occur in the noon sector with the exception of the descending phase, when the differences are most prominent in the midnight sector. The differences in the divergence-free sheet currents with respect to the solar cycle average are generally more complex. However, noteworthy is the two-cell pattern that is visible during the descending phase of the solar cycle, which is consistent with the enhancement of this pattern for decreasing values of the \(SML\) index as shown in Fig. 8. Overall, we conclude that our approach is able to capture at least part of the changes that occur in the strength and appearance of polar ionospheric currents during the solar cycle. ### Core field secular variation at polar latitudes We demonstrated above that the co-estimation of the polar ionospheric currents helps with accounting for previously unmodelled signals in the residuals. We now turn to the impact of co-estimating the ionospheric field on the estimated core field, by considering differences between the internal field estimates of _Model-A_ and the reference model. In Fig. 10 we show the spatial power spectra of the Secular Variation (SV) and Secular Acceleration (SA) at the CMB in 2019.0 from _Model-A_, _Reference_, CHAOS-7.9, and the difference between _Model-A_ and _Reference_. The SV spectra are very similar at low spherical harmonic degree, causing the curves for the models to overlap in the plot. Above degree 13 the spectra deviate more clearly but continue to stay closely together as the degree increases. The SA spectra increase with spherical harmonic degree and reach a maximum at degree 10. Above degree 10 the spectra decrease but much steeper for _Model-A_ and _Reference_ than for CHAOS-7.9. This difference in the high-degree SA is the result of the temporal regularisation used in CHAOS-7.9, which is tapered to allow for more power at high degrees. However, the taper was not applied in the models of this study. Since the spectra from _Model-A_ and _Reference_ are very similar, we show in Fig. 11 the sensitivity matrix of the SV and the SA in 2019.0 from _Model-A_ with respect to _Reference_. The sensitivity matrix is defined as the coefficient-wise difference between recovered spherical harmonic coefficients and chosen target coefficients, here the coefficients from _Reference_, normalised with the mean amplitude of the target coefficients at degree \(n\)(e.g. Sabaka et al., 2013). The SV coefficients of _Model-A_ and _Reference_ are very similar at low spherical harmonic degree. However, the sensitivity increases above degree 14 in particular for near-zonal (\(m\approx 0\)) and near-sectorial (\(m\approx n\)) coefficients. A similar pattern can be observed in the sensitivity matrix of the SA, but the numerical values are overall larger. Noteworthy are relatively strong sensitivity values for the zonal SA coefficients at degree 2 and 3. The spatial power spectra and the sensitivity matrix for the SV show that appreciable differences between _Model-A_ and the reference model can be expected at high spherical harmonic degrees. To examine this we plot in Fig. 12 snapshots of the radial SV for _Model-A_ and _Reference_ up to degree 19 at the CMB in the north polar region in 2007.0, 2012.0 and 2018.0, after removing the snapshot average to emphasise changes in time. 
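The spatial power spectra shown in Fig. 10 follow the usual Mauersberger-Lowes definition. A minimal sketch for evaluating such a spectrum from SV (or SA) Gauss coefficients at the CMB is given below; the flat coefficient ordering is an assumption made for illustration.

```python
import numpy as np

a, c = 6371.2, 3485.0   # reference and core radii in km

def lowes_spectrum(coeffs, nmax, r=c):
    """Spatial power spectrum R_n = (n+1) (a/r)^(2n+4) sum_m (g_nm^2 + h_nm^2).
    coeffs: flat array ordered [g10, g11, h11, g20, g21, h21, g22, h22, ...]."""
    R = np.zeros(nmax)
    k = 0
    for n in range(1, nmax + 1):
        ncoef = 2*n + 1
        R[n - 1] = (n + 1) * (a / r)**(2*n + 4) * np.sum(coeffs[k:k + ncoef]**2)
        k += ncoef
    return R

# Example with placeholder SV coefficients up to degree 20 (20 * 22 = 440 coefficients)
spectrum = lowes_spectrum(np.random.randn(440), nmax=20)
```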
The average radial SV for _Model-A_ shows small-scale flux patches of positive and negative polarity, with a Figure 8: Divergence-free and field-aligned current densities in the north polar region for different values of the \(SML\) index and purely northward IMF. The figure is otherwise identical to Fig. 7. Figure 10: Spatial power spectrum of the SV (left) and SA (right) at the CMB in 2019.0 from _Model-A_ (blue), _Reference_ (orange), CHAOS-7.9 (green), and the difference between _Model-A_ and _Reference_ (black dashed). Figure 9: Average \(F_{10,\,7}\) and \(SML\) indices (top row), and divergence-free and field-aligned current densities (middle row) in the north polar region successively averaged over 2.5 year periods after removal of the average current density (bottom row) over the solar cycle from 2008 to 2020.5. The markers on the curves of the indices are placed at the midpoints of the 2.5 year intervals used for the averaging, and the contours of the potential of the divergence-free sheet current density are shown in steps of \(5\,\mathrm{kA}\). distinct pair of patches located over Siberia. This pair is also visible in the average map for the reference model but is less prominent due to relatively strong patches west of Greenland, which are slightly elongated in longitude. For _Model-A_, the flux patches around that area are less extended in longitude, showing that the co-estimation of the ionospheric field leads to better focused patches of SV at the CMB. This can also be seen in the difference between the average radial SV from _Model-A_ and _Reference_, which exhibits near-zonal features of radial SV around the north pole. Despite this improvement in _Model-A_, the presence of a weak pattern of stripes in the SV along geographic parallels over North America may indicate that the SV at high latitudes is still contaminated by the ionospheric field at high degree. The snapshots showing the deviation of the radial SV from the average reveal that the three non-axisymmetric SV flux patches over Siberia and Alaska intensity for _Model-A_ over the model time interval, and similarly for _Reference_, by an almost linear trend. The lack of structure in the difference between _Model-A_ and _Reference_ for each of these snapshots shows that the difference between the radial SV from _Model-A_ and from _Reference_ is mostly static. Overall, it seems that the co-estimation of the ionospheric magnetic field leads to less contaminated models of the SV in the north polar region that are derived from magnetic vector data at all latitudes and local times. It is clear that the distinctive non-axisymmetric radial SV flux patches of alternating sign over Siberia and Alaska, found in earlier geomagnetic field models and used for inferring accelerating jets of core flow (Livermore et al., 2017), persist even when polar ionospheric currents are estimated. A clear limitation in examining the high degree CMB SV in these models is that they are strongly smoothed in time. The effect of reducing the strong temporal regularisation will be investigated in the next section. ### Relaxing the temporal regularisation of the internal field model Our approach for modelling the geomagnetic field relies on regularisation to smooth the spatio-temporal complexity of the model and, thus, ensures the convergence of the estimation procedure. The regularisation also allows control over magnetic signals in the data that are not accurately parametrized. 
However, in the case of the time-dependent internal field model, the temporal regularisation severely degrades the resolution in time, in particular, of the high-degree spherical harmonics, which limits studies of the dynamics of the Earth's interior. In this section, we therefore explore a model in which, building on _Model-A_, we in addition considerably relaxed the temporal regularisation of the internal field model. More specifically, we reduced the temporal regularisation parameters \(\lambda_{t}\), \(\lambda_{t_{s}}\) and \(\lambda_{t_{e}}\) by a factor of \(80\) to \(0.0125\), \(1.25\times 10^{-4}\) and \(1.25\times 10^{-4}\), respectively, to produce a weakly-regularised version of _Model-A_, called _Model-B_. On the left of Fig. 13, we show the spatial power spectrum of the SA at the CMB in 2019.0 for _Model-B_, _Model-A_ and CHAOS-7.9. The spectra show that there is significantly more power in the SA of _Model-B_ at all spherical harmonic degrees. Most striking is the increase in SA power for _Model-B_ above degree 10, while the power for _Model-A_ sharply decreases. But also noteworthy is the enhanced power below degree 4 for _Model-B_. For comparison we also show the spectrum for CHAOS-7.9, which overlaps below degree 10 with that for _Model-A_ but decreases more slowly as the degree further increases. On the right of Fig. 13, we show example timeseries of SV coefficients for the three models. The timeseries of \(\hat{g}_{1}^{0}\) reveals a distinct annual oscillation for _Model-B_. Similarly, we found annual oscillations in the timeseries of zonal and low-order coefficients for other low-degree spherical harmonics. They are investigated further below. We emphasise that these annual oscillations are not an effect of the applied data selection, which does not vary with season. The oscillations are not as apparent in the coefficient timeseries of _Model-A_ and CHAOS-7.9, due to the relatively strong temporal regularisation. Compared to \(\hat{g}_{1}^{0}\), the \(\hat{h}_{12}^{1}\) timeseries is much smoother for all three models and there is no oscillatory behaviour with a period close to one year visible. So _Model-B_ is still quite strongly smoothed in time at high degree. This is by construction due to the chosen regularisation norm. Note that we chose to show the SA spectrum in 2019.0 in Fig. 13 because the rate of change of the SV maximises around that time for _Model-B_. In fact, we find that the spectrum for _Model-B_ varies significantly with time below degree 8, reflecting the annual variations found in the low-order and Figure 11: Sensitivity matrix of the SV (left) and the SA (right) in 2019.0 from _Model-A_ with respect to _Reference_. low-degree SV coefficients. These temporal variations are well known and have been handled by other modelling efforts in different ways, by using regularisation [e.g., CHAOS models (Finlay et al., 2020), GRIMM (Lesur et al., 2008), or CM (Sabaka et al., 2020)], low resolution basis functions [e.g., POMME (Maus et al., 2006), CovObs (Huder et al., 2020)], or by including other internal sources [e.g., Kalmag models (Baerenzung et al., 2022), or sequential models of Ropp et al., 2020]. To better characterise the annual oscillations found in the low spherical harmonic coefficients of the time-dependent internal field model of _Model-B_, we performed a principal component analysis (PCA) of the difference between _Model-B_ and _Model-A_. 
The PCA is a data-based tool for extracting spatio-temporal patterns and has previously been applied to the magnetic data from ground-based observatories (e.g. Shore et al., 2016) and magnetic observations made by satellites (e.g. Domingos et al., 2019; Saturnino et al., 2021). For the analysis, we generated timeseries of vector components of the internal time-dependent field from each model at the centre points of equal-area pixels (Gorski et al., 2005), covering the entire Earth's surface at approximately \(2^{\circ}\) resolution (10800 pixels), in spherical geocentric coordinates using a sampling rate of one sample per month (201 samples in time). Here, we omitted the period covered by the CryoSat-2 data, from the middle of 2010 to the end of 2013, when _Model-B_ varies more strongly in time than during CHAMP and _Swarm_, and we omitted the first and last 6 months of Figure 12: Snapshots of the radial SV for \(n\leq 19\) in the north polar region at the CMB as given by _Model-A_ (left column), the reference model (middle column) and their difference (right column) in 2007.0 (second row), 2012.0 (third row), 2018.0 (fourth row), after removing the snapshot average (first row). The projection is orthographic. The dashed lines show geographic parallels and meridians at \(30^{\circ}\) intervals. Note the change in colour scale for the difference plots. the model time interval, when _Model-B_ shows sharp time variations due to the weaker data constraint at the model endpoints. At each pixel, by subtracting the vector timeseries of _Model-A_ from the one of _Model-B_, we obtained timeseries of component-wise differences, which we arranged as columns in a matrix \(\mathbf{X}\) of size \(201\times 32400\). Finally, we centred \(\mathbf{X}\) by removing the column-wise mean value, \(\bar{\mathbf{X}}\), such that \[\tilde{\mathbf{X}}=\mathbf{X}-\bar{\mathbf{X}}. \tag{28}\] For a data matrix such as \(\tilde{\mathbf{X}}\), PCA can be used to find a finite set of modes that maximise the variance of the data in time and are mutually orthogonal. The \(i\)th mode obtained through the PCA consists of the Empirical Orthogonal Function (EOF), \(\mathbf{v}_{i}\), which represents the spatial pattern of the time variation, and the principal component (PC) \(\mathbf{y}_{i}=\tilde{\mathbf{X}}\mathbf{v}_{i}\), which is the timeseries of variance \(\sigma_{i}^{2}\). These modes are typically sorted in decreasing order of variance and can be used to reconstruct the data matrix through \[\tilde{\mathbf{X}}=\sum_{i}\mathbf{y}_{i}\mathbf{v}_{i}^{\mathrm{T}}, \tag{29}\] or, if a subset of modes is chosen, to perform a partial reconstruction. We applied the PCA on \(\tilde{\mathbf{X}}\) and obtained 201 modes. But only a small number of modes are needed to explain most of the variance in the data. In Fig. 14, we show the PCs and the radial part of the EOFs for the first six modes, which account for \(70\,\%\) of the variance (13 modes account for \(90\,\%\)). The first PC, PC-1, is a modulated annual oscillation for which the amplitude maximises around 2002 and 2014, and minimises around 2008 and 2019. The corresponding EOF, EOF-1, consists of a large-scale pattern that varies mostly in latitude, similar in appearance to the spherical harmonic function \(Y_{2}^{0}\) except that the zero lines in latitude are shifted slightly northward.
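In practice, the modes can be obtained from a singular value decomposition of the centred data matrix; the sketch below uses a random placeholder matrix of the same shape as in the text and the convention that the EOFs are the right singular vectors.

```python
import numpy as np

# Placeholder for the centred data matrix (201 monthly samples x 32400 component timeseries)
rng = np.random.default_rng(1)
X_tilde = rng.standard_normal((201, 32400))
X_tilde -= X_tilde.mean(axis=0)

# SVD-based PCA: rows of Vt are the EOFs v_i, the PCs are y_i = X_tilde v_i,
# and the mode variances follow from the singular values
U, s, Vt = np.linalg.svd(X_tilde, full_matrices=False)
eofs = Vt                                 # one EOF per row
pcs = X_tilde @ Vt.T                      # one PC per column
variance = s**2 / (X_tilde.shape[0] - 1)  # sample variance convention (an assumption)
explained = variance / variance.sum()     # fraction of variance per mode
```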
Given the spatio-temporal behaviour, we can assume that the first mode is responsible for most of the annual oscillations in the coefficient timeseries, as shown for example by \(\hat{g}_{1}^{0}\) in Fig. 13 (phase shift between \(\hat{g}_{1}^{0}\) and PC-1 reflects the difference in time derivative between the two). The amplitude modulation of PC-1 suggests a dependency on the solar cycle since the maxima in the amplitude roughly coincide with the times of solar maximum. PC-2 varies more slowly compared to PC-1 and peaks approximately every 3 years. EOF-2 shows patches of opposite sign that are centred on the geographic equator around Central America and the western Pacific Ocean. The location and appearance of these patches could indicate that the second mode is related to changes in the core field since geomagnetic impulses have been reported around these areas (Chulliat et al., 2014; Finlay et al., 2020; Olsen et al., 2007; Torta et al., 2015). The third PC, PC-3, combines a slow variation that peaks around 2002 and 2014 with an annual oscillation, which is much weaker in amplitude compared to PC-1 but likely also of external origin. EOF-3 exhibits a large number of small-scale positive and negative patches. The remaining PCs (PC-4 through PC-6) show an oscillatory behaviour with a period close to one year and the corresponding EOFs exhibit a wide range of patterns, which are difficult to interpret. Based on the results of the PCA, we tried to implement a post-processing step that removes the modes that we assume are the result of a leakage of the external field into the internal field model as is the case, for example, for the first mode, since it is a global annual oscillation with an apparent solar cycle dependency. Successful removal of these modes would provide a smoother timeseries of the coefficients of the time-dependent internal field model, which we could use to analyse the SA. Unfortunately, this approach did not work in practice since the PCA is not able to isolate the annual oscillations in terms of a single mode. Instead, we found that these oscillations are visible in most, if not all, of the modes and cannot be removed to a satisfactory level, including by further relaxing the temporal regularisation, which makes the Figure 13: Spatial power spectrum of the SA at the CMB in 2019.0 (left) and timeseries of the SV coefficients \(\hat{g}_{1}^{0}\) (top right) and \(h_{12}^{1}\) (bottom right) for _Model-B_ (blue), _Model-A_ (orange) and CHAOS-7.9 (green). annual oscillations clearer. Nonetheless, we find the PCA provides clear insight into the signals entering internal field models as the temporal regularisation is relaxed. As a simpler alternative to removing PCA modes, we resorted to filtering out the annual oscillations by computing centred annual differences of monthly values of the internal field at the CMB as given by _Model-B_ to find the SV and, by repetition, the SA. Fig. 15 shows time-longitude plots of the obtained radial SA up to degree 10 for _Model-B_ on the left and, by the same computation, for _Model-A_ and CHAOS-7.9 in the middle, and the difference between _Model-B_ and CHAOS-7.9 on the right. We see that, in comparison to _Model-A_ and also CHAOS-7.9, which is similar to _Model-A_ in Fig. 15, there is more power in the SA of _Model-B_. The patterns have sharper edges and there is generally more structure, which indicates an improved temporal resolution. 
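The annual-difference filter used to produce Fig. 15 amounts to a few lines of code: the SV at a given epoch is approximated by the difference of monthly field values taken half a year before and half a year after that epoch, and repeating the operation on the SV gives the SA. The sketch below demonstrates the filter on a synthetic monthly timeseries; it is a generic illustration and is not tied to the model files.

```python
import numpy as np

def annual_difference(monthly_values):
    """Centred annual differences of a monthly-sampled timeseries.

    The value assigned to epoch t is x(t + 6 months) - x(t - 6 months), i.e. a
    difference spanning exactly one year, so any oscillation whose period is
    one year or an integer fraction of a year cancels.  The result is one year
    shorter than the input and is in units of [input units] per year.
    """
    x = np.asarray(monthly_values, dtype=float)
    return x[12:] - x[:-12]

# Example: a linear trend plus an annual oscillation sampled monthly.
t = np.arange(240) / 12.0                          # 20 years of monthly epochs
g = 2.0 * t + 5.0 * np.sin(2 * np.pi * t)          # trend of 2 per year + annual signal

sv = annual_difference(g)                          # the annual term drops out
sa = annual_difference(sv)                         # repeat to estimate the SA
print(sv[:3])                                      # ~[2. 2. 2.]: only the trend survives
print(np.allclose(sa, 0.0, atol=1e-9))             # no acceleration in this example
```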
This is especially apparent during the CHAMP period, until 2010, and in the longitude interval between \(0^{\circ}\) and \(90^{\circ}\) for the entire model time span. We produced a similar plot for a weakly regularised version of _Reference_ and found that it looks very similar to Fig. 15. We interpret this to mean that the increase in resolution is mostly due to the reduced temporal regularisation and not due to the co-estimation of the ionospheric field model. We acknowledge that computing annual Figure 14: PCs (left) and radial part of the corresponding EOFs (right) for the first six modes found by the PCA. The EOFs are scaled with the square-root of the mode variance to indicate the relative importance. differences has the caveat of removing genuine internal field signals that have frequencies which are integer multiples of one oscillation per year. Further work is clearly needed on better methods to remove the annual signal. To summarise, we find that reducing the temporal regularisation of the internal field model causes signals which we suspect are of external origin to leak into the estimated internal field despite the co-estimation of an AMPS-type model of the ionospheric field. It is possible that our approach to parametrize the ionospheric field lacks terms that can account for the signals like those seen in the first and third mode identified by the PCA (see discussion in Sect. 5). To ensure that the co-estimation of the ionospheric field model is not, in fact, introducing artefacts into the internal field model, we performed the same analysis of principal components but using the reference model. We found that the first two modes are similar, which suggests that these modes originate from genuine signals in the magnetic dataset used in this study. ## 5 Discussion Our results show that the co-estimation of a climatological ionospheric field as part of the CHAOS modelling approach takes into account previously unmodelled signals in the polar regions and produces geomagnetic field models that fit the magnetic input data for geomagneticly quiet conditions well. It enables the construction of high quality models of the core field while using vector field data at all latitudes and local time. Similar to the AMPS model, our approach provides estimates of the average polar ionospheric currents that are realistic in structure and able to vary in response to changes of the external driving. However, this approach is only capable to represent the long term average of the currents and not individual highly dynamic events. One aim of this study is to answer the question of whether the co-estimation of an AMPS-type ionospheric model could allow the construction of internal field models that are less contaminated by the ionospheric field in the polar regions. By comparing the recovered SV of _Model-A_ and _Reference_ at the CMB in the northern polar region (see Fig. 12), we find that the co-estimation reduces the leakage of ionospheric signals into the time-dependent internal field model. However, the improvement in the recovered SV in the polar regions, which is most apparent in the zonal terms of the high spherical harmonic degrees, is relatively small and difficult to interpret due to significant ionospheric signals that remain even when co-estimating the ionospheric field model and due to the strong time averaging effect of the temporal regularisation applied on the internal field model. We find that the regularisation of the poloidal ionospheric field is important. 
This mostly affects the zonal or near-zonal parts of the poloidal ionospheric model as observed within an Earth-fixed frame, which are the terms that show the largest difference between _Model-A_ and _Reference_ in the internal field model. We acknowledge that the _Reference_ model is a somewhat extreme case since it uses vector data at high latitudes without accounting for polar ionospheric signals in any way. To test whether the ionospheric leakage into the internal magnetic field model can be further reduced in the polar regions, we derived additional test models where the internal field was estimated using the scalar component of the magnetic observations instead of vector data at polar latitudes. However, since we found these test models to be very similar to the models shown here, we preferred to present the models derived from vector data at all latitudes. In non-polar regions we find that the estimated ionospheric field model captures the basic Solar-quiet pattern but lacks power on the Figure 15: Time-longitude plot of the radial SA up to degree 10 along the geographic equator at the CMB as given by repeated computation of annual differences of the radial field from _Model-B_, _Model-A_, CHAOS-7.9, and the radial field difference between _Model-B_ and CHAOS-7.9. dayside. Hence, there is a possibility of leakage into the time-dependent internal field due to the use of dayside data and an imperfectly modelled ionospheric field at mid and low latitudes. Comparing internal field coefficients \(g_{1}^{0}\) and \(g_{3}^{0}\), which are known to be the coefficients most affected by the ionospheric field at mid and low altitude, we find that the timeseries of these coefficients for _Model-A_, and similarly for _Reference_, are slightly shifted with respect to CM6, by \(2\)-\(5\,\mathrm{nT}\), while the shift is smaller with respect to CHAOS-7.9. A possible future remedy could be to estimate the time-dependent internal field only from nightside data, while the ionospheric model is fit using data from all local times. Whether the reduced ionospheric field leakage also leads to internal field models that are better resolved in time, can only be investigated by reducing the temporal regularisation that smooths the time-dependence of the internal field model, in particular affecting the high spherical harmonic degrees. For this reason we derived _Model-B_, which exhibits more power in the high degrees of the SA. However, we find that the estimated SA is now dominated at the large length-scales, approximately up to degree 8, by distinct annual oscillations. Although these annual oscillations can be filtered out of the SA estimates by using annual differences, the question of where these oscillations come from and why they are not captured by the ionospheric model remains. A possible explanation for the leakage of annual oscillations into the internal field model is related to the parametrization of the ionospheric field, which is most likely not sufficient to account for all types of ionospheric currents and their time-dependence. In addition, it is assumed that field-aligned currents are radial, which is reasonable at high latitudes but fails at mid latitudes. Hence, it is conceivable that the annual oscillations result from seasonally varying currents at mid latitudes such as interhemispheric field-aligned currents that produce magnetic signals at satellite altitude at mid latitudes on the dayside. 
Accounting for these currents is challenging since it requires the estimation of poloidal and toroidal potentials of the ionospheric magnetic field within the measurement shell traced by the satellite orbits (e.g. Fillion et al., 2023; Olsen, 1997). Apart from ionospheric sources, other processes cannot be ruled out as the origin of the annual oscillations. Another limitation of our models concerns the treatment of electromagnetically induced currents in Earth's interior and oceans associated with variations of the ionospheric field. Since our models are only estimated from satellite data, both the inducing and induced ionospheric fields are internal with respect to the input data. Hence, our estimated ionospheric field model also contains the induced response. In principle, through an a-posteriori analysis, the estimated ionospheric field model could be separated into induced and inducing parts using the Q-response functions (e.g. Grayver et al., 2021) for a given model of the Earth's conductivity. However, this approach would not affect the quality of the core field model or resolve the ambiguity between the sources in the ionosphere and those in the core and lithosphere. Instead of a-posteriori separating the induced and inducing parts of our estimated ionospheric field model, an alternative approach is to include the induced response during the modelling. For example, one could derive a new set of AMPS-type ionospheric field basis functions that take into account the induced counterpart via Q-response transfer functions. This would have the advantage of allowing ground-based observations to be used during the estimation of both the core and ionospheric fields, for example, using hourly mean values. For these observations, the inducing ionospheric field sources are then external, which would aid the separation of the core and ionospheric fields. ## 6 Conclusions In this study we successfully combined the climatological approach of the AMPS model, which is suitable for modelling the ionospheric magnetic field in the polar regions, and the CHAOS modelling framework to derive models of the geomagnetic field that take into account internal and magnetospheric fields as well as the climatological aspects of the ionospheric field. We used this new approach to estimate a geomagnetic field model from satellite magnetic vector data under geomagnetic quiet conditions. The derived model, called _Model-A_, shows a good fit to the input vector data and successfully removes obvious systematic errors related to ionospheric signals in the polar regions, which were previously unaccounted for in the CHAOS modelling framework. By investigating the effect of co-estimating the ionospheric field on the internal field and its time variations in the polar regions, we find only small differences, which are most visible in the zonal terms of the high-degree spherical harmonics of the estimated SV. Importantly, high latitude non-axisymmetric SV flux features stay mostly unchanged, which adds to the evidence that they are of internal origin and therefore relevant for studies of the core flow (Livermore et al., 2017). The distinct annual oscillations in the internal field from _Model-B_, which was weakly regularised in time, could indicate that there remain ionospheric or related induced signals in the modelled internal field at low-to-mid latitudes despite the co-estimation an AMPS-type ionospheric field. 
This suggests shortcomings of our ionospheric field parametrization in non-polar regions, and it indicates that the noise present in time-dependent internal field models, which mostly affects the low-order spherical harmonic coefficients, is not only due to the leakage of magnetic field signals produced by high-latitude currents. Identifying the physical origin of these signals and taking them into account will be important to increase the resolution of internal field models in time. The ambiguity between the internal field and poloidal ionospheric field models was reduced through a practical approach by regularising the time-averaged divergence-free part of the ionospheric currents, although this involves regularisation parameters that must be carefully chosen during model construction. In the future not only satellite magnetic data but also ground-based magnetic observations could be used in order to better resolve this ambiguity. But this requires treatment of the internally induced field due to the poloidal ionospheric field on the ground-based data. ## Acknowledgments We gratefully acknowledge ESA for providing access to the _Swarm_ L1b data and the fully calibrated CryoSat2 magnetometer data. We wish to thank the German Aerospace Center and the Federal Ministry of Education and Research for supporting the CHAMP mission. Furthermore, we would like to thank the staff of the geomagnetic observatories and INTERMAGNET for supplying high-quality observatory data. Susan Macmillan (British Geological Survey) is gratefully acknowledged for collating checked and corrected observatory hourly mean values in the AUX OBS database. We also thank the editor and two anonymous reviewers for helpful comments and suggestions, which clarified and improved the manuscript. CK and CCF were funded by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772561). The study has been partly supported as part of _Swarm_ Data Innovation and Science Cluster (_Swarm_ DISC) activities, funded by the ESA contract No. 4000109587/13/I-NB. KL was funded by the Trond Mohn Foundation, and the AMPS model is funded by ESA through _Swarm_ DISC within the reference frame of ESA contract No. 000109587/13/I-NB. CK developed the computer software used here for modelling the geomagnetic field, derived and analysed the test field models, and prepared the manuscript. CCF helped with the presentation and interpretation of the results, and assisted with the outline of the manuscript. KL helped with the interpretation of the results and provided guidance on the implementation of the part of the developed computer software that involves the modelling of the ionospheric magnetic field. NO helped with the interpretation of the results and provided the computer scripts on which parts of the developed modelling software are based. All authors read and accepted the manuscript. ## Data Availability The data underlying this article are available in the following repositories: 1. _Swarm_ and CryoSat2 data are available through ESA at [https://swarm-diss.eo.esa.int/#](https://swarm-diss.eo.esa.int/#). 2. CHAMP data are available from Rother et al. 2019. 3. Ground magnetic observatory data from INTERMAGNET are available at [ftp://ftp.nerc-murchison.ac.uk/geomag/Swarm/](ftp://ftp.nerc-murchison.ac.uk/geomag/Swarm/) AUX_OBS/hour/ or via the virtual research platform VirES [https://vires.services/](https://vires.services/). 4. 
\(RC\) index is available at [http://www.spacecenter.dk/files/magnetic-models/RC/](http://www.spacecenter.dk/files/magnetic-models/RC/). 5. \(SML\) index is available at [https://supermag.jhuapl.edu/info/](https://supermag.jhuapl.edu/info/). 6. Solar wind speed, interplanetary magnetic field and \(Kp\) index are available through the GSFC/SPDF OMNIWeb interface at [https://omniweb.gsfc.nasa.gov/ow.html](https://omniweb.gsfc.nasa.gov/ow.html). 7. CHAOS-7 model and its updates are available at [http://www.spacecenter.dk/files/magnetic-models/CHAOS-7/](http://www.spacecenter.dk/files/magnetic-models/CHAOS-7/). Files containing the estimated parameters of our models and the processed data are available at [https://doi.org/10.11583/DTU.24025596](https://doi.org/10.11583/DTU.24025596).
2310.20535
The ESO Science Archive Facility: Status, Impact, and Prospects
Scientific data collected at ESO's observatories are freely and openly accessible online through the ESO Science Archive Facility. In addition to the raw data straight out of the instruments, the ESO Science Archive also contains four million processed science files available for use by scientists and astronomy enthusiasts worldwide. ESO subscribes to the FAIR (Findable, Accessible, Interoperable, Reusable) guiding principles for scientific data management and stewardship. All data in the ESO Science Archive are distributed according to the terms of the Creative Commons Attribution 4.0 International licence (CC BY 4.0).
Martino Romaniello, Magda Arnaboldi, Mauro Barbieri, Nausicaa Delmotte, Adam Dobrzycki, Nathalie Fourniol, Wolfram Freudling, Jorge Grave, Laura Mascetti, Alberto Micol, Jörg Retzlaff, Nicolas Rosse, Tomas Tax, Myha Vuong, Olivier Hainaut, Marina Rejkuba, Michael Sterzik
2023-10-31T15:18:45Z
http://arxiv.org/abs/2310.20535v1
# The ESO Science Archive Facility: Status, Impact, and Prospects ###### Abstract Scientific data collected at ESO's observatories are freely and openly accessible online through the ESO Science Archive Facility. In addition to the raw data straight out of the instruments, the ESO Science Archive also contains four million processed science files available for use by scientists and astronomy enthusiasts worldwide. ESO subscribes to the FAIR[8] (Findable, Accessible, Interoperable, Reusable) guiding principles for scientific data management and stewardship. All data in the ESO Science Archive are distributed according to the terms of the Creative Commons Attribution 4.0 International licence (CC BY 4.0).
## Introduction The science data collected at ESO's La Silla Paranal Observatory (LPO) are accessible through the ESO Science Archive Facility (SAF). The observatory comprises three sites in northern Chile's Atacama region, namely La Silla[2], Paranal[3] and the Chajnantor plateau (the Atacama Pathfinder Experiment, or APEX telescope[9]). Data from the Atacama Large Millimeter/submillimeter Array (ALMA) observatory[4] are also directly accessible from the ESO Science Archive, so that they can be conveniently queried together with the data from LPO. In addition, ESO also hosts and operates the European copy of the dedicated ALMA Science Archive[9], which provides extended search capabilities tailored to these data; it was recently described by Stoehr et al. (2022). At the time of writing, the ESO Science Archive contains, in a uniform and consistent form, data from more than 30 instruments (and counting), covering a wide range of observing techniques, data types and formats, and their metadata. It stores all the raw science data and the related calibrations. A growing selection of processed data is also available, on which science measurements can be readily performed. The archive home page is at: archive.eso.org. User support and a knowledgebase database are provided at support.eso.org. ESO has a long tradition of fostering "Open Access" to scientific data, and it endorses the European EOS initiative[7]. As an overarching principle, ESO subscribes to the FAIR[8] (Findable, Accessible, Interoperable, Reusable) guiding principles for scientific data management and stewardship (Wilkinson et al., 2016). Access to the ESO Science Archive is regulated by policy[9]. In general terms, the Principal Investigators (PIs) of successful proposals for observing time on ESO telescopes, along with their delegates, have exclusive access to their scientific data for a proprietary period, after which the data become publicly available.
At this stage, users can choose whether they want raw and/or pre-processed master calibrations. They can also select to include night-long information, such as weather conditions and notes from the observer. Once downloaded, users can process raw data along with the associated calibrations to remove signatures from the telescope, instrument and Earth's atmosphere, and to calibrate the resulting data products in physical units. For this, dedicated software tools to process and organise the data and the execution sequence are made available[9]. At this point, the data are ready for extraction of the science signal and its subsequent analysis. The current era in astronomy research is characterised by an abundance of data and the need to combine them across facilities, wavelengths, and messengers. It is, therefore, imperative to lower as much as possible the user's access barrier to the data. The goal is to reach as wide an audience as possible, providing a complete overview of the content of the archive, while requiring as little overhead as possible on the part of the researcher. To this end, the ESO Science Archive provides access to processed data. Via this route, users can download data that have already gone through most of the processing needed in preparation for extracting the science signal and are thus free from atmospheric and instrumental effects and are calibrated in physical units. The main science files are accompanied and complemented by ancillary ones that provide additional information useful to their exploitation (for example, 2D calibrated spectra are often provided with the main 1D product to allow custom spectrum extraction; and white-light images go with data cubes). Each data collection comes with extensive textual documentation in the form of a Release Description. In most cases, processed data from the archive are directly ready for science analysis. There are two main sources of such processed data. One is provided by users who carried out the processing, typically, but not always, for their own projects, and returned the results to the SAF. This is mandatory for observing programmes that require large, coordinated amounts of telescope time, namely Public Surveys[15] and Large Programmes[16], as well as for Hosted Telescopes where there is a signed agreement with ESO for this.
In these cases, generating data, both raw and processed, with a long-lasting legacy value is an important criterion in the programme selection process. Voluntary contributions from individual users are much encouraged; this provides a great way to give data, and their authors, enhanced visibility and etability. To this end, each data collection in the ESO Science Archive is assigned a unique persistent Digital Object Identifier[17] (DOI). Its processing is tailored to the science cases of the originating observing programmes, and results are often described and used by the team in refereed publications. Typical data products include calibrated deep and/or mosaicked images and data cubes, stacked spectra, and flux maps. In several cases, these are used to generate source catalogues. These are the highest-level processed data and contain directly the physical parameters of the celestial sources. The other main channel of influx of processed data for the Science Archive is carried out at ESO. Here, the data histories of instruments, or instrument modes, are processed as consistently and as completely as possible and ingested into the SAF. By its very nature, this data processing is not tailored to any specific science case, but is focused on removing the instrumental and atmospheric signatures and on calibrating in physical units large, coherent datasets. The tools used in-house to process the data are the same ones that are made publicly available[14]. The impact of archival processed data is discussed further below. At the time of writing, the ESO Science Archive contains four million processed science files from nearly 80 data collections, covering virtually all data types and observational techniques enabled by the slew of more than 30 instruments that ESO operates at LPO. They cover a correspondingly large range of observing techniques, data types, formats, and metadata. Without science-oriented data stewardship, curation and homogenisation, the archive would be just a big bucket of bits and bytes, where finding data would be exceedingly hard and reserved to a few experts. Therefore, before ingestion into the archive, the processed data undergo an auditing process for completeness, compliance, consistency, and documentation[18] (Arnaboldi et al., 2011). This process is a collaborative effort between the data provider and ESO and is called Phase 3, reflecting the fact that it closes the loop after the solicitation and handling of observing proposals (Phase 1) and the observation preparation and execution at the telescope (Phase 2). To ensure data consistency and accessibility throughout this broad variety of archive holdings, the Phase 3 process enforces the use of the ESO Science Data Product Standard[19], an interface document that defines the data format and metadata (content and definition) for the various types (images, spectra, cubes, interferometric visibilities, catalogues, and so on). It specifies how to encode the level of calibration, scientific quality, originating files (provenence), ancillary data and product versioning. Its content is continuously evolving to reflect the evolving data landscape, for example to incorporate new data types produced by different observing techniques and facilities. The ESO Science Data Product Standard incorporates accepted Virtual Observatory (VO) standards, making ESO data interoperable with the other VO resources. 
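Because the archive follows VO standards, its holdings can also be reached programmatically, for instance through a Table Access Protocol (TAP) client. The sketch below is purely indicative: the service URL and the use of the standard ivoa.ObsCore table and column names are assumptions based on common IVOA conventions, and the authoritative description of ESO's programmatic interfaces should be taken from the archive documentation.

```python
# Indicative TAP query for processed data around a sky position.  The endpoint
# and the table/column names are assumptions based on IVOA ObsCore conventions;
# they are not taken from this article.
import pyvo

service = pyvo.dal.TAPService("http://archive.eso.org/tap_obs")  # assumed endpoint

adql = """
SELECT TOP 10 target_name, instrument_name, dataproduct_type, obs_collection, access_url
FROM ivoa.ObsCore
WHERE CONTAINS(POINT('ICRS', s_ra, s_dec),
               CIRCLE('ICRS', 201.365, -43.019, 0.1)) = 1
  AND dataproduct_type = 'cube'
"""

results = service.search(adql)   # synchronous ADQL query
print(results.to_table())        # an astropy Table of matching datasets
```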
Compliance with the standard is a fundamental requirement which ensures that data can be easily located among the broader ESO holdings and in the general global data scenario. This is a must in the current era of multi-instrument, multi-wavelength, multi-messenger astronomy. ### The Archive Science Portal An important driver for the metadata curation and homogenisation ensuing from the Phase 3 process is that data can be presented and queried uniformly across collections, independently of their origins and specificities. We have built different ways to browse the processed data in the SAF, namely web interfaces, and programmatic and scripted access. The web interfaces offer a low-barrier access by presenting the data and metadata in an intuitive graphical interface[20]. Query parameters are represented by elements in the page that have the dual function of visually expressing the content of the archive and rendering the user's choices. The results are then rendered on the backdrop of the celestial sphere and included in a tabular form, which summarises their main characteristics (see Figure 2, top panel). Given the underlying compatibility with VO protocols, the results can be sent to VO-aware tools, such as, for example, TOPCAT or Aladin. Upon request, individual datasets can be explored in detail through previews that are customised by data type. As an example the preview of a Multi Unit Spectroscopic Explorer (MUSE) datacube is shown in the right panel of Figure 2. A dedicated web interface is available to query source catalogue data[21]. Once users select the catalogue they are interested in, they can constrain the search on any combination of its columns. Repetitive or otherwise particularly demanding tasks can be coded and automatised by using the provided Figure 2: Top panel: The landing page of the web interface to the ESO Archive Science Portal(r). Bottom panel: Example of the web page where individual datasets can be explored in detail. In this case, the file is a MUSE datacube. All of the assets in the ESO SAF, i.e., raw data and products generated at ESO or contributed by the community and catalogues, are in great and increasing demand. This is shown in Figure 3, where the number of unique IP addresses, a proxy for the users downloading data, is plotted as a function of time for processed files and source catalogues (left and right panel, respectively). Interestingly, the increase in the download of processed data did not come at the expense of the access to raw data, just as the fast increase in the number of users of data processed by ESO has not hindered the need for data generated externally (bottom panel in Figure 3). The different types of data are, then, highly complementary. Figure 4 shows the contribution of the SAF to the science output of LPO. This is expressed in terms of the fraction of refereed papers using LPO data that made use of the archive (a referred paper is classified by the ESO Library as archival if there is no overlap between its authors and the members of the original observing proposal[24]). There is a clear upward trend since the inception of the ESO Science Archive in its current form at the end of the 1990s. Currently, about 4 papers out of 10 utilising LPO data make use of the ESO Science Archive. The ESO Science Archives, both LPO and ALMA, featured as a Special Topic at the 47th meeting of the ESO Users Committee, which was held on 20 and 21 April 2023[25]. 
The high level of user satisfaction with the archives was confirmed in the discussions during the meeting, as well as in the Committee's report[26], which is based on a poll of the science community. ### What's next As discussed above, the availability of processed data has led to a tangible boost to the access and usefulness of the ESO Science Archive. The engagement of the community at large in providing reduced data has been very successful, and the archive now provides more than 60 out of 80 collections secured in this way. This number is, of course, poised to increase as the policy mandating the delivery of processed data for new Public Surveys, Large Programmes and Hosted Telescopes/Instruments continues and will also include data from ESO's Extremely Large Telescope (ELT) in the future. In addition to maintaining support for the Phase 3 process for data provided by internal and external users, we are also exploring new ways of collaborating with the community that are not linked to specific observing programmes. Prominent examples include the data stream for the Precision Integrated-Optics Near-infrared Imaging ExpeRiment (PIONIER; Le Bouquin et al., 2011), and the VISTA EXtension to Auxiliary Surveys (VEXAS; Spiniello & Agnello, 2019) and Ultraviolet and Visual Echelle Spectrograph Spectral Quasar Absorption Database (UVES SQuAD; Murphy et al., 2019) collections. We have established a collaboration with the High Contrast Data Centre[27] (HC-DC, previously the SPHERE Data Center) in Grenoble. Data from the Spectro-Polarimetric High-contrast Exoplanet REsearch instrument (SPHERE) are regularly processed there, leveraging the considerable science expertise available, and delivered to ESO for wide dissemination (a public archive copy is also maintained at the HC-DC).
Figure 4: The fraction of refereed publications using La Silla Paranal data that made use of the archive, entirely (dark orange bars) or partially (light yellow bars). Source: ESO Telescope Bibliography[24].
Figure 3: The differential number of unique IP addresses as a function of time from which processed files (left panel) and source catalogues (right panel) in the ESO Science Archive are accessed. The IP addresses are a proxy for the number of users, with each user corresponding on average to 1.5 IP addresses. Resorting to IP addresses as a proxy for users is made necessary by the fact that the vast majority of downloads are anonymous.
The first data products were published in December 2022 in the ESO Science Archive Facility and include imaging data from the InfraRed Dual Imaging and Spectrograph (IRDIS) subsystem, observed during the ESO Periods 103 and 104, i.e., acquired between April 2019 and March 2020. Both the time and the instrumental modes covered will expand in future releases. Similarly, a new collaboration is being set up with the Very Large Telescope Interferometer (VLTI) Expertise Centres of the OPTICON RadioNet Pilot[28]. In this case, the processing of GRAVITY data up to calibrated visibilities is performed at ESO, while the Expertise Centres provide scientific guidance and quality control. The evolution of data generated internally at ESO for publication in the archive will be along two main directions. Firstly, we are working towards making the processed data for the new instruments available much sooner than was possible in the past. The goal is to do so at the same time as the raw data first become public, typically about a year after the start of science operations.
This initial delay is determined by the need to characterise the data calibration and processing well enough to provide products of known and documented quality and accuracy. And secondly, to increase the quality of the data processed at ESO we are implementing a more extensive in-depth quality control aimed at identifying ways to improve the products. This is complementing the reprocessing of entire data streams in case of significant improvement of the pipeline and/or calibrations. With the data content increasing in quality, quantity and complexity, the archive tools to browse and access them must evolve too. As an example, the spectroscopic surveys with the 4-metre Multi-Object Spectroscopic Telescope (4MOST) alone will return each year more than three times as many individual processed files as we have collected in the last ten years. The main drivers for this evolution are towards a unification of the web query interfaces to the data, and towards querying the archive content by (selected) physical properties of the astronomical sources. The former aims to reduce the complexity for users by providing as far as possible a single experience where currently different interfaces are in place (for example for processed data[20] and source catalogues[21]). With the latter, we instead aim to provide query capabilities that are closer to the science questions that archive users have. In both cases, the overarching objective is to help scientists to get quickly, efficiently and accurately to the data of interest among the millions of assets that are stored and preserved in the treasure trove which is the ESO Science Archive. ###### Acknowledgements. The list of people who have made possible the growth and success of the ESO Science Archive Facility goes far beyond the authors of this article. We would like to extend our thanks to the ESO colleagues who have worked with us throughout the years, especially the software development and testing team (Vincenzo Forch, Ahmed Mushak Khan, Uwe Lange, Stanislaw Podgorski, Fabio Sogni, Majorana Steller, and Stefano Zammeri). The work, time and dedication of colleagues from the scientific community who have provided processed data to the ESO Science Archive are gratefully acknowledged: their contributions represent a truly invaluable science resource. This article is dedicated to the memory of our colleague Jörg Retzlaff. His hard work, dedication and talent were fundamental in making the ESO Science Archive Facility the powerful science resource for the whole community that it is today. It was an honour and a pleasure to work with him; he will be fondly remembered.
2302.14481
Dumont-Thomas complement numeration systems for $\mathbb{Z}$
We extend the well-known Dumont--Thomas numeration systems to $\mathbb{Z}$ using an approach inspired by the two's complement numeration system. Integers in $\mathbb{Z}$ are canonically represented by a finite word (starting with $\mathtt{0}$ when nonnegative and with $\mathtt{1}$ when negative). The systems are based on two-sided periodic points of substitutions as opposed to the right-sided fixed points. For every periodic point of a substitution, we construct an automaton which returns the letter at position $n\in\mathbb{Z}$ of the periodic point when fed with the representation of $n$ in the corresponding numeration system. The numeration system naturally extends to $\mathbb{Z}^d$. We give an equivalent characterization of the numeration system in terms of a total order on a regular language. Lastly, using particular periodic points, we recover the well-known two's complement numeration system and the Fibonacci analogue of the two's complement numeration system.
Sébastien Labbé, Jana Lepšová
2023-02-28T10:38:43Z
http://arxiv.org/abs/2302.14481v4
# Dumont-Thomas numeration systems for \(\mathbb{Z}\) ###### Abstract We extend the well-known Dumont-Thomas numeration system by considering two-sided periodic points of a substitution, thus allowing to represent any integer in \(\mathbb{Z}\) by a finite word (starting with \(\mathtt{0}\) when nonnegative and with \(\mathtt{1}\) when negative). We show that an automaton returns the letter at position \(n\in\mathbb{Z}\) of the periodic point when fed with the representation of \(n\). The numeration system naturally extends to \(\mathbb{Z}^{d}\). Finally, using particular periodic points, we recover the well-known two's complement numeration system and the Fibonacci's complement numeration system. Keywords:substitution numeration system automaton two's complement. ## 1 Introduction A numeration system for representing nonnegative integers as well as real numbers in a certain interval was proposed in [1]. It is based on right-infinite fixed points of substitutions. Dumont-Thomas numeration system was later explained by so-called prefix-suffix automata associated to a primitive substitution [11]. It is related to the order of a Bratteli Vershik diagram and its natural successor map [10, 15, 14]. Their main motivation was to prove that every dynamical system generated by a substitution of Pisot type on \(d\) letters admits as a topological factor a minimal translation on the torus \(\mathbb{T}^{d-1}\)[11]. As a consequence, we obtain a numeration system representing the elements of \(\mathbb{T}^{d-1}\) by infinite paths in the prefix-suffix automaton, see [20]. Note that the Dumont-Thomas numeration system for \(\mathbb{N}\) is deduced by considering the cylinders of finite length words [11, Corollary 6.2]. It was also generalized to numeration systems for \(\mathbb{N}\)[13] and for real numbers in an interval [13] associated to arbitrary regular languages, see also [12, SS7], [14, SS4], [15], [16], [17], [18], [19], [20], [21]. A recent article extending Dumont-Thomas also concerns \(\beta\)-numeration of real numbers in an interval [21]. A very general framework was proposed recently to extend the Dumont-Thomas numeration system to all integers based on the notion of coding prescription which allows the image of letters to be scattered words of nonconsecutive letters [21]. The goal of this contribution is to propose a simple extension of Dumont-Thomas numeration system to all integers in \(\mathbb{Z}\). It is based on two-sided periodic points of a substitution as opposed to right-infinite fixed points. Intuitively, it is inspired by the adic transformation of a Bratteli-Vershik diagram in the sense that we want to represent the integer \(-n\) by a finite word which is the \(n\)-th predecessor of \(0\) under the successor map. The proofs provided here are simple and follow as much as possible the approach originally proposed by Dumont-Thomas [1]. The motivation for this work comes from the study of substitutive Wang tilings of the plane in general (a preliminary work was done in [1]) where a theory of substitutive numeration systems for \(\mathbb{Z}^{2}\) is needed. ## 2 Preliminaries An _alphabet_\(A\) is a finite set and its elements \(a\in A\) are called _letters_. A _finite word_\(u=u_{0}u_{1}\cdots u_{n-1}\) is a concatenation of letters \(u_{i}\in A\) for every \(i\in\{0,1,\ldots,n-1\}\) and \(\left|u\right|\) denotes its _length_. The _empty word_ is denoted \(\varepsilon\). 
The set of all finite words over alphabet \(A\) is denoted \(A^{*}\) and the set of all nonempty words over alphabet \(A\) is denoted \(A^{+}=A^{*}\setminus\{\varepsilon\}\). The set \(A^{*}\) with concatenation as the monoid operation and \(\varepsilon\) as the neutral element forms a monoid. A _morphism_ over \(A\) is a map \(\eta:A^{*}\to A^{*}\) such that \(\eta(uv)=\eta(u)\eta(v)\) for all words \(u,v\in A^{*}\). A _substitution_\(\eta:A^{*}\to A^{*}\) is a morphism such that \(\eta(a)\in A^{+}\) is nonempty for every \(a\in A\) and there exists \(a\in A\) such that \(\lim_{n\to+\infty}\left|\eta^{n}(a)\right|=+\infty\). A morphism \(\eta\) is said _primitive_ if there exists \(k\in\mathbb{N}\) such that for every \(a,b\in A\) the letter \(a\) appears in \(\eta^{k}(b)\). We call \(u_{0}u_{1}u_{2}\cdots\in A^{\mathbb{Z}_{\geq 0}}\) a _right-infinite word_ and \(\cdots u_{-3}u_{-2}u_{-1}\in A^{\mathbb{Z}_{<0}}\) a _left-infinite word_. We call \(u\in A^{\mathbb{Z}}\) a _two-sided word_ and we separate by a dot its elements \(u_{-1}\) and \(u_{0}\) to indicate the origin, i.e., \(u=\cdots u_{-3}u_{-2}u_{-1}.u_{0}u_{1}u_{2}\cdots\). Substitutions can be applied naturally to two-sided words \(u\in A^{\mathbb{Z}}\) by setting \[\eta(\ldots u_{-3}u_{-2}u_{-1}.u_{0}u_{1}u_{2}\cdots)=\cdots\eta(u_{-3})\eta(u _{-2})\eta(u_{-1}).\eta(u_{0})\eta(u_{1})\eta(u_{2})\cdots.\] Let \(\mathbb{D}\in\{\mathbb{Z},\mathbb{Z}_{\geq 0},\mathbb{Z}_{<0}\}\). A word \(u\in A^{\mathbb{D}}\) is called _periodic point_ of the substitution \(\eta\) if there exists an integer \(p\geq 1\) such that \(\eta^{p}(u)=u\). Such an integer \(p\) is called a _period_. A periodic point with a period \(p=1\) is called a _fixed point_ of \(\eta\). The set of periodic points of \(\eta\) is denoted \(\mathrm{Per}_{\mathbb{D}}(\eta)=\{u\in A^{\mathbb{D}}\mid\eta^{p}(u)=u\}\). Since we are mostly interested in two-sided words in this contribution, we omit the domain when \(\mathbb{D}=\mathbb{Z}\) and we write \(\mathrm{Per}(\eta)=\mathrm{Per}_{\mathbb{Z}}(\eta)\). A word \(u\in\mathrm{Per}(\eta)\) of period \(p\) is of the form \(u=\lim_{n\to+\infty}\eta^{pn}(b).\eta^{pn}(a)\) where \(a,b\in A\) are letters such that \(\eta^{p}(a)=au\) and \(\eta^{p}(b)=vb\) for some nonempty words \(u,v\in A^{+}\). In this case, we call \(s=b.a\) the _seed_ of the periodic point \(u\). Let \(u=u_{0}u_{1}\cdots\in\mathrm{Per}_{\mathbb{N}}(\eta)\) with \(u_{0}=a\). The previous terminology is inspired by [1] where a prefix-suffix automaton is associated with \(\eta\). However, for our goal an automaton associated with \(\eta\) as in [1] is sufficient. Let \(\mathcal{D}=\left\{0,...,\max_{c\in A}\left|\eta(c)\right|-1\right\}\) be an alphabet. The deterministic finite automaton with output (DFAO) associated to the substitution \(\eta\) and letter \(a\) is the \(5\)-tuple, \(\mathcal{A}_{\eta,a}=(A,\mathcal{D},\delta,a,A)\), where the transition function \(\delta:A\times\mathcal{D}\to A\) is a partial function such that \(\delta(b,i)=c\) if and only if \(c=w_{i}\) and \(\eta(b)=w_{0}\ldots w_{\left|\eta(b)\right|-1}\) We denote by \(\mathcal{L}(\mathcal{A}_{\eta,a})\) the words accepted by the automaton \(\mathcal{A}_{\eta,a}\) and for \(q\in\mathbb{N}\) we denote \(\mathcal{L}_{q}(\mathcal{A}_{\eta,a})\) the set of words \(w\in\mathcal{L}(\mathcal{A}_{\eta,a})\) such that \(|w|=q\). 
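As a concrete illustration of the automaton \(\mathcal{A}_{\eta,a}\), the sketch below encodes a substitution as a dictionary and implements the partial transition function \(\delta(b,i)=c\), where \(c\) is the letter at position \(i\) of \(\eta(b)\). The Fibonacci substitution \(\mathtt{a}\mapsto\mathtt{ab}\), \(\mathtt{b}\mapsto\mathtt{a}\) is used here only as an example and is not assumed anywhere else.

```python
# Minimal sketch of the DFAO A_{eta,a}: states are letters, the input alphabet
# is D = {0, ..., max |eta(c)| - 1}, and delta(b, i) is the i-th letter of eta(b).
eta = {"a": "ab", "b": "a"}          # the Fibonacci substitution, for illustration

def delta(state, digit):
    """Partial transition function: defined if and only if digit < |eta(state)|."""
    image = eta[state]
    if digit >= len(image):
        raise ValueError("transition undefined: digit too large for this state")
    return image[digit]

def run(word, start="a"):
    """Read a word over D from the initial state and return the final state."""
    state = start
    for digit in word:
        state = delta(state, digit)
    return state

# The word (1, 0) is accepted from state 'a' and ends in state 'a':
print(run([1, 0]))                   # -> 'a'
```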
## 3 Dumont-Thomas numeration system for \(\mathbb{N}\) In this section, we recall Dumont-Thomas numeration system for \(\mathbb{N}\) which was based on substitutions having a right-infinite fixed point [1]. It uses the definition of admissible sequences. Definition 1 (admissible sequence): [1] Let \(a\in A\) be a letter, \(k\) an integer and, for each integer \(i\), \(0\leq i\leq k\), \((m_{i},a_{i})\) be an element of \(A^{*}\times A\). We say that the finite sequence \((m_{i},a_{i})_{i=0,\ldots,k}\) is _admissible_ if and only if, for all \(i\), \(1\leq i\leq k\), \(m_{i-1}a_{i-1}\) is a prefix of \(\eta(a_{i})\). We say that this sequence is _a-admissible_ if it is admissible and, moreover, \(m_{k}a_{k}\) is a prefix of \(\eta(a)\). Dumont-Thomas proved the following result which we rewrite in our notation. Theorem 3.1: [1, Theorem 1.5] _Let \(a\in A\) and let \(\eta:A^{*}\to A^{*}\) be a substitution. Let \(u=\eta(u)\) be a right-infinite fixed point of \(\eta\) with \(u_{0}=a\). For every integer \(n\geq 1\), there exists a unique integer \(k=k(n)\) and a unique sequence \((m_{i},a_{i})_{i=0,\ldots,k}\) such that_ * _this sequence is_ \(a\)_-admissible and_ \(m_{k}\neq\varepsilon\)_,_ * \(u_{0}u_{1}\cdots u_{n-1}=\eta^{k}(m_{k})\eta^{k-1}(m_{k-1})\cdots\eta^{0}(m_{ 0})\)_._ The proof of the above theorem was based on the following lemmas which we cite here as we need them in what follows. Lemma 3: [1, Lemma 1.1] _Let \(k\) be an integer. If \((m_{i},a_{i})_{i=0,\ldots,k}\) is an admissible sequence, then \(\sum_{j=0}^{k}|\eta^{j}(m_{j})|<|\eta^{k}(m_{k}a_{k})|\)._ Lemma 4: [1, Lemma 1.3] _Let \(k\) be an integer. Let \(b\in A\), \((m_{i},a_{i})_{i=0,\ldots,k}\) and \((m^{\prime}_{i},a^{\prime}_{i})_{i=0,\ldots,k}\) be two \(b\)-admissible sequences and \(n\) be an integer such that_ \[n=\sum_{j=0}^{k}|\eta^{j}(m_{j})|=\sum_{j=0}^{k}|\eta^{j}(m^{\prime}_{j})|.\] _Then for every \(i\), \(0\leq i\leq k\), we have \((m_{i},a_{i})=(m^{\prime}_{i},a^{\prime}_{i})\)._ Lemma 5: [1, Lemma 1.4] _Let \(\eta\colon A^{*}\to A^{*}\) be a substitution. Let \(\ell\geq 1\) be an integer, \(a\in A\) a letter and \(m\in A^{*}\) a proper prefix of the word \(\eta^{\ell}(a)\). Then there exist \((m^{\prime},a^{\prime})\in A^{*}\times A\) and \(m^{\prime\prime}\in A^{*}\) such that \(m^{\prime}a^{\prime}\) is a prefix of \(\eta(a)\), \(m^{\prime\prime}\) is a proper prefix of \(\eta^{\ell-1}(a^{\prime})\) and \(m=\eta^{\ell-1}(m^{\prime})m^{\prime\prime}\)._ ### Few extensions of Dumont-Thomas results In this subsection, we propose few small extensions of Dumont-Thomas lemmas. Firstly, we observe that admissible sequences are related to automata as follows. Lemma 6: _If \((m_{i},a_{i})_{i=0,\ldots,k}\) is a \(x\)-admissible sequence, then_ \[a_{i}=\mathcal{A}_{\eta,x}(|m_{k}|,|m_{k-1}|,\ldots,|m_{i}|),\qquad\text{ for every }i=0,\ldots,k.\] Proof: The proof is done by induction on \(i\). If \(i=k\), then \(a_{i}=a_{k}=\eta(x)[|m_{k}|]=\mathcal{A}_{\eta,x}(|m_{k}|)\). If \(i<k\), then \[a_{i} =\eta(a_{i+1})[|m_{i}|]=\eta\left(\mathcal{A}_{\eta,x}(|m_{k}|, \ldots,|m_{i+1}|)\right)[|m_{i}|]\] \[=\mathcal{A}_{\eta,x}(|m_{k}|,\ldots,|m_{i+1}|,|m_{i}|).\quad \sqcap\] The following lemma reformulates and strenghtens Lemma 1.2 from [1]. It allows to conclude that two admissible sequences are of the same length. Lemma 7: _Let \(\eta:A^{*}\to A^{*}\) be a substitution and \(x\in A\). Let \((m_{i},a_{i})_{i=0,\ldots,k}\) be an \(x\)-admissible sequence._ 1. 
_If_ \(m_{k}\neq\varepsilon\)_, then_ \(|\eta^{k}(x)|\leq\sum_{j=0}^{k}|\eta^{j}(m_{j})|<|\eta^{k+1}(x)|\)_._ 2. _If_ \(m_{k}m_{k-1}\cdots m_{k-p+1}\neq\varepsilon\)_, then_ \(|\eta^{k-p+1}(x)|\leq\sum_{j=0}^{k}|\eta^{j}(m_{j})|<|\eta^{k+1}(x)|\)_._ 3. _If_ \(m_{k}a_{k}\neq\eta(x)\) _and_ \(x\) _is a suffix of_ \(\eta(x)\)_, then_ \[-|\eta^{k+1}(x)|\leq-|\eta^{k+1}(x)|+\sum_{j=0}^{k}|\eta^{j}(m_{j})|<-|\eta^{k }(x)|.\] 4. _If_ \(\eta^{p-1}(m_{k})\eta^{p-2}(m_{k-1})\cdots\eta^{0}(m_{k-p+1})a_{k-p+1}\neq\eta ^{p}(x)\) _and_ \(x\) _is a suffix of_ \(\eta^{p}(x)\)_, then_ \[-|\eta^{k+1}(x)|\leq-|\eta^{k+1}(x)|+\sum_{j=0}^{k}|\eta^{j}(m_{j})|<-|\eta^{k -p+1}(x)|.\] Proof: (i) Since \(m_{k}\neq\varepsilon\), the word \(m_{k}\) starts with letter \(x\), thus using Lemma 3, we have \[|\eta^{k}(x)|\leq|\eta^{k}(m_{k})|\leq\sum_{j=0}^{k}|\eta^{j}(m_{j})|<|\eta^{k }(m_{k}a_{k})|\leq|\eta^{k+1}(x)|.\] (ii) If \(m_{k}m_{k-1}\cdots m_{k-p+1}\neq\varepsilon\), then the word \(w=\eta^{p-1}(m_{k})\eta^{p-2}(m_{k-1})\cdots\eta^{0}(m_{k-p+1})\) is nonempty and starts with letter \(x\). Thus \[|\eta^{k-p+1}(x)|\leq|\eta^{k-p+1}(w)|\leq\sum_{j=0}^{k}|\eta^{j}(m_{j})|<| \eta^{k}(m_{k}a_{k})|\leq|\eta^{k+1}(x)|.\] (iii) If \(m_{k}a_{k}\neq\eta(x)\) and \(x\) is a suffix of \(\eta(x)\), and using Lemma 3, we get \[-|\eta^{k+1}(x)|\leq-|\eta^{k+1}(x)|+\sum_{j=0}^{k}|\eta^{j}(m_{j})|<-|\eta^{ k+1}(x)|+|\eta^{k}(m_{k}a_{k})|<-|\eta^{k}(x)|.\] (iv) Assume \(w=\eta^{p-1}(m_{k})\eta^{p-2}(m_{k-1})\cdots\eta^{0}(m_{k-p+1})a_{k-p+1}\neq \eta^{p}(x)\) and \(x\) is a suffix of \(\eta^{p}(x)\). Observe that \(w\) is a prefix of \(\eta^{p}(x)\) and \(|\eta^{k+1}(x)|=|\eta^{k-p+1}(\eta^{p}(x))|\geq|\eta^{k-p+1}(w)|+|\eta^{k-p+1} (x)|\). Thus \[-|\eta^{k+1}(x)| \leq-|\eta^{k+1}(x)|+\sum_{j=0}^{k}|\eta^{j}(m_{j})|\] \[=-|\eta^{k+1}(x)|+\sum_{j=0}^{k-p}|\eta^{j}(m_{j})|+\sum_{j=k-p+1} ^{k}|\eta^{j}(m_{j})|\] \[<-|\eta^{k+1}(x)|+|\eta^{k-p}(m_{k-p}a_{k-p})|+|\eta^{k-p+1}(w)| -|\eta^{k-p+1}(a_{k-p+1})|\] \[\leq-|\eta^{k+1}(x)|+|\eta^{k-p+1}(w)|\leq-|\eta^{k-p+1}(x)|.\qquad\sqcap\] Lemma 5 can be used to construct an admissible sequence from a prefix of the image of a letter under the \(p\)-th power of a substitution. Lemma 8: _Let \(\eta:A^{*}\to A^{*}\) be a substitution and \(p\geq 1\) be an integer. If \(m\in A^{*}\) and \(x\in A\) are such that \(m\) is a prefix of \(\eta^{p}(x)\), then there exists a unique \(x\)-admissible sequence \((m_{i},a_{i})_{i=0,\ldots,p-1}\) such that_ \[m=\eta^{p-1}(m_{p-1})\eta^{p-2}(m_{p-2})\cdots\eta^{0}(m_{0}). \tag{1}\] Proof: (Unicity) Let \((m_{i},a_{i})_{i=0,\ldots,p}\) and \((m^{\prime}_{i},a^{\prime}_{i})_{i=0,\ldots,p}\) be two \(x\)-admissible sequences satisfying the hypothesis. Let \(k=\max\{i=0,\ldots,p\mid m_{i}\neq\varepsilon\}\) and \(k^{\prime}=\max\{i=0,\ldots,p\mid m^{\prime}_{i}\neq\varepsilon\}\). Then \[\sum_{j=0}^{k}|\eta^{j}(m_{j})|=|m|=\sum_{j=0}^{k^{\prime}}|\eta^{j}(m^{\prime }_{j})|.\] From Lemma 7, we have \(|\eta^{k}(a)|\leq|m|<|\eta^{k+1}(a)|\) and \(|\eta^{k^{\prime}}(a)|\leq|m|<|\eta^{k^{\prime}+1}(a)|\), so that \(k=k^{\prime}\). The unicity of the sequence \((m_{i},a_{i})\) follows from Lemma 4. (Existence) We do the proof by induction on \(p\). If \(p=1\), then \(m\) is a prefix of \(\eta(x)\). Let \(m_{0}=m\) and \(a_{0}=x\). The length-1 sequence \((m_{i},a_{i})_{i=0}\) is \(x\)-admissible and satisfies \(m=\eta^{0}(m_{0})\). Now let \(m\in A^{*}\) and \(x\in A\) be such that \(m\) is prefix of \(\eta^{p+1}(x)\). 
From Lemma 5, there exist \((m_{p},a_{p})\in A^{*}\times A\) and \(m^{\prime\prime}\in A^{*}\) such that \(m_{p}a_{p}\) is a prefix of \(\eta(x)\), \(m^{\prime\prime}\) is a proper prefix of \(\eta^{p}(a_{p})\) and \(m=\eta^{p}(m_{p})m^{\prime\prime}\). By the induction hypothesis, there exists a \(a_{p}\)-admissible sequence \((m_{i},a_{i})_{i=0,\ldots,p-1}\) such that \[m^{\prime\prime}=\eta^{p-1}(m_{p-1})\eta^{p-2}(m_{p-2})\cdots\eta^{0}(m_{0}).\] Therefore, \[m=\eta^{p}(m_{p})m^{\prime\prime}=\eta^{p}(m_{p})\eta^{p-1}(m_{p-1})\eta^{p-2 }(m_{p-2})\cdots\eta^{0}(m_{0}).\] The extended sequence \((m_{i},a_{i})_{i=0,\ldots,p}\) is \(x\)-admissible since \(m_{p-1}a_{p-1}\) is a prefix of \(\eta(a_{p})\) and \(m_{p}a_{p}\) is a prefix of \(\eta(x)\). Let \(\eta:A^{*}\to A^{*}\) be a substitution and \(\mathcal{D}=\{0,...,\max_{c\in A}|\eta(c)|-1\}\). Lemma 8 allows to define a map for every integer \(p\geq 1\) and \(x\in A\) as follows \[\begin{array}{r@{\quad}l}\operatorname{tail}_{\eta,p,x}:\{0,1,\ldots,|\eta^ {p}(x)|-1\}\quad&\rightarrow\mathcal{D}^{p}\\ n\quad&\mapsto|m_{p-1}|,|m_{p-2}|,\ldots,|m_{0}|.\end{array}\] where \((m_{i},a_{i})_{i=0,\ldots,p-1}\) is the unique \(x\)-admissible sequence satisfying Equation (1) with \(m\) being the prefix of length \(n\) of \(\eta^{p}(x)\). The map \(\operatorname{tail}_{\eta,p,x}\) will be used in Section 5. Lemma 9: _Let \(\eta:A^{*}\to A^{*}\) be a substitution and \(p\geq 1\) be an integer. Let \(x\in A\). Then for every \(\ell\in\{0,1,\ldots,|\eta^{p}(x)|-1\}\) we have_ \[\eta^{p}(x)[\ell]=\mathcal{A}_{\eta,x}(\operatorname{tail}_{\eta,p,x}(\ell)).\] Proof: Let \(m\) be the prefix of \(\eta^{p}(x)\) of length \(\ell\). From Lemma 8, there exists a unique \(x\)-admissible sequence \((m_{i},a_{i})_{i=0,\ldots,p-1}\) such that \[m=\eta^{p-1}(m_{p-1})\eta^{p-2}(m_{p-2})\cdots\eta^{0}(m_{0}).\] The word \(ma_{0}\) is a prefix of \(\eta^{p}(x)\), thus \(\eta^{p}(x)[\ell]=a_{0}\). From Lemma 6, \[\eta^{p}(x)[\ell]=a_{0}=\mathcal{A}_{\eta,x}(|m_{p-1}|,|m_{p-2}|,\ldots,|m_{0}| )=\mathcal{A}_{\eta,x}(\mathrm{tail}_{\eta,p,x}(\ell)).\qed\] ## 4 Dumont-Thomas numeration system for \(\mathbb{Z}\) In this section, we prove extensions of Theorem 2.2 to right-infinite and left-infinite periodic points of substitutions. Theorem 4.1: _Let \(a\in A\) and \(\eta:A^{*}\to A^{*}\) be a substitution. Let \(u\in\mathrm{Per}_{\mathbb{Z}_{\geq 0}}(\eta)\) with a period \(p\geq 1\) such that \(u_{0}=a\). For every integer \(n\geq 1\), there exists a unique integer \(k=k(n)\) such that \(p\) divides \(k+1\) and a unique sequence \((m_{i},a_{i})_{i=0,\ldots,k}\) such that_ 1. _this sequence is_ \(a\)_-admissible and_ \(m_{k}m_{k-1}\cdots m_{k-p+1}\neq\varepsilon\)_,_ 2. \(u_{0}u_{1}\cdots u_{n-1}=\eta^{k}(m_{k})\eta^{k-1}(m_{k-1})\cdots\eta^{0}(m_{ 0})\)_._ Proof: Let \(n\geq 1\) be an integer. There exists a unique integer \(k\in\mathbb{N}\) such that \(p\) divides \(k+1\) and \[|\eta^{k-p+1}(a)|\leq n<|\eta^{k+1}(a)|.\] The word \(m=u_{0}u_{1}\cdots u_{n-1}\) is thus a proper prefix of \(\eta^{k+1}(a)\). From Lemma 8, there exists a unique \(a\)-admissible sequence \((m_{i},a_{i})_{i=0,\ldots,k}\) such that \[m=\eta^{k}(m_{k})\eta^{k-1}(m_{k-1})\cdots\eta^{0}(m_{0}).\] Assume by contradiction that \(m_{k}m_{k-1}\cdots m_{k-p+1}=\varepsilon\). Then \(a_{k-p+1}=a\) and from Lemma 3, we have \[n=|m|=\sum_{j=0}^{k-p}|\eta^{j}(m_{j})|<|\eta^{k-p}(m_{k-p})|\leq|\eta^{k-p+1} (a_{k-p+1})|,\] a contradiction. Thus \(m_{k}m_{k-1}\cdots m_{k-p+1}\neq\varepsilon\). 
We now adapt Dumont-Thomas's theorem to the left-infinite periodic points.

Theorem 4.2: _Let \(b\in A\) and \(\eta:A^{*}\to A^{*}\) be a substitution. Let \(u\in\mathrm{Per}_{\mathbb{Z}_{<0}}(\eta)\) with a period \(p\geq 1\) such that \(u_{-1}=b\). For every integer \(n\leq-2\), there exists a unique integer \(k=k(n)\) such that \(p\) divides \(k+1\) and a unique sequence \((m_{i},a_{i})_{i=0,\ldots,k}\) such that_ 1. _this sequence is_ \(b\)_-admissible and_ \[\eta^{p-1}(m_{k})\eta^{p-2}(m_{k-1})\cdots\eta^{0}(m_{k-p+1})a_{k-p+1}\neq\eta^{p}(b), \tag{2}\] 2. \(u_{-|\eta^{k+1}(b)|}\cdots u_{n-2}u_{n-1}=\eta^{k}(m_{k})\eta^{k-1}(m_{k-1})\cdots\eta^{0}(m_{0})\)_._

Proof: Let \(n\leq-2\) be an integer. There exists a unique integer \(k\in\mathbb{N}\) such that \(p\) divides \(k+1\) and \[-|\eta^{k+1}(b)|\leq n<-|\eta^{k-p+1}(b)|.\] Therefore the word \(m=u_{-|\eta^{k+1}(b)|}\cdots u_{n-2}u_{n-1}\) of length \[|m|=|\eta^{k+1}(b)|+n<|\eta^{k+1}(b)|-|\eta^{k-p+1}(b)|<|\eta^{k+1}(b)|\] is a proper prefix of the word \(\eta^{k+1}(b)\). From Lemma 8, there exists a unique \(b\)-admissible sequence \((m_{i},a_{i})_{i=0,...,k}\) such that \[m=\eta^{k}(m_{k})\eta^{k-1}(m_{k-1})\cdots\eta^{0}(m_{0}).\] By contradiction, assume that (2) is an equality. Then \(a_{k-p+1}=b\) and \[|m| =|\eta^{k-p+1}(\eta^{p}(b))|-|\eta^{k-p+1}(a_{k-p+1})|+\sum_{j=0}^{k-p}|\eta^{j}(m_{j})|\] \[\geq|\eta^{k+1}(b)|-|\eta^{k-p+1}(b)|,\] a contradiction.

## 5 Numeration systems for \(\mathbb{Z}\) based on periodic points

In this section, we define a numeration system for \(\mathbb{Z}\) using the results proved in the previous section.

Definition 12 (Numeration system): Let \(\eta:A^{*}\to A^{*}\) be a substitution and let \(u\in\operatorname{Per}(\eta)\). Let \(\mathcal{D}=\{0,...,\max_{c\in A}|\eta(c)|-1\}\). We define \[\operatorname{rep}_{u}:\mathbb{Z} \to\mathcal{D}^{*}\] \[n \mapsto\begin{cases}0,|m_{k}|,|m_{k-1}|,\ldots,|m_{0}|&\text{ if }n\geq 1,\\ 0&\text{ if }n=0,\\ 1&\text{ if }n=-1,\\ 1,|m_{k}|,|m_{k-1}|,\ldots,|m_{0}|&\text{ if }n\leq-2.\end{cases}\] where \(k=k(n)\) is the unique integer and \((m_{i},a_{i})_{i=0,...,k}\) is the unique sequence obtained from Theorem 10 (Theorem 11) applied on \(u|_{\mathbb{Z}_{\geq 0}}\) (\(u|_{\mathbb{Z}_{<0}}\)) if \(n\geq 1\) (if \(n\leq-2\), respectively). Note that if \(p\in\mathbb{N}\) is a period of \(u\), then it divides \(|\operatorname{rep}_{u}(n)|-1\) for every \(n\in\mathbb{Z}\). When \(u=\eta^{p}(u)\) is a periodic point of a substitution \(\eta\), then it is also a fixed point of the substitution \(\eta^{p}\). Thus, Theorem 2.2 may be used to define a numeration system for \(\mathbb{N}\), but it leads to a much larger alphabet size \(\#\mathcal{D}\). One advantage of Definition 12 is that the size of the alphabet \(\mathcal{D}\) is independent of the period \(p\).

**Definition 13** (quotient, remainder): _Let \(\eta:A^{*}\to A^{*}\) be a substitution and let \(u\in\operatorname{Per}(\eta)\). Let \(n\in\mathbb{Z}\setminus\{-1,0\}\) be an integer and \(k=k(n)\) be the unique integer and \((m_{i},a_{i})_{i=0,\ldots,k}\) be the unique sequence obtained from Theorem 10 (Theorem 11) applied on \(u|_{\mathbb{Z}_{\geq 0}}\) (\(u|_{\mathbb{Z}_{<0}}\)) if \(n\geq 1\) (if \(n\leq-2\), respectively).
We define the \(u\)-quotient of \(n\) as_ \[q=\begin{cases}|\eta^{k-p}(m_{k})\eta^{k-p-1}(m_{k-1})\cdots\eta^{0}(m_{p})|& \text{ if }n\geq 1,\\ |\eta^{k-p}(m_{k})\eta^{k-p-1}(m_{k-1})\cdots\eta^{0}(m_{p})|-|\eta^{k-p+1}(b)| &\text{ if }n\leq-2,\end{cases}\] _and the \(u\)-remainder of \(n\) as \(r=|\eta^{p-1}(m_{p-1})\eta^{p-2}(m_{p-2})\cdots\eta^{0}(m_{0})|\)._ Notice that the \(u\)-quotient \(q\) and \(u\)-remainder \(r\) of an integer \(n\in\mathbb{Z}\setminus\{-1,0\}\) fulfil that if \(n\geq 1\) then \(0\leq q<n\) and if \(n\leq-2\) then \(-1\geq q>n\). Consequently, \(|q|<|n|\). Also, if \(\eta\) is \(d\)-uniform, then the \(u\)-quotient and \(u\)-remainder of \(n\) correspond to the quotient and remainder of the division of \(n\) by \(d^{p}\). Remark 14: Note that if we know the \(u\)-quotient \(q\) and the \(u\)-remainder \(r\), we can recover the sequence \(|m_{p-1}|,|m_{p-2}|,\cdots,|m_{0}|\). Indeed, it is equal to \(\operatorname{tail}_{\eta,p,u_{q}}(r)\). Lemma 15: _Let \(\eta:A^{*}\to A^{*}\) be a substitution and \(u\in\operatorname{Per}(\eta)\) with period \(p\geq 1\). Let \(n\in\mathbb{Z}\setminus\{-1,0\}\) be an integer. If \(q\in\mathbb{Z}\) is the \(u\)-quotient and \(r\in\mathbb{N}\) is the \(u\)-remainder of \(n\), then_ \[u_{n}=\eta^{p}(u_{q})[r]\quad\text{ and }\quad\operatorname{rep}_{u}(n)= \operatorname{rep}_{u}(q)\cdot\operatorname{tail}_{\eta,p,u_{q}}(r).\] Proof: Let \(n\in\mathbb{Z}\setminus\{-1,0\}\) and let \(q\) be the \(u\)-quotient and \(r\) the \(u\)-remainder of \(n\). Let \(u\in\operatorname{Per}(\eta)\) and let \(a,b\in A\) denote letters such that \(u_{-1}=b\) and \(u_{0}=a\). Suppose \(n\geq 1\). From Theorem 10, there exists a unique \(a\)-admissible sequence \((m_{i},a_{i})_{i=0,\ldots,k}\) such that \(u_{0}\ldots u_{n-1}=\eta^{k}(m_{k})\ldots\eta^{0}(m_{0})\). Also \(\eta^{k}(m_{k})\ldots\eta^{0}(m_{0})a_{0}\) is a prefix of \(\eta^{k+1}(a)\) which is a prefix of \(u\), thus \(u_{n}=a_{0}\). Since \(u\) has period \(p\), the word \[\eta^{k-p}(m_{k})\eta^{k-p-1}(m_{k-1})\cdots\eta^{0}(m_{p})a_{p}\] is a prefix of \(\eta^{k+1-p}(a)\) which is a prefix of \(u\). Thus \(a_{p}=u_{q}\). Since \(\eta^{p-1}(m_{p-1})\cdots\eta^{0}(m_{0})a_{0}\) is a prefix of \(\eta^{p}(a_{p})\), we deduce that \(u_{n}=a_{0}=\eta^{p}(a_{p})[r]=\eta^{p}(u_{q})[r]\). Suppose \(n\leq-2\). From Theorem 11, there exists a unique \(b\)-admissible sequence \((m_{i},a_{i})_{i=0,\ldots,k}\) such that \(u_{-|\eta^{k+1}(b)|}\ldots u_{n-1}=\eta^{k}(m_{k})\ldots\eta^{0}(m_{0})\). Also \(\eta^{k}(m_{k})\ldots\eta^{0}(m_{0})a_{0}\) is a prefix of \(\eta^{k+1}(b)\), which is a prefix of \(u_{-|\eta^{k+1}(b)|}\ldots u_{-1}\), thus \(u_{n}=a_{0}\). Since \(u\) has period \(p\), the word \[\eta^{k-p}(m_{k})\eta^{k-p-1}(m_{k-1})\cdots\eta^{0}(m_{p})a_{p}\] is a prefix of \(\eta^{k+1-p}(a)\) which is a prefix of \(u_{-|\eta^{k-p+1}(b)|}\ldots u_{-1}\), thus \(a_{p}=u_{q}\). Since \(\eta^{p-1}(m_{p-1})\cdots\eta^{0}(m_{0})a_{0}\) is a prefix of \(\eta^{p}(a_{p})\), we deduce that \(u_{n}=a_{0}=\eta^{p}(a_{p})[r]=\eta^{p}(u_{q})[r]\). 
To finish the proof for both cases simultaneously, if \(n\geq 1\) (\(n\leq-2\)), applying Theorem 10 (Theorem 11) on the \(u\)-quotient \(q\) gives for \(\mathtt{d}=\mathtt{0}\) (\(\mathtt{d}=\mathtt{1}\)) \[\operatorname{rep}_{u}(q)=\mathtt{d},|m_{k}|,|m_{k-1}|,\ldots,|m_{p}|.\] As \(n\geq 1\) if and only if \(q\geq 0\), we have \[\operatorname{rep}_{u}(n) =\mathsf{d},\ |m_{k}|,|m_{k-1}|,\ldots,|m_{p}|,|m_{p-1}|,\ldots,|m_{0}|\] \[=\operatorname{rep}_{u}(q)\cdot|m_{p-1}|,\ldots,|m_{0}|= \operatorname{rep}_{u}(q)\cdot\operatorname{tail}_{\eta,p,u_{q}}(r).\qquad\sqcap\] ## 6 Periodic points as Automatic Sequences Let \(\eta:A^{*}\to A^{*}\) be a substitution and let \(u\in\operatorname{Per}(\eta)\). Denote \(s=(u_{-1},u_{0})\) and \(\mathcal{D}=\{0,...,\max_{c\in A}|\eta(c)|-1\}\). We associate an automaton \(\mathcal{A}_{\eta,s}\) with \((\eta,s)\) by adding a new state \(\mathtt{start}\) and two additional edges to the automaton \(\mathcal{A}_{\eta,a}\) defined in [1]. The automaton \(\mathcal{A}_{\eta,s}=(A\cup\{\mathtt{start}\}\,,\mathcal{D},\delta,\mathtt{ start},A)\) has the transition function \(\delta:A\cup\{\mathtt{start}\}\to A\) such that * \(\delta(\mathtt{start},\mathtt{0})=s_{0}=a,\quad\delta(\mathtt{start}, \mathtt{1})=s_{-1}=b\), * for every \(c,d\in A\), every \(w=w_{0}w_{1}\ldots w_{\ell-1}\in A^{\ell}\) and every \(i\in\mathcal{D}\), it holds that \(\delta(c,i)=d\) if and only if \(\eta(c)=w\) and \(w_{i}=d\). Examples of automata associated to the Fibonacci substitution are shown in Figure 1. The automaton \(\mathcal{A}_{\eta,s}\) is related to the usual automata \(\mathcal{A}_{\eta,a}\) and \(\mathcal{A}_{\eta,b}\) according to the following equalities for every \(w\in A^{*}\): \[\mathcal{A}_{\eta,s}(\mathtt{0}w)=\mathcal{A}_{\eta,a}(w)\qquad\text{and} \qquad\mathcal{A}_{\eta,s}(\mathtt{1}w)=\mathcal{A}_{\eta,b}(w). \tag{3}\] Also if \(\mathcal{A}_{\eta,s}(w)=a\) for some \(w\in A^{+}\), then for every \(u\in A^{*}\) \[\mathcal{A}_{\eta,a}(u)=\mathcal{A}_{\eta,s}(wu). \tag{4}\] A theorem of Cobham [11] says that a sequence \(u=(u_{n})_{n\geq 0}\) is \(k\)-automatic with \(k\geq 2\) if and only if it is the image, under a coding, of a fixed point of a \(k\)-uniform morphism [1, SS6]. It was extended to non-uniform morphisms [15], see also [1, SS3]. The following extends the above results to the case of two-sided periodic points of non-uniform substitutions. Theorem 6.1: _Let \(\eta:A^{*}\to A^{*}\) be a substitution. Let \(u\in\operatorname{Per}(\eta)\) and denote \(s=(u_{-1},u_{0})\). Then for every \(n\in\mathbb{Z}\)_ \[u_{n}=\mathcal{A}_{\eta,s}(\operatorname{rep}_{u}(n)).\] Proof: If \(n\in\{0,-1\}\) then by definition we have \(u_{n}=s_{n}=\mathcal{A}_{\eta,s}(\operatorname{rep}_{u}(n))\). Let \(n\in\mathbb{Z}\backslash\{0,-1\}\). Induction hypothesis: for every \(m\in\mathbb{Z}\) such that \(|m|<|n|\) it holds that \(x_{m}=\mathcal{A}_{\eta,s}(\operatorname{rep}_{u}(m))\). Let \(q\) be the \(u\)-quotient and \(r\) the \(u\)-remainder of \(n\). As \(|q|<|n|\), \(q\) fulfils the induction hypothesis, i.e., \(u_{q}=\mathcal{A}_{\eta,s}(\operatorname{rep}_{u}(q))\). From Lemma 15 we have \(u_{n}=\eta^{p}(u_{q})[r]\) and \(\operatorname{rep}_{u}(n)=\operatorname{rep}_{u}(q)\cdot\operatorname{tail}_ {\eta,p,u_{q}}(r)\). 
Using Lemma 9 and Equation (4), we have \[u_{n} =\eta^{p}(u_{q})[r]=\mathcal{A}_{\eta,u_{q}}(\operatorname{tail}_ {\eta,p,u_{q}}(r))\] \[=\mathcal{A}_{\eta,s}(\operatorname{rep}_{u}(q)\cdot\operatorname {tail}_{\eta,p,u_{q}}(r))=\mathcal{A}_{\eta,s}(\operatorname{rep}_{u}(n)). \qed\] ## 7 Numeration systems for \(\mathbb{Z}^{d}\) based on periodic points A numeration system for \(\mathbb{Z}^{d}\) can be deduced from the numeration system for \(\mathbb{Z}\) based on a periodic point. Since not all integers are represented by words of the same length, we propose here a way to pad them to a common length. Let \(\eta:A^{*}\to A^{*}\) be a substitution and \(u\in\operatorname{Per}(\eta)\) with period \(p\geq 1\). Let \(\mathtt{W}_{\min}\) and \(\mathtt{W}_{\max}\) be the following minimum and the maximum element under the tail map with particular parameters: \[\mathtt{W}_{\min} =\operatorname{tail}_{\eta,p,u_{0}}(0)=\mathtt{0}^{p},\] \[\mathtt{W}_{\max} =\operatorname{tail}_{\eta,p,u_{-1}}(|\eta^{p}(u_{-1})|-1).\] The words \(\mathtt{W}_{\min}\) and \(\mathtt{W}_{\max}\) play the role of neutral words in the numeration system as illustrated in the next lemma. Lemma 17: _Let \(\eta:A^{*}\to A^{*}\) be a substitution. Let \(u\in\operatorname{Per}(\eta)\) with period \(p\geq 1\) and denote \(s=(u_{-1},u_{0})\). Let \(w\in\mathcal{L}(\mathcal{A}_{\eta,s})\). Then_ \[\mathcal{A}_{\eta,s}(w)=\begin{cases}\mathcal{A}_{\eta,s}(\mathtt{0}(\mathtt{W }_{\min})^{*}v)&\text{ if }w=\mathtt{0}v,\\ \mathcal{A}_{\eta,s}(\mathtt{1}(\mathtt{W}_{\max})^{*}v)&\text{ if }w=\mathtt{1}v. \end{cases}\] Proof: It holds that \(\mathcal{A}_{\eta,s}(u_{0},\mathtt{0}^{p})=u_{0}\) and \(\mathcal{A}_{\eta,s}(u_{-1},\mathtt{W}_{\max})=u_{-1}\). It is useful to pad words to a certain length using neutral words as follows using a pad function. Let \(s=(u_{-1},u_{0})\). Let \(w\in\mathcal{L}_{\ell p+1}(\mathcal{A}_{\eta,s})\) for some \(\ell\in\mathbb{N}\). Let \(t\in\mathbb{N}\) such that \(t\geq|w|\) and \(t\bmod p=1\). We define \[\operatorname{pad}_{t}(w)=\begin{cases}\mathtt{0}(\mathtt{W}_{\min})^{m}v& \text{ if }w=\mathtt{0}v,\\ \mathtt{1}(\mathtt{W}_{\max})^{m}v&\text{ if }w=\mathtt{1}v\end{cases}\] where \(m=(t-|w|)/p\). The padding map can be used to pad words so that they all have the same length. For instance, it allows to represent coordinates in \(\mathbb{Z}^{d}\) in dimension \(d\geq 1\). Definition 18 (Numeration system for \(\mathbb{Z}^{d}\)): Let \(\mathbf{n}=(n_{1},n_{2},\ldots,n_{d})\in\mathbb{Z}^{d}\). We define \[\operatorname{rep}_{u}(\mathbf{n})=\begin{pmatrix}\operatorname{pad}_{t}( \operatorname{rep}_{u}(n_{1}))\\ \operatorname{pad}_{t}(\operatorname{rep}_{u}(n_{2}))\\ \ldots\\ \operatorname{pad}_{t}(\operatorname{rep}_{u}(n_{d}))\end{pmatrix}\in\{ \texttt{0},\texttt{1}\}^{d}(\mathcal{D}^{d})^{*}\] where \(t=\max\{|\operatorname{rep}_{u}(n_{i})|\colon 1\leq i\leq d\}\). ## 8 A Total Order The radix order is a total order such that \(u<_{rad}v\) if and only if \(|u|<|v|\) or \(|u|=|v|\) and \(u<_{lex}v\). We define reversed-radix order as a total order such that \(u<_{rev}v\) if and only if \(|u|>|v|\) or \(|u|=|v|\) and \(u<_{lex}v\). We define a total order on \(\{\texttt{0},\texttt{1}\}\mathcal{D}^{*}\) and we show that \(\operatorname{rep}_{u}\) is an increasing bijection with respect to it. 
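As a quick aside (our illustration, not from the paper), the two auxiliary orders just introduced can be written as Python sort keys; Definition 19 below combines them into the order \(\prec\). For words over single-character digits, string comparison coincides with the lexicographic order used here.

```python
# Our sketch of the radix and reversed-radix orders as sort keys.
def radix_key(w):
    return (len(w), w)          # shorter words first, ties broken lexicographically

def reversed_radix_key(w):
    return (-len(w), w)         # longer words first, ties broken lexicographically

words = ["01", "001", "010", "0"]
print(sorted(words, key=radix_key))           # ['0', '01', '001', '010']
print(sorted(words, key=reversed_radix_key))  # ['001', '010', '01', '0']
```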
Definition 19 (total order \(\prec\)): For every \(u,v\in\{\texttt{0},\texttt{1}\}\mathcal{D}^{*}\), we define \(u\prec v\) if and only if * \(u\in\texttt{1}\mathcal{D}^{*}\) and \(v\in\texttt{0}\mathcal{D}^{*}\), or * \(u,v\in\texttt{0}\mathcal{D}^{*}\) and \(u<_{rad}v\), or * \(u,v\in\texttt{1}\mathcal{D}^{*}\) and \(u<_{rev}v\). Proposition 20: _Let \(\eta:A^{*}\to A^{*}\) be a substitution and let \(u\in\operatorname{Per}(\eta)\) with period \(p\geq 1\) and the seed \(s\). The map \(\operatorname{rep}_{u}:\mathbb{Z}\to\bigcup_{\ell\in\mathbb{N}}\mathcal{L}_{ \ell p+1}(\mathcal{A}_{\eta,s})\backslash\{\texttt{0W}_{\min},\texttt{1W}_{ \max}\}\mathcal{D}^{*}\) is an increasing bijection with respect to the order \(\prec\)._ The proof of Proposition 20 is ommited and will be included in an extended version of this article. It follows the proof of similar results, see [10, SS5] and [12, SS4]. In some other work on numeration systems, such increasing bijection is not a consequence, but rather a hypothesis. For example, a bijection \(\mathbb{N}\to\mathcal{L}\) serves as the definition of abstract numeration systems in [10]. ## 9 Examples Let \(\psi_{2}:a\mapsto ab,b\mapsto cb,c\mapsto ac\) denote some \(2\)-uniform substitution, \(\psi_{TM}:a\mapsto ab,b\mapsto ba\) denote the Thue-Morse, \(\varphi_{F}:a\mapsto ab,b\mapsto a\) the Fibonacci, and \(\varphi_{T}:a\mapsto ab,b\mapsto ac,c\mapsto a\) the Tribonacci substitution. Let also \(\rho:a\mapsto ac,b\mapsto cb,c\mapsto c\) be a non-primitive substitution. We denote \(\alpha\in\operatorname{Per}(\psi_{2})\) with the seed \(b.a\) and period \(1\), \(\beta\in\operatorname{Per}(\psi_{TM})\) with the seed \(a.a\) and period \(2\), \(\gamma,\delta\in\operatorname{Per}(\varphi_{F})\) with the seed \(b.a\), \(a.a\) and period \(2\), \(\tau\in\operatorname{Per}(\varphi_{T})\) with the seed \(c.a\) and period \(3\), and \(\xi\in\operatorname{Per}(\rho)\) with the seed \(b.a\) and period \(1\). The numeration systems derived from these periodic points are in Table 1. ### Two's complement numeration system Let \(\Sigma=\{\texttt{0},\texttt{1}\}\). In the two's complement representation of integers the value of a binary word \(w=w_{k-1}w_{k-2}\cdots w_{0}\in\Sigma^{k}\) is \(\mathrm{val}_{2c}(w)=\sum_{i=0}^{k-1}w_{i}2^{i}-w_{k-1}2^{k}\), see [10, SS4.1]. For every \(n\in\mathbb{Z}\) there exists a unique word \(w\in\Sigma^{+}\setminus(\texttt{00}\Sigma^{*}\cup\texttt{11}\Sigma^{*})\) such that \(n=\mathrm{val}_{2c}(w)\). The word \(w\) is called the _two's complement representation_ of the integer \(n\), and we denote it by \(\mathrm{rep}_{2c}(n)\). It can be shown that the map \(\mathrm{rep}_{2c}:\mathbb{Z}\to\Sigma^{+}\setminus(\texttt{00}\Sigma^{*}\cup \texttt{11}\Sigma^{*})\) is an increasing bijection with respect to the order \(\prec\). This implies the following. Proposition 21: _If \(\alpha=\psi(\alpha)\) is a two-sided fixed point for some \(2\)-uniform substitution \(\psi\), then \(\mathrm{rep}_{\alpha}=\mathrm{rep}_{2c}\)._ ### Fibonacci's complement numeration system In what follows, the Fibonacci sequence \((F_{n})_{n\geq 0}\), \(F_{n}=F_{n-1}+F_{n-2}\), for all \(n\geq 2\), is defined with the initial conditions \(F_{0}=1\), \(F_{1}=2\). We denote \(\Sigma=\{\texttt{0},\texttt{1}\}\). 
In [10], the Fibonacci's complement numeration system for both nonnegative and negative integers was defined from the value \(\mathrm{val}_{\mathcal{F}_{c}}:\Sigma^{*}\to\mathbb{Z}\) defined for every binary word \(w=w_{k-1}\cdots w_{0}\in\Sigma^{k}\) as \(\mathrm{val}_{\mathcal{F}_{c}}(w)=\sum_{i=0}^{k-1}w_{i}F_{i}-w_{k-1}F_{k}\). It is an analog of the two's complement value map \(\mathrm{val}_{2c}\), using Fibonacci numbers instead of powers of \(2\). It was proved in [10] that for every \(n\in\mathbb{Z}\) there exists a unique odd-length word \(w\in L=\Sigma(\Sigma\Sigma)^{*}\setminus(\Sigma^{*}11\Sigma^{*}\cup 000\Sigma^{*} \cup 101\Sigma^{*})\) such that \(n=\mathrm{val}_{\mathcal{F}_{c}}(w)\). It defines the map \(\mathrm{rep}_{\mathcal{F}_{c}}:\mathbb{Z}\to L\) by the rule \(n\mapsto w\). We can show that the numeration system obtained from the two-sided Fibonacci word is Fibonacci's complement numeration system introduced in [10]. \begin{table} \begin{tabular}{c||c|c|c|c|c|c} substitution & 2-uniform & Thue-Morse & Fibonacci & Fibonacci & Tribonacci & Inon-primitive \\ images & \((ab,cb,ac)\) & \((ab,ba)\) & \((ab,a)\) & \((ab,a)\) & \((ab,ac,a)\) & \((ac,cb,c)\) \\ seed & \(b.a\) & \(a.a\) & \(b.a\) & \(a.a\) & \(c.a\) & \(b.a\) \\ \hline \(n\) & \(\mathrm{rep}_{\alpha}(n)\) & \(\mathrm{rep}_{\beta}(n)\) & \(\mathrm{rep}_{\gamma}(n)\) & \(\mathrm{rep}_{\delta}(n)\) & \(\mathrm{rep}_{\tau}(n)\) & \(\mathrm{rep}_{\tau}(n)\) \\ \hline 10 & 01010 & 010100 & 00100100 & 00100101 & 0100000000 \\ 9 & 01001 & 01001 & 0010001 & 0010001 & 0001010 & 010000000 \\ 8 & 01000 & 01000 & 0010000 & 0010000 & 0001001 & 010000000 \\ 7 & 0111 & 00111 & 01010 & 01010 & 0001000 & 01000000 \\ 6 & 0110 & 00110 & 01001 & 01001 & 0110 & 0100000 \\ 5 & 0101 & 00101 & 01000 & 01000 & 0101 & 010000 \\ 4 & 0100 & 00100 & 00101 & 00101 & 0100 & 01000 \\ 3 & 011 & 011 & 00100 & 00100 & 0011 & 0100 \\ 2 & 010 & 010 & 010 & 010 & 0010 & 010 \\ 1 & 01 & 001 & 001 & 001 & 0001 & 01 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 1 & 1 & 1 & 1 & 1 & 1 \\ -2 & 10 & 110 & 100 & 101 & 1010 & 10 \\ -3 & 101 & 101 & 10010 & 100 & 1001 & 100 \\ -4 & 100 & 100 & 10001 & 10101 & 1000 & 1000 \\ -5 & 1011 & 11011 & 10000 & 10100 & 101010 & 10000 \\ -6 & 1010 & 11010 & 1001010 & 10010 & 1010101 & 100000 \\ -7 & 1001 & 11001 & 1001001 & 10001 & 1010100 & 1000000 \\ -8 & 1000 & 11000 & 1001000 & 100000 & 1010011 & 100000000 \\ -9 & 10111 & 10111 & 1000101 & 1010101 & 101001 & 10000000 \\ -10 & 10110 & 10110 & 1001000 & 10101001 & 10100000000 \\ \end{tabular} \end{table} Table 1: Numeration systems for periodic points \(\alpha\), \(\beta\), \(\gamma\), \(\delta\), \(\tau\), \(\xi\) with given seed. **Proposition 22**: _Let \(\varphi_{F}:a\mapsto ab,b\mapsto a\) be the Fibonacci substitution and let \(\gamma\in\operatorname{Per}(\varphi_{F})\) with the seed \(s=b.a.\) Then \(\operatorname{rep}_{\gamma}=\operatorname{rep}_{\mathcal{F}c}\)._ We observe \[\begin{split} L&=(\mathtt{0}(\mathcal{L}(\mathcal{A}_{ \varphi_{F},a})\setminus\mathtt{00}\Sigma^{*})\cup\mathtt{1}(\mathcal{L}( \mathcal{A}_{\varphi_{F},b})\setminus\mathtt{01}\Sigma^{*}))\cap\Sigma( \Sigma\Sigma)^{*}\\ &=\mathcal{L}(\mathcal{A}_{\eta,s})\setminus\{\mathtt{0W}_{\min}, \mathtt{1W}_{\max}\}\mathcal{D}^{*}\cap\Sigma(\Sigma\Sigma)^{*}\end{split} \tag{5}\] From [10], the map \(\operatorname{rep}_{\mathcal{F}c}\) is an increasing bijection \(\mathbb{Z}\to L\) with respect to the order \(\prec\). 
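As a small numerical sanity check of the two value maps defined above (our script, not part of the paper), one can evaluate \(\mathrm{val}_{2c}\) and \(\mathrm{val}_{\mathcal{F}c}\) on a few representative words taken from the \(\alpha\) and \(\gamma\) columns of Table 1.

```python
# Spot checks of val_2c and val_Fc against entries of Table 1 (our script).
def val_2c(w):
    bits = [int(c) for c in w]                      # w = w_{k-1} ... w_0
    return sum(b * 2**i for i, b in enumerate(reversed(bits))) - bits[0] * 2**len(bits)

def val_Fc(w):
    F = [1, 2]                                      # F_0 = 1, F_1 = 2, F_n = F_{n-1} + F_{n-2}
    while len(F) < len(w) + 1:
        F.append(F[-1] + F[-2])
    bits = [int(c) for c in w]
    return sum(b * F[i] for i, b in enumerate(reversed(bits))) - bits[0] * F[len(bits)]

assert val_2c("0101") == 5 and val_2c("1011") == -5     # alpha column (two's complement)
assert val_Fc("00100") == 3 and val_Fc("10010") == -3   # gamma column (Fibonacci complement)
assert val_Fc("1001001") == -7
print("Table 1 spot checks passed")
```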
By Proposition 20 and Equation (5), \(\operatorname{rep}_{\gamma}\) is an increasing bijection \(\mathbb{Z}\to L\) with respect to the order \(\prec\). Moreover, \(\operatorname{rep}_{\gamma}(0)=\mathtt{0}=\operatorname{rep}_{\mathcal{F}c}(0)\). Since there is a unique increasing bijection \(\mathbb{Z}\to L\) such that \(0\mapsto\mathtt{0}\), \(\operatorname{rep}_{\mathcal{F}c}=\operatorname{rep}_{\gamma}\).

#### Acknowledgements

This work was partially funded in France by ANR CODYS (ANR-18-CE40-0007) and ANR IZES (ANR-22-CE40-0011). The second author acknowledges financial support from the Barrande fellowship programme and the Grant Agency of Czech Technical University in Prague (SGS20/183/OHK4/3T/14).
2309.11853
BitCoin: Bidirectional Tagging and Supervised Contrastive Learning based Joint Relational Triple Extraction Framework
Relation triple extraction (RTE) is an essential task in information extraction and knowledge graph construction. Despite recent advancements, existing methods still exhibit certain limitations. They just employ generalized pre-trained models and do not consider the specificity of RTE tasks. Moreover, existing tagging-based approaches typically decompose the RTE task into two subtasks, initially identifying subjects and subsequently identifying objects and relations. They solely focus on extracting relational triples from subject to object, neglecting that once the extraction of a subject fails, it fails in extracting all triples associated with that subject. To address these issues, we propose BitCoin, an innovative Bidirectional tagging and supervised Contrastive learning based joint relational triple extraction framework. Specifically, we design a supervised contrastive learning method that considers multiple positives per anchor rather than restricting it to just one positive. Furthermore, a penalty term is introduced to prevent excessive similarity between the subject and object. Our framework implements taggers in two directions, enabling triples extraction from subject to object and object to subject. Experimental results show that BitCoin achieves state-of-the-art results on the benchmark datasets and significantly improves the F1 score on Normal, SEO, EPO, and multiple relation extraction tasks.
Luyao He, Zhongbao Zhang, Sen Su, Yuxin Chen
2023-09-21T07:55:54Z
http://arxiv.org/abs/2309.11853v1
BitCoin: Bidirectional Tagging and Supervised Contrastive Learning based Joint Relational Triple Extraction Framework ###### Abstract Relation triple extraction (RTE) is an essential task in information extraction and knowledge graph construction. Despite recent advancements, existing methods still exhibit certain limitations. They just employ generalized pre-trained models and do not consider the specificity of RTE tasks. Moreover, existing tagging-based approaches typically decompose the RTE task into two subtasks, initially identifying subjects and subsequently identifying objects and relations. They solely focus on extracting relational triples from subject\(\rightarrow\)object, neglecting that once the extraction of a subject fails, it fails in extracting all triples associated with that subject. To address these issues, we propose **BitCoin**, an innovative Bidirectional tagging and supervised Contrastive learning based joint relational triple extraction framework. Specifically, we design a supervised contrastive learning method that considers multiple positives per anchor rather than restricting it to just one positive. Furthermore, a penalty term is introduced to prevent excessive similarity between the subject and object. Our framework implements taggers in two directions, enabling triples extraction from subject\(\rightarrow\)object and object\(\rightarrow\)subject. Experimental results show that BitCoin achieves state-of-the-art results on the benchmark datasets and significantly improves the F1 score on Normal, SEO, EPO, and multiple relation extraction tasks. ## Introduction Relation triple extraction plays a crucial role in information extraction and knowledge graph construction, and it aims to extract entity pairs and their corresponding relations in the form of (subject, relation, object) from unstructured text. Early RTE approaches mainly relied on pipelined methods, which involve a two-step paradigm: extracting entities through named entity recognition (NER) and then identifying relationships through relationship classification [23, 24, 25, 26]. Despite its flexibility, this paradigm falls short in overlooking the correlation and interaction between subtasks and potential error propagation. To address these challenges, researchers have proposed end-to-end approaches that simultaneously model entity recognition and relation classification, which have achieved improved RTE performance [23, 22, 20, 21]. Overlapping triples present a challenge where a single entity or pair of entities participating in multiple relational triples in the same sentence [20]. Among these existing approaches, tagging-based joint extraction methods have shown superior performance and capability in handling overlapping triples [20, 21, 22, 23]. Despite the promising results achieved by existing joint extraction methods, they still suffer from two major shortcomings. Firstly, existing joint extraction methods just employ generalized pre-trained models and do not consider the specificity of RTE tasks and the importance of designing additional tasks, resulting in unreliable extraction results. Secondly, existing tagging-based approaches typically decompose RTE tasks into two subtasks, which identify subjects initially and subsequently identify objects and relations. They solely focus on extracting relational triples from subject\(\rightarrow\)object, neglecting that once the extraction of a subject fails, it fails in extracting all triples associated with that subject. 
These two issues significantly hamper the performance of RTE. To overcome the limitations of existing approaches and achieve improved results, we propose BitCoin, an end-to-end framework based on **Bidirectional tagging** and supervised **Co**ntrastive learning. The core idea is first to employ a contrastive learning method that considers multiple positives per anchor and adds a penalty term to prevent excessive similarity between subject and object. Then, we employ taggers in two directions, enabling the extraction of triples from subject\(\rightarrow\)object and object\(\rightarrow\)subject.

There are two main challenges here. Firstly, we aim to design a task that yields better features by making related entities closer and unrelated entities farther apart, and one way to achieve this is through contrastive learning. However, existing contrastive learning methods are not applicable in this scenario. The conventional InfoNCE loss is unsuitable for cases where multiple positives may exist, as one subject can have relationships with multiple objects. Also, the subject and object features may become excessively similar, making it difficult to distinguish them accurately, which results in incorrect triple extractions. Secondly, bidirectional extraction is not just about designing extractions in two directions but also about the interaction of information between the two directions during extraction.

To address the first challenge, we propose a novel supervised contrastive learning method. Firstly, we consider multiple positives per anchor rather than limiting ourselves to just one positive. These positives are selected from samples of the same class, in contrast to the data augmentation approach commonly used in self-supervised learning. We designate a subject as an anchor, consider all relevant objects as positives, and then designate all non-relevant entities as negatives. To augment the number of positives and negatives, we employ dropout as a minimal form of data augmentation. Secondly, we design a penalty term to prevent excessive similarity between subject and object. When the similarity between subject and object reaches a threshold, this penalty term prevents the similarity from growing further and stabilizes it around the threshold.

To address the second challenge, we design taggers in two directions, enabling the extraction of triples from subject\(\rightarrow\)object and object\(\rightarrow\)subject. During extraction, information from both directions can interact with each other. For example, when extracting objects in the s2o direction, we can draw on the information of objects in the o2s direction for more accurate extraction. The extracted triples can be cross-validated by the results of the two-way taggers, which means the extractions from the two directions complement each other. Also, we consider the fundamental properties of a triple, namely the interdependency and indivisibility of its entity pairs and relations. We may get unreliable results if we fail to fully utilize the relational information when extracting relevant objects and instead rely solely on the subject information. To adequately consider the information conveyed by the relation, we design a relationship prediction module to obtain relational features and combine them with sentence and entity features to produce reliable triples. This comprehensive input integration enables us to fully exploit the relational information within the sentence, facilitating easier and more accurate triple extraction.
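To make the loss idea above concrete before the formal definitions in Eqs. (1)-(3) of the Method section, the following is a minimal PyTorch-style sketch written by us purely for illustration; it is not the authors' implementation. The temperature `tau` and the weights `w1`, `w2` are assumed values, while the threshold 0.85 follows the implementation details reported later.

```python
# Our illustrative sketch of a supervised contrastive loss with multiple positives
# per anchor and a penalty that caps subject-object similarity at a threshold.
import torch
import torch.nn.functional as F

def supcon_with_penalty(anchor, positives, negatives, tau=0.07, beta=0.85, w1=1.0, w2=1.0):
    """anchor: (d,) subject embedding; positives: (P, d) related objects;
    negatives: (N, d) non-related entities."""
    anchor = F.normalize(anchor, dim=-1)
    pos = F.normalize(positives, dim=-1)
    neg = F.normalize(negatives, dim=-1)
    sim_pos, sim_neg = pos @ anchor, neg @ anchor          # cosine similarities
    sim_all = torch.cat([sim_pos, sim_neg])

    # Multiple positives per anchor: pull all related objects in, push the rest away.
    l1 = -torch.log(torch.exp(sim_pos / tau).sum() / torch.exp(sim_all / tau).sum()) / len(sim_pos)

    # Penalty: only active once the mean subject-object similarity exceeds beta,
    # which keeps subjects and objects from collapsing onto each other.
    strength = (sim_pos.mean() - beta).clamp(min=0.0)
    l2 = strength * (-torch.log(torch.exp(sim_neg / tau).sum() / torch.exp(sim_all / tau).sum()) / len(sim_neg))
    return w1 * l1 + w2 * l2

torch.manual_seed(0)
print(supcon_with_penalty(torch.randn(16), torch.randn(3, 16), torch.randn(5, 16)).item())
```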
Experimental results show that BitCoin achieves state-of-the-art results on the benchmark datasets and significantly improves the F1 score on Normal, SEO, EPO, and multiple relation extraction tasks. The main contributions of this work are as follows: * This is the first attempt to employ supervised contrastive learning with a penalty term for RTE tasks. * We propose a novel end-to-end bidirectional tagging and supervised contrastive learning based framework BitCoin. It significantly alleviates the problems of one-direction subject extraction failure and neglecting relationship information. * We evaluate our model on four public datasets, and the results indicate that our method outperforms all the state-of-the-art baselines, especially in complex scenarios. ## Related Works Researchers have proposed two kinds of methods for RTE: pipeline and joint learning. Traditional pipelined approaches initially perform named entity recognition to extract entities, followed by identifying relationships through relationship classification [23, 24, 25]. Although flexible, these methods neglect the interdependence between the two subtasks and are prone to error propagation, consequently undermining overall performance. To tackle this problem, several joint models that extract entities and relations jointly have been proposed. Among them, feature-based joint models require a process of constructing features manually. Neural network-based models have shown considerable improvement in both performance and efficacy in RTE tasks by replacing manual feature construction, which is a complicated process of feature-based joint models. Zheng et al. (2017) proposed a novel tagging schema that unified the role of the entity and the relation between entities and converted the task of RTE to an end-to-end sequence tagging problem. However, they ignored the problem of overlapping triples. To address this problem, various neural network-based joint extraction methods are proposed. Zeng et al. (2018) proposed a sequence-to-sequence model with copy mechanism to address the overlapping triples problem, although it struggles to generate multi-word entities. Fu, Li, and Ma (2019) also delved into the issue and devised a novel method based on Graph Convolutional Networks (GCNs). As an improvement, Nayak and Ng (2020) adopted an encoder-decoder model, where the decoder incrementally extracts words, just like machine translation models. Ye et al. (2021) propose a contrastive triple extraction method with a generative transformer. Wang et al. (2020) treated entity recognition and relation classification as a table-filling problem, where each entry represents the interaction between two words. Shang, Huang, and Mao (2022) proposed to frame the joint extraction task as a fine-grained triple classification problem, which can extract triples from sentences in a one-module one-step manner. Among these approaches, tagging-based joint extraction methods have shown superior performance and capability in handling overlapping triples. Wei et al. (2019) proposed a novel cascade binary tagging framework that models relations as functions that map subjects to objects rather than treating relations as discrete labels on entity pairs, achieving competitive results. It first identifies all possible subjects and then identifies the relevant objects under all relations for each subject. Zheng et al. 
(2021) proposed an end-to-end framework that decomposed joint extraction into three subtasks: relationship determination, entity extraction, and subject-object alignment. Experiments show that tagging-based joint extraction methods achieve competitive results and have a solid ability to extract triples from sentences that contain overlapping triples or multiple triples. ## Method The architecture of BitCoin is shown in Fig. 1. It consists of the following six components: encoder based on BERT and supervised contrastive learning, subject tagger, object tagger, relationship prediction module, relation-specific object tagger and relation-specific subject tagger. During training, we adopt a multi-task learning approach that allows each module to be trained with ground truth, resulting in a more reliable model. During the inference stage, BitCoin operates in three stages: 1) The encoder generates representations for each token. 2) The subjects, objects and potential relationships are identified through the subject tagger, object tagger, and relationship prediction module. 3) The relation-specific object and subject taggers are used to tag the relevant objects and subjects under all relationships based on extracted subjects, objects and potential relations. ### Contrastive Learning based Encoder The encoder module is designed to convert sentences into word embeddings. Here, we first encode the input sentence using a pre-trained BERT model [4] to generate the representations of each token in the sentence and then use our contrastive learning method to learn better features. We use the BERT-BASE-CASED model to ensure a fair comparison with other models, but it is theoretically possible to use other pre-trained models, such as Roberta and BART, _etc_. In the knowledge graph, the ideal situation is that entities that are related are close to each other and entities that are not related are far apart. One way to achieve this is through contrastive learning. However, existing self-supervised contrastive learning methods are not applicable in this scenario, as one entity may have relations with multiple entities. Also, if we adopt the previous idea of comparative learning that pulls in the positives and pushes out the negatives, the subject and object features may become excessively similar, making it difficult to distinguish them accurately, resulting in incorrect triple extractions. To address these problems and obtain better features from the encoder for the RTE task, we have designed a supervised contrastive learning method with a penalty term. After generating the token representations through BERT, we use this contrastive learning method to train the BERT model further to obtain an encoder better suited for the RTE task. The idea of our supervised contrastive learning method is easy to understand. Firstly, we design a new loss function, as the self-supervised contrastive loss is incapable of handling the case where more than one sample is known to belong to the same class. Here, we set a subject as an anchor, all relevant objects as positives, and all non-relevant entities as negatives. Tab. 1 shows an example of positives and negatives. To increase the negatives and positives, we use dropout noise as the minimum form of data augmentation [1] by inputting the sentence to the encoder twice and getting the embeddings of subjects and objects with different dropout masks. 
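As a rough illustration of the "encode the sentence twice with dropout" augmentation just described, one might write the following sketch (ours, not the released code); the mean-pooling over the entity span and the hand-picked token span are simplifying assumptions.

```python
# Our sketch: two dropout-noised views of one entity embedding from a BERT encoder.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")
encoder.train()  # keep dropout active so the two passes give different embeddings

def entity_views(sentence, span, n_views=2):
    """Return n_views dropout-noised embeddings of the token span (start, end)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    views = []
    for _ in range(n_views):
        h = encoder(**inputs).last_hidden_state[0]          # (seq_len, 768)
        views.append(h[span[0]:span[1] + 1].mean(dim=0))    # one view of the entity
    return views

views = entity_views("Tom was born in New York at 2000.", (1, 1))  # span of "Tom"
print([tuple(v.shape) for v in views])
```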
Our loss function is structurally similar to the self-supervised contrastive learning loss, and the basic idea is both to pull in the positives and push out the negatives. The difference is that we consider many positives per anchor instead of only one positive, which can handle the situation where one subject may have multiple relevant objects.

\begin{table}
\begin{tabular}{l|c|c}
\hline \hline
Input & Tom was born in New York at 2000. & London is the capital of England. \\
\hline
Triples & (Tom, birth\_place, New York) & (London, capital\_of, England) \\
 & (Tom, birth\_date, 2000) & (London, belong\_to, England) \\
\hline
Anchor & Tom & London \\
\hline
Positives & New York, 2000 & England \\
\hline
Negatives & London, England & New York, 2000 \\
\hline \hline
\end{tabular}
\end{table} Table 1: Example of positives and negatives in supervised contrastive learning.

Figure 1: The overview structure of BitCoin. In this example, given a sentence, BitCoin detects two candidate subjects, three candidate objects and two potential relations. Then for each candidate subject, the relation-specific object tagger will extract relevant objects under all relations. The relation-specific subject tagger extracts relevant subjects under all relations for each candidate object. Finally, we take the union set of two-direction results.

The supervised contrastive loss is computed based on the embeddings of subjects and objects, and the loss function is shown in Eq. (1). \[\mathcal{L}_{1}^{i}=\frac{-1}{|P(i)|}\log\frac{\sum_{p=1}^{P(i)}e^{sim\left(\boldsymbol{h_{i}},\boldsymbol{h_{i}^{p}}\right)/\tau}}{\sum_{a=1}^{A(i)}e^{sim\left(\boldsymbol{h_{i}},\boldsymbol{h_{i}^{a}}\right)/\tau}}, \tag{1}\] where \(P(i)\) is the set of all positives, and \(|P(i)|\) is its cardinality. \(A(i)\) is the set of all negatives and positives, \(h_{i}\) is the representation of the anchor, \(\tau\) is the temperature hyperparameter and \(sim(\boldsymbol{h_{1}},\boldsymbol{h_{2}})\) is the cosine similarity \(\frac{\boldsymbol{h_{1}}^{\top}\boldsymbol{h_{2}}}{\|\boldsymbol{h_{1}}\|\cdot\|\boldsymbol{h_{2}}\|}\).

Secondly, supposing we only use Eq. (1) as our loss function, we will keep drawing the subject closer to the relevant objects during training, making it hard to distinguish the subjects from the objects and resulting in incorrect triples. To solve this problem, we design a penalty term to prevent excessive similarity between the subject and object. When the similarity between the subject and the object reaches a threshold, this penalty term prevents the similarity from growing further and stabilizes it around the threshold. The penalty term is obtained by multiplying the following two terms, where the first term determines the strength of the penalty and the second term determines how the penalty is calculated. For the first term, we compute the difference between the subject-object similarity and the threshold: the larger the difference, the stronger the penalty. The second term is structurally similar to Eq. (1), but with the idea reversed: it pulls in the non-relevant objects and pushes out the relevant objects. The penalty term is shown in Eq. (2), and the total loss function for supervised contrastive learning is shown in Eq. (3).
\[\begin{split}\mathcal{L}_{2}^{i}=\left[-&\Big{(}\frac{1}{|P(i)|}\sum_{p=1}^{P(i)}sim\left(\boldsymbol{h_{i}},\boldsymbol{h_{i}^{p}}\right)-\beta\Big{)}\right.\\ &\times\frac{1}{|N(i)|}\log\frac{\sum_{n=1}^{N(i)}e^{sim(\boldsymbol{h_{i}},\boldsymbol{h_{i}^{n}})/\tau}}{\sum_{a=1}^{A(i)}e^{sim(\boldsymbol{h_{i}},\boldsymbol{h_{i}^{a}})/\tau}}\Big{]}_{+},\end{split} \tag{2}\] \[\mathcal{L}_{c}=\sum_{i=1}^{I}\mathcal{L}_{c}^{i}=\sum_{i=1}^{I}\left(\omega_{1}\mathcal{L}_{1}^{i}+\omega_{2}\mathcal{L}_{2}^{i}\right), \tag{3}\] where \(N(i)\) is the set of all negatives, \(|N(i)|\) is its cardinality and \(\beta\) is the threshold.

### Bidirectional Tagging based Decoder

BitCoin is a bi-directional framework that extracts the relational triples in two directions: (1) extracting the subject, followed by both the relation and the object (s2o); and (2) extracting the object, followed by both the relation and the subject (o2s). The model structure is identical in both directions, and we will only present the direction from subject\(\rightarrow\)object here.

**Subject Tagger.** The Subject Tagger extracts all subjects by decoding the representations \(h\) obtained from the encoder. Since the subjects, relations and objects in a triple have unique characteristics, we do not use the same features \(h\) as CasRel, TPLinker and PRGC do, but use different features for each. Here, after obtaining the representations of each token from the input sentences, we can get the subject-specific representations through a fully-connected layer, denoted as \(\boldsymbol{h}_{sub}^{i}\), which is computed with Eq. (4). \[\boldsymbol{h}_{sub}^{i}=\boldsymbol{W}_{sub}\boldsymbol{h}^{i}+\boldsymbol{b}_{sub} \tag{4}\] where \(\boldsymbol{W}_{sub}\in\mathbb{R}^{d_{h}\times d_{h}}\) is a trainable matrix, \(\boldsymbol{b}_{sub}\in\mathbb{R}^{d_{h}}\) is a bias vector and \(\boldsymbol{d}_{h}\) is the dimension of \(\boldsymbol{h}_{i}\). The Subject Tagger comprises two binary tagging modules that use a simple \((0/1)\) token to indicate whether the token is the start or end position of the subject. For each token, two separate probability formulas are used to calculate the probability that it is the start or end position of the subject. If the probability is higher than a threshold, the corresponding token position is tagged as \(1\), indicating that the token is the start (or end) position of a subject; if it is lower, it is tagged as \(0\). As there may be more than one subject in a sentence, we consider the closest starting and ending positions as one subject. The probabilities of the start and end positions of the subject are calculated by Eqs. (5) and (6). \[p_{sub\_start}^{i} =\sigma\left(\boldsymbol{W}_{sub\_start}\boldsymbol{h}_{sub}^{i}+\boldsymbol{b}_{sub\_start}\right) \tag{5}\] \[p_{sub\_end}^{i} =\sigma\left(\boldsymbol{W}_{sub\_end}\boldsymbol{h}_{sub}^{i}+\boldsymbol{b}_{sub\_end}\right) \tag{6}\] where \(p_{sub\_start}^{i}\) and \(p_{sub\_end}^{i}\) represent the probabilities of identifying the \(i\)-th token in the input sequence as the start and end position of a subject respectively. \(\boldsymbol{h}_{sub}^{i}\) is the encoded representation of the \(i\)-th token with the subject feature in the input sequence and \(\sigma\) is the sigmoid activation function.

**Relation Prediction Module.** To obtain the relation-specific representations, we pass the representation of each token obtained from the encoder through a fully-connected layer, as shown in Eq. (7).
\[\boldsymbol{h}_{rel}^{i}=\boldsymbol{W}_{rel}\boldsymbol{h}^{i}+\boldsymbol{b}_{rel} \tag{7}\] where \(\boldsymbol{W}_{rel}\in\mathbb{R}^{d_{h}\times d_{h}}\) is a trainable matrix, \(\boldsymbol{b}_{rel}\in\mathbb{R}^{d_{h}}\) is a bias vector and \(\boldsymbol{d}_{h}\) is the dimension of \(\boldsymbol{h}_{i}\). After obtaining the relation-specific representations, the Relation Prediction Module performs an Avgpool operation on these representations to obtain the representation of the input sentence. This representation is then used to extract all possible relations. Precisely, we can calculate the probability of each relation being present using a probability formula based on the average pooled representation of the input sentence. The corresponding relation is tagged as \(1\) if the probability is higher than a threshold. The potential relations in the input sentence are obtained from Eqs. (8) and (9). \[\boldsymbol{h}_{rel}^{\text{avg}} =\text{Avgpool}\left(\boldsymbol{h}_{rel}\right)\in\mathbb{R}^{d_{h}\times 1} \tag{8}\] \[p_{rel} =\sigma\left(\boldsymbol{W}_{rel}\boldsymbol{h}_{rel}^{avg}+\boldsymbol{b}_{rel}\right) \tag{9}\] where \(p_{rel}\) represents the probability of identifying each relation in the input sequence and \(\boldsymbol{d}_{r}\) is the number of relations.

**Relation-Specific Object Tagger.** The Relation-Specific Object Tagger extracts objects under all relations. Unlike the Subject Tagger, which uses the encoder output directly, the Relation-Specific Object Tagger also considers the subject tagged by the subject tagger, the relations predicted by the relation prediction module and the object-specific representations from the object tagger in the other direction. This allows us to extract relations and objects simultaneously, which can solve the EPO and SEO problems. The module consists of \(2\times num_{rels}\) binary tagging modules. Similar to the subject tagger, we calculate the probability that each token is the start or end position of the corresponding object under each relation using two probability formulas, respectively. If the probability exceeds a threshold, the position corresponding to that token is tagged as 1; otherwise, it is tagged as 0. The probabilities of the start and end positions of the object under each relation are calculated by Eqs. (11) and (12). \[\mathbf{h}^{i}_{rel\_obj} =\mathbf{W}_{obj}\left(\left(\mathbf{h}^{i}_{obj}+\mathbf{v}^{n}_{sub}\right)\oplus\mathbf{p}_{rel}\right)+\mathbf{b}_{obj} \tag{10}\] \[p^{i}_{obj\_start} =\sigma\left(\mathbf{W}^{r}_{obj\_start}\left(\mathbf{h}^{i}_{rel\_obj}\right)+\mathbf{b}^{r}_{obj\_start}\right) \tag{11}\] \[p^{i}_{obj\_end} =\sigma\left(\mathbf{W}^{r}_{obj\_end}\left(\mathbf{h}^{i}_{rel\_obj}\right)+\mathbf{b}^{r}_{obj\_end}\right) \tag{12}\] where \(p^{i}_{obj\_start}\) and \(p^{i}_{obj\_end}\) represent the probabilities of identifying the \(i\)-th token as the start and end position of an object respectively. \(\mathbf{h}^{i}_{obj}\) is the representation of the \(i\)-th token with the object feature, \(\mathbf{v}^{n}_{sub}\) is the encoded representation of the \(n\)-th subject extracted by the subject tagger and \(\oplus\) means concatenating two tensors.

### Training Strategy

All sub-modules of this framework work in a multi-task learning manner. In this way, each sub-module has its own loss function. The model is trained jointly by optimizing the combined objective function during training.
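All of the taggers above share the same pattern: a feature projection followed by two sigmoid start/end heads (Eqs. (4)-(6) and (10)-(12)), decoded by pairing each predicted start with the nearest following end. Before turning to the individual loss terms, here is a small PyTorch sketch of that pattern written by us purely for illustration; the hidden size and the 0.5 threshold are consistent with the text, but everything else (names, random input) is an assumption, not the released implementation.

```python
# Our illustrative sketch of the shared "project + start/end heads" tagger pattern.
import torch
import torch.nn as nn

class BinaryTagger(nn.Module):
    def __init__(self, d_h=768):
        super().__init__()
        self.proj = nn.Linear(d_h, d_h)   # e.g. Eq. (4): task-specific features
        self.start = nn.Linear(d_h, 1)    # e.g. Eq. (5)
        self.end = nn.Linear(d_h, 1)      # e.g. Eq. (6)

    def forward(self, h):                 # h: (seq_len, d_h) token encodings
        z = self.proj(h)
        return torch.sigmoid(self.start(z)).squeeze(-1), torch.sigmoid(self.end(z)).squeeze(-1)

def decode_spans(p_start, p_end, threshold=0.5):
    """Pair each predicted start with the closest following predicted end."""
    starts = (p_start > threshold).nonzero(as_tuple=True)[0].tolist()
    ends = (p_end > threshold).nonzero(as_tuple=True)[0].tolist()
    return [(s, min(e for e in ends if e >= s)) for s in starts if any(e >= s for e in ends)]

tagger = BinaryTagger(d_h=32)
p_s, p_e = tagger(torch.randn(10, 32))
print(decode_spans(p_s, p_e))
```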
The loss functions for the three modules mentioned above are denoted as \(\mathcal{L}_{sub\_head}\), \(\mathcal{L}_{sub\_tail}\), \(\mathcal{L}_{rel}\), \(\mathcal{L}_{rel\_obj\_head}\) and \(\mathcal{L}_{rel\_obj\_tail}\), and they all utilize the binary cross-entropy loss function, as shown in Eqs. (13) to (16). \[\mathrm{BCE}\left(\hat{y},y\right) =-[ylog\left(\hat{y}\right)+\left(1-y\right)\log\left(1-\hat{y} \right)] \tag{13}\] \[\mathcal{L}_{s\_(h,t)} =\frac{1}{l}\sum_{i=1}^{l}\mathrm{BCE}\left(p^{i}_{s\_(h,t)},y^{i }_{s\_(h,t)}\right)\] (14) \[\mathcal{L}_{rel\_o,o\left(h,t\right)} =\frac{1}{l}\sum_{i=1}^{l}\mathrm{BCE}\left(p^{i}_{o\left(h,t \right)},y^{i}_{o\left(h,t\right)}\right)\] (15) \[\mathcal{L}_{rel} =\frac{1}{r}\sum_{i=1}^{r}\mathrm{BCE}\left(p^{i}_{rel},y^{i}_{ rel}\right) \tag{16}\] where \(\mathrm{BCE}\left(\hat{y},y\right)\) is a binary cross entropy loss, \(\hat{y}\in\left(0,1\right)\) is the calculated probability and y is the true label. l is the number of tokens in the input sentence, and r is the number of relations. Similar to the subject-to-object direction, there are four tagger losses in the object-to-subject direction, which are denoted as \(\mathcal{L}_{obj\_head}\), \(\mathcal{L}_{obj\_tail}\), \(\mathcal{L}_{rel\_sub\_head}\) and \(\mathcal{L}_{rel\_sub\_tail}\), and are calculated by a formula similar to Eqs. (14) to (16). The total loss is the sum of these parts, and it can be expressed by the following equation. \[\mathcal{L}_{total}=\mathcal{L}_{\left(s,o\right)\_(h,t)}+\mathcal{L}_{rel\_(s, o)\_(h,t)}+\mathcal{L}_{rel}+\mathcal{L}_{c} \tag{17}\] ## Experiment ### Datasets Following previous works [23, 24, 25], we evaluate our framework on datasets NYT [11] and WebNLG [12]. NYT and WebNLG have two different versions: one version only annotates the last word of entities, and the other annotates the whole span of entities. We denote the datasets based on the first standard as NYT* and WebNLG* and those based on the second standard as NYT and WebNLG. According to the different patterns of relational triple overlap, we classify the sentences into Normal, Entity-Pair-Overlap (EPO) and Single-Entity-overlap (SEO) classes to further study the capability of the proposed BitCoin in handling complex scenarios. Detailed statistics of the two datasets are described in Tab. 2. Evaluation MetricsThe standard micro precision, recall, and F1 score are used to evaluate the results. There are two match standards for the RTE task: one is Partial Match that an extracted triple is regarded as correct if the predicted relation and the head of both the subject and object are correct, and the other is Exact Match that a triple is regarded as correct only when its entities and relationships are completely matched with a correct triple. We follow previous work [23, 24, 25, 26, 27] and use Partial Match on NYT* and WebNLG*, use Exact Match on NYT and WebNLG. #### Implementation Details In our experiments, all training process is completed on a workstation with an Intel(R) 4210R 2.40G CPU, 128G memory, a single RTX 3090 GPU, and Ubuntu 20.04. We use a small batch mechanism to train the model, with 4, 6, and 8 batch sizes. The learning rate is a linear warmup, the maximum learning rate is set to 1e-5, and the warmup step is set to the first quarter of the epoch. The threshold for judging whether there is a subject, an object, or a relation is set to 0.5-0.6. The threshold for supervised contrastive learning in Eq. (2) is set to 0.85. 
The pre-trained BERT model is \([BERT-base,cased]\), which contains 12 Transformer blocks and 110M parameters, and the hidden size d is 768. All parameters are optimized by the Adam algorithm [10]. The dropout probability is 0.1. For a fair comparison, the maximum length of our model input sentences is set to 100 words, as suggested in previous works [24, 25]. We also use an early stopping mechanism to prevent the overfitting of the model. Specifically, we stop the training process when the performance on the validation set does not obtain any improvement for at least ten consecutive cycles. All involved hyperparameters are determined based on the results of the development set. Other parameters are initialized randomly.

**Baselines.** We compare BitCoin with 13 strong state-of-the-art baseline models: ETL-Span (Yu et al., 2019), CasRel (Wei et al., 2019), RIN (Sun et al., 2020), TPLinker (Wang et al., 2020), CGT (Ye et al., 2021), CasDE (Ma, Ren, and Zhang, 2021), RIFRE (Zhao et al., 2021), StereoRel (Tian et al., 2021), PRGC (Zheng et al., 2021), R-BPtrNet (Chen et al., 2021), BiRTE (Ren et al., 2022), OneRel (Shang, Huang, and Mao, 2022) and SPAN (Sui et al., 2023). Most results of these baselines are copied from their original papers directly.

### Experimental Results

**Overall Results.** Tab. 3 shows the comparison results of our model against the 13 baselines on NYT and WebNLG in terms of Partial Match and Exact Match. Our BitCoin method outperforms them on almost all evaluation metrics. Especially on WebNLG*, BitCoin obtains the best performance in terms of all three evaluation metrics. These results verify our motivation. We attribute the outstanding performance of BitCoin to its two advantages: Firstly, BitCoin uses the novel supervised contrastive learning method with a penalty term to make the pre-trained model more suitable for the RTE task. We can get better features from the encoder through this method. Secondly, we design taggers in two directions, and we can extract triples from subject\(\rightarrow\)object and from object\(\rightarrow\)subject. The two directions complement each other, and the extraction results can be cross-validated. Also, we notice the fundamental property of a triple, which is the interdependency and indivisibility of its entity pairs and relations. We design the relation prediction module to fully exploit the relational information within the sentence, facilitating easier and more accurate triple extraction. Compared with the tagging-based method CasRel, which inspired us to treat relations as functions mapping subjects to objects instead of treating relations as discrete labels on entity pairs, BitCoin achieves 3.5 and 2.6 absolute gains in F1-score on NYT* and WebNLG*, respectively. Such results confirm that the contrastive learning and the bidirectional framework are effective for RTE tasks.

**Detailed Results on Complex Scenarios.** To verify the ability of our method in complex scenarios, we evaluate BitCoin's ability to extract triples from sentences that contain overlapping triples and multiple triples. This ability is widely discussed in existing models and is an important metric for evaluating the robustness of a model.
For a fair comparison, we follow the settings of some previous models that classify sentences according to the degree of overlapping and the number of triples contained in a sentence and conduct two extended experiments on different subsets \begin{table} \begin{tabular}{l|c c c c|c c c c c c c c} \hline \hline \multirow{2}{*}{Category} & \multicolumn{4}{c|}{Dataset} & \multicolumn{4}{c}{Details of Test Set} \\ & Train & Valid & Test & Relation & Normal & SEO & EPO & N=1 & N=2 & N=3 & N=4 & N\(\geq\)5 & Triples \\ \hline \hline NYT* & 56,195 & 4999 & 5000 & 24 & 3,266 & 1,297 & 978 & 3,244 & 1,045 & 312 & 291 & 108 & 8,110 \\ WebNLG* & 5,019 & 500 & 703 & 171 & 245 & 457 & 26 & 266 & 171 & 131 & 90 & 45 & 1,591 \\ NYT & 5,6195 & 5,000 & 5,000 & 24 & 3,222 & 1,273 & 969 & 3,240 & 1,047 & 314 & 290 & 109 & 8,120 \\ WebNLG & 5,019 & 500 & 703 & 216 & 239 & 448 & 6 & 256 & 175 & 138 & 93 & 41 & 1607 \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of datasets. Note that a sentence can belong to both EPO class and SEO class. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{6}{c}{Partial Match} & \multicolumn{6}{c}{Exact Match} \\ \cline{2-13} & \multicolumn{3}{c}{NYT*} & \multicolumn{3}{c}{WebNLG*} & \multicolumn{3}{c}{NYT} & \multicolumn{3}{c}{WebNLG} \\ \cline{2-13} & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 \\ \hline \hline ETL-Span (Yu et al., 2019) & 84.9 & 72.3 & 78.1 & 84.0 & 91.5 & 87.6 & 85.5 & 71.7 & 78.0 & 84.3 & 82.0 & 83.1 \\ CasRel (Wei et al., 2019) & 89.7 & 89.5 & 89.6 & 93.4 & 90.1 & 91.8 & - & - & - & - & - \\ RIN (Sun et al., 2020) & 87.2 & 87.3 & 87.3 & 87.6 & 87.0 & 87.3 & 83.9 & 85.5 & 84.7 & 77.3 & 76.8 & 77.0 \\ TPLinker (Wang et al., 2020) & 91.3 & 92.5 & 91.9 & 91.8 & 92.0 & 91.9 & 91.4 & 92.6 & 92.0 & 88.9 & 84.5 & 86.7 \\ SPAN (Sui et al., 2023) & 93.3 & 91.7 & 92.5 & 93.1 & 93.6 & 93.4 & 92.5 & 92.2 & 92.3 & - & - & - \\ CGT (Ye et al., 2021) & **94.7** & 84.2 & 89.1 & 92.9 & 75.6 & 83.4 & - & - & - & - & - & - \\ CasDE (Ma, Ren, and Zhang, 2021) & 90.2 & 90.9 & 90.5 & 90.3 & 91.5 & 90.9 & 89.9 & 91.4 & 90.6 & 88.0 & 88.9 & 88.4 \\ RIFRE (Zhao et al., 2021) & 93.6 & 90.5 & 92.0 & 93.3 & 92.0 & 92.6 & - & - & - & - & - & - \\ StereoRel (Tian et al., 2021) & 92.0 & 92.3 & 92.2 & 91.6 & 92.6 & 92.1 & 92.0 & 92.3 & 92.2 & - & - & - \\ PRGC (Zheng et al., 2021) & 93.3 & 91.9 & 92.6 & 94.0 & 92.1 & 93.0 & 93.5 & 91.9 & 92.7 & 89.9 & 87.2 & 88.5 \\ R-BPtrNet (Chen et al., 2021) & 92.7 & 92.5 & 92.6 & 93.7 & 92.8 & 93.3 & - & - & - & - & - & - \\ BiRTE (Ren et al., 2022) & 92.2 & **93.8** & 93.0 & 93.2 & 94.0 & 93.6 & **93.7** & 92.8 & 89.0 & 89.5 & 89.3 \\ OneRel (Shang, Huang, and Mao, 2022) & 92.8 & 92.9 & 92.8 & 94.1 & 94.4 & 94.3 & **93.2** & 92.6 & **92.9** & 91.8 & 90.3 & 91.0 \\ \hline \hline BitCoin (ours) & 92.9 & 93.3 & **93.1** & **94.4** & **94.5** & **94.4** & 93.1 & 92.6 & 92.8 & **91.9** & **90.5** & **91.2** \\ \hline \hline \end{tabular} \end{table} Table 3: Precision(%), Recall (%) and F1-score (%) of our proposed BitCoin and baselines. of NYT* and WebNLG*. We select five powerful models as baselines, and the detailed results are shown in Tab. 4. It can be observed that BitCoin has excellent superiority in handling complex sentences and achieves the best F1-score on 10 of the 16 subsets. Moreover, BitCoin achieves more performance improvement when handling the sentences of the SEO class. 
This is mainly because a single entity in an SEO sentence may be associated with multiple triples. Thus, the existing models are more likely to suffer from the problem that once the extraction of an entity in some SEO triples fails, all the associated triples of this entity would not be extracted either. However, the bidirectional framework in BitCoin can effectively overcome such a deficiency, and the mentioned issue barely affects it when handling SEO sentences. This is also why BitCoin performs well on sentences containing multiple triples. In general, these two further experiments adequately show the advantages of our model in complex scenarios.

**Ablation Study.** Here we conduct a detailed ablation study of BitCoin, and the results are shown in Tab. 5. First, we evaluate the effectiveness of supervised contrastive learning. To this end, we implement a variant of BitCoin without the supervised contrastive learning method. Tab. 5 shows that the performance drops on all datasets, indicating that our supervised contrastive learning is suitable for RTE tasks and that we can get better word embeddings through this method. Additionally, Tab. 5 shows that the performance of the one-directional tagging framework is much better than the CasRel model, which again indicates the effectiveness of our supervised contrastive learning method. Second, we evaluate the effectiveness of the bidirectional tagging framework. To this end, we implement the following two variants of BitCoin: (1) \(\mathrm{BitCoin_{s2o}}\), a variant that only uses the subject\(\rightarrow\)object direction to extract triples; (2) \(\mathrm{BitCoin_{o2s}}\), a variant that only uses the object\(\rightarrow\)subject direction to extract triples. Tab. 5 shows that the performance of both variants drops on all datasets, which shows the superiority of the proposed bidirectional tagging framework. In particular, we take the union set of triples extracted from the two directions as the final result. Both variants achieve lower precisions and recalls, which indicates that the information from both directions can interact with each other, and the extracted triples can be cross-validated by the results of the two-way taggers. Third, we evaluate the effectiveness of the relation prediction module. To this end, we implement a variant of BitCoin that neglects the potential relations obtained from the relationship prediction module. Tab. 5 shows that the performance drops on all datasets, which indicates that entity pairs and relations are interdependent and indivisible, and that we can obtain reliable results by fully utilizing the relational information present in the sentence during the extraction.

## Conclusion

In this paper, we propose a novel bidirectional tagging and supervised contrastive learning based joint model named BitCoin for RTE tasks. BitCoin provides a general contrastive learning method that considers multiple positives per anchor and designs a penalty term to prevent excessive similarity between subject and object. Different from existing methods, taggers in our method are conducted in two directions, enabling the extraction of triples from subject\(\rightarrow\)object and object\(\rightarrow\)subject. Extensive experiments on four widely used datasets demonstrate the effectiveness of our method.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{8}{c}{NYT*} & \multicolumn{8}{c}{WebNLG*} \\ \cline{2-17} & Normal & SEO & EPO & N=1 & N=2 & N=3 & N=4 & N\(\geq\)5 & Normal & SEO & EPO & N=1 & N=2 & N=3 & N=4 & N\(\geq\)5 \\ \hline \hline CasRel & 87.3 & 91.4 & 92.0 & 88.2 & 90.3 & 91.9 & 94.2 & 83.7 & 89.4 & 92.2 & 94.7 & 89.3 & 90.8 & 94.2 & 92.4 & 90.9 \\ TPLinker & 90.1 & 93.4 & 94.0 & 90.0 & 92.8 & 93.1 & 96.1 & 90.0 & 87.9 & 92.5 & 95.3 & 88.0 & 90.1 & 94.6 & 93.3 & 91.6 \\ SPN & 90.8 & 94.0 & 94.1 & 90.9 & 93.4 & 94.2 & 95.5 & 90.6 & - & - & - & 89.5 & 91.3 & 96.4 & 94.7 & 93.8 \\ PRGC & 91.0 & 94.0 & 94.5 & 91.1 & 93.0 & 93.5 & 95.5 & 93.0 & 90.4 & 93.6 & 95.9 & 89.9 & 91.6 & 95.0 & 94.8 & 92.8 \\ OneRel & 90.6 & 94.8 & 95.1 & 90.5 & 93.4 & 93.9 & 96.5 & 94.2 & 91.9 & 94.7 & 95.4 & 91.4 & 93.0 & 95.9 & 95.7 & 94.5 \\ \hline \hline BitCoin & **91.1** & **95.0** & 94.9 & **91.2** & **93.6** & 93.8 & 95.9 & **94.4** & **92.0** & **94.9** & 95.7 & 91.1 & **93.4** & 95.7 & **95.7** & **94.7** \\ \hline \hline \end{tabular} \end{table} Table 4: F1-score (%) on sentences with different overlapping patterns and different triple numbers.

\begin{table} \begin{tabular}{l|l c c c} \hline \hline \multicolumn{2}{c}{Model} & Prec. & Rec. & F1 \\ \hline \hline \multirow{5}{*}{NYT*} & **BitCoin** & **92.8** & **93.4** & **93.1** \\ & - Contrastive Learning & 92.7 & 92.5 & 92.6 \\ & - Direction from o2s & 92.3 & 92.2 & 92.4 \\ & - Direction from s2o & 91.6 & 92.7 & 92.2 \\ & - Relation Prediction & 92.5 & 92.3 & 92.4 \\ \hline \multirow{5}{*}{WebNLG*} & **BitCoin** & **94.4** & **94.5** & **94.4** \\ & - Contrastive Learning & 93.9 & 93.2 & 93.6 \\ & - Direction from o2s & 93.7 & 92.3 & 93.0 \\ & - Direction from s2o & 94.0 & 91.7 & 92.8 \\ & - Relation Prediction & 94.1 & 94.1 & 94.1 \\ \hline \multirow{5}{*}{NYT} & **BitCoin** & **93.1** & **92.6** & **92.8** \\ & - Contrastive Learning & 92.9 & 92.1 & 92.5 \\ & - Direction from o2s & 90.9 & 91.9 & 91.4 \\ & - Direction from s2o & 91.6 & 91.3 & 91.5 \\ & - Relation Prediction & 92.6 & 92.4 & 92.5 \\ \hline \multirow{5}{*}{WebNLG} & **BitCoin** & **91.9** & **90.5** & **91.2** \\ & - Contrastive Learning & 90.1 & 88.5 & 89.3 \\ & - Direction from o2s & 90.6 & 87.8 & 89.1 \\ & - Direction from s2o & 90.8 & 86.4 & 88.6 \\ & - Relation Prediction & 91.7 & 87.5 & 89.5 \\ \hline \hline \end{tabular} \end{table} Table 5: Precision (%), Recall (%) and F1-score (%) for the ablation study of BitCoin.
2309.10595
Kähler-Einstein Bergman metrics on pseudoconvex domains of dimension two
We prove that a two dimensional pseudoconvex domain of finite type with a K\"ahler-Einstein Bergman metric is biholomorphic to the unit ball. This answers an old question of Yau for such domains. The proof relies on asymptotics of derivatives of the Bergman kernel along critically tangent paths approaching the boundary, where the order of tangency equals the type of the boundary point being approached.
Nikhil Savale, Ming Xiao
2023-09-19T13:07:32Z
http://arxiv.org/abs/2309.10595v1
# Kahler-Einstein Bergman metrics on pseudoconvex domains of dimension two ###### Abstract. We prove that a two dimensional pseudoconvex domain of finite type with a Kahler-Einstein Bergman metric is biholomorphic to the unit ball. This answers an old question of Yau for such domains. The proof relies on asymptotics of derivatives of the Bergman kernel along critically tangent paths approaching the boundary, where the order of tangency equals the type of the boundary point being approached. 2020 Mathematics Subject Classification: 32F45, 32Q20, 32H10. N. S. was partially supported by the DFG funded project CRC/TRR 191. M. X. was partially supported by the NSF grants DMS-1800549 and DMS-2045104. Earlier authors showed that for a Reinhardt pseudoconvex domain of finite type in \(\mathbb{C}^{2}\), if the Bergman metric is Kahler-Einstein, then the domain is biholomorphic to the unit ball. Their proof utilized the non-tangential limit of the Bergman invariant function (see Fu [10]). Besides, their proof used the aid of a computer, again reflecting the intricacy of the problem in the more general finite type case. Our main theorem below gives an affirmative answer to Yau's question for smoothly bounded pseudoconvex domains of finite type in dimension two. **Theorem 1**.: _Let \(D\subset\mathbb{C}^{2}\) be a smoothly bounded pseudoconvex domain of finite type. If the Bergman metric of \(D\) is Kahler-Einstein, then \(D\) is biholomorphic to the unit ball in \(\mathbb{C}^{2}\)._ A key role is again played by the boundary asymptotics for the Bergman kernel. For two dimensional pseudoconvex domains of finite type, Hsiao and the first author [15] recently described the asymptotics of the Bergman kernel along transversal paths approaching the boundary. For our proof we shall need to extend this asymptotic result to tangential paths approaching a pseudoconvex point on the boundary. The paths shall further be chosen to be _critically tangent_; their order of tangency with the boundary equals the type of the point on the boundary that is being approached (see Remark 5 below for a further discussion of this choice). As a consequence of our main theorem, we also positively answer Yau's question for two dimensional bounded domains with real analytic boundary (such domains are always of finite type). **Corollary 2**.: _Let \(D\subset\mathbb{C}^{2}\) be a bounded pseudoconvex domain with real analytic boundary. If the Bergman metric of \(D\) is Kahler-Einstein, then \(D\) is biholomorphic to the unit ball in \(\mathbb{C}^{2}\)._ The article is organized as follows. We begin with some preliminaries on the Bergman and Kahler-Einstein metrics in Section 2. In Section 3, we establish the asymptotics for the Bergman kernel and its derivatives along a critically tangent path. The leading term of the asymptotics is computed as well in terms of a model Bergman kernel on the complex plane. Then we carry out the requisite analysis of the model in Section 4. Finally we prove Theorem 1 in Section 5. ## 2. Preliminaries In this section we begin with some requisite preliminaries on the Bergman and Kahler-Einstein metrics. Let \(D\subset\mathbb{C}^{n}\) be a smoothly bounded domain.
A boundary defining function is a smooth function \(\rho\in C^{\infty}\left(\bar{D}\right)\) satisfying \(D=\left\{\rho\left(z\right)<0\right\}\subset\mathbb{C}^{2}\) and \(\left.d\rho\right|_{\partial D}\neq 0.\) The CR and Levi-distributions on the boundary \(X\coloneqq\partial D\) are defined via \(T^{1,0}X=T^{1,0}\mathbb{C}^{2}\cap T_{\mathbb{C}}X\) and \(HX\coloneqq\operatorname{Re}\left[T^{1,0}X\oplus T^{0,1}X\right]\) respectively. The Levi form on the boundary is defined by \[\mathscr{L}\in\left(T^{1,0}X\right)^{*}\otimes\left(T^{0,1}X \right)^{*} \tag{2.1}\] \[\mathscr{L}\left(U,\bar{V}\right)\coloneqq\partial\bar{\partial} \rho\left(U,\bar{V}\right)=-\overline{\partial}\rho\left(\left[U,\bar{V} \right]\right)\] for \(U,V\in T^{1,0}X\). The domain is called _strongly pseudoconvex_ if the Levi form is positive definite; and _weakly pseudoconvex_ (or simply _pseudoconvex_) if the Levi form is semi-definite. We now recall the notion of finite type. There are two standard notions of finite type (D'Angelo and Kohn/Bloom-Graham) of a smooth real hypersurface \(M\), and these happen to coincide in \(\mathbb{C}^{2}.\) (The reader is referred to [1] for more details). The domain is called of finite type (in the sense of Kohn/Bloom-Graham) if the Levi-distribution \(HX\) is bracket generating: \(C^{\infty}\left(HX\right)\) generates \(TX\) under the Lie bracket. In particular the _type of a point_ on the boundary \(x\in X=\partial D\) is the smallest integer \(r\left(x\right)\) such that \(H_{x}X_{r\left(x\right)}=T_{x}X\), where the subspaces \(HX_{j}\subset TX\), \(j=1,\ldots\) are inductively defined by \[HX_{1} \coloneqq HX\] \[HX_{j+1} \coloneqq HX+\left[HX_{j},HX\right],\quad\forall j\geq 1. \tag{2.2}\] In general, the function \(x\mapsto r\left(x\right)\) is only upper semi-continuous. The finite type hypothesis is then equivalent to \(r\coloneqq\max_{x\in X}r\left(x\right)<\infty.\) Note that the type of a strongly pseudoconvex point \(x\) is \(r\left(x\right)=2\). The Bergman projector of \(D\) is the orthogonal projector \[K_{D}:L^{2}\left(D\right)\to L^{2}\left(D\right)\cap\mathcal{O}\left(D\right) \tag{2.3}\] from square integrable functions onto the closed subspace of square-integrable holomorphic ones. Its Schwartz kernel, still denoted by \(K_{D}\left(z,z^{\prime}\right)\in L^{2}\left(D\times D\right),\) is called the Bergman kernel of \(D\). It is well-known to be smooth in the interior and positive along the diagonal. The Bergman metric is the Kahler metric in the interior defined by \[g_{\alpha\bar{\beta}}^{D}\coloneqq\partial_{\alpha}\partial_{\bar{\beta}}\ln K _{D}\left(z,z\right).\] Denote by \(G=\det\left(g_{\alpha\bar{\beta}}^{D}\right)\) the determinant of the above metric. The Ricci tensor of \(g^{D}\) is by definition \(R_{\alpha\bar{\beta}}=-\partial_{\alpha}\partial_{\bar{\beta}}\ln G\). The Bergman metric is always Kahler, and is further said to be _Kahler-Einstein_ if \(R_{\alpha\bar{\beta}}=cg_{\alpha\bar{\beta}}^{D}\) for some constant \(c\). Since \(D\) is a bounded domain, the sign of \(c\) must necessarily be negative (cf. [4, page 518]). The Bergman invariant function is defined by \(B\left(z\right)\coloneqq\frac{G\left(z\right)}{K_{D}\left(z,z\right)}\). It follows from the transformation formula of the Bergman kernel that the Bergman invariant function is invariant under biholomorphisms. Next we briefly discuss the Kahler-Einstein metric. 
Recall the existence of a complete Kahler-Einstein metric on \(D\subset\mathbb{C}^{n}\) is governed by the following Dirichlet problem: \[J\left(u\right)\coloneqq(-1)^{n}\det\begin{pmatrix}u&u_{\bar{ \beta}}\\ u_{\alpha}&u_{\alpha\bar{\beta}}\end{pmatrix}=1\quad\text{in }D,\] \[u=0\quad\text{on }\partial D. \tag{2.4}\] with \(u>0\) in \(D\). Here \(u_{\alpha}\) denotes \(\partial_{z_{\alpha}}u\), and likewise for \(u_{\bar{\beta}}\) and \(u_{\alpha\bar{\beta}}\). The problem was first studied by Fefferman [9], and \(J(\cdot)\) is often referred as Fefferman's complex Monge-Ampere operator. Cheng and Yau [4] proved the existence and uniqueness of an exact solution \(u\in C^{\infty}(D)\) to (2.4), on a smoothly bounded strongly pseudoconvex domain \(D\). The function \(u\) is called the Cheng-Yau solution; and \(-\partial\partial\log u\) gives rise to a complete Kahler-Einstein metric on \(D\). Mok-Yau [22] further showed a bounded domain admits a complete Kahler-Einstein metric if and only if it is a domain of holomorphy. We next make some observations on the Monge-Ampere operator for later applications. The left hand side of the first equation in (2.4) can further be invariantly written as \(J\left(u\right)=u^{n+1}\det\left[\partial\bar{\partial}\left(-\ln u\right)\right]\). It may thus be computed in terms of any orthonormal frame \(\left\{Z_{\alpha}\right\}_{\alpha=1}^{n}\) of \(T^{1,0}\mathbb{C}^{n}\) as \[J\left(u\right)=\det\begin{pmatrix}u&\bar{Z}_{\beta}u\\ Z_{\alpha}u&Z_{\alpha}\bar{Z}_{\beta}u-\left[Z_{\alpha},\bar{Z}_{\beta}\right] ^{0,1}u\end{pmatrix}. \tag{2.5}\] This can be proved using the identity \[\partial\bar{\partial}f\left(Z_{\alpha},\bar{Z}_{\beta}\right)=Z_{\alpha}\bar {Z}_{\beta}\left(f\right)-\bar{\partial}f\left(\left[Z_{\alpha},\bar{Z}_{ \beta}\right]\right). \tag{2.6}\] Here the normality of \(\left\{Z_{\alpha}\right\}_{\alpha=1}^{n}\) means each of them has the same norm as \(\partial_{z_{1}},\cdots,\partial_{z_{n}}\). The following proposition gives an equivalent condition for the Bergman metric being Kahler-Einstein, which is easier to work with. The proof is similar to [11, Proposition 1.1] and [16, Proposition 3.3]. **Proposition 3**.: _Let \(D\subset\mathbb{C}^{n},n\geq 2,\) be a smoothly bounded pseudoconvex domain. Then its Bergman metric \(g^{D}\) is Kahler-Einstein if and only if the Bergman invariant function is constant \(B\left(z\right)\equiv(n+1)^{n}\frac{\pi^{n}}{n!}.\) This is also equivalent to the Bergman kernel \(K_{D}\) satisfying \(J\left(K_{D}\right)=(-1)^{n}\frac{(n+1)^{n}\pi^{n}}{n!}K_{D}^{n+2}.\)_ Proof.: We start with the proof of the first assertion. Since the reverse direction is trivial, we only need to prove the forward part. Assume the Bergman metric of \(D\) is Kahler-Einstein. Recall a smoothly bounded domain in \(\mathbb{C}^{n}\) always has a strongly pseudoconvex boundary point. Therefore we can find a strongly pseudoconvex open connected piece \(M\) of \(\partial D\). Fix \(p\in M\). Next pick a small smoothly bounded strongly pseudoconvex domain \(D^{\prime}\subseteq D\) such that \(D^{\prime}\cap O=D\cap O\) and \(\partial D^{\prime}\cap O=\partial D\cap O=:M_{0}\subseteq M\) for some small ball \(O\) in \(\mathbb{C}^{n}\) centered at \(p\). Write \(K_{D^{\prime}}\) for the Bergman kernel of \(D^{\prime}\). Then by the localization of the Bergman kernel on pseudoconvex domains at a strongly pseudoconvex boundary point (cf. 
Theorem 4.2 in Englis [8]), there is a smooth function \(\Phi\) in a neighborhood of \(D^{\prime}\cup M_{0}\) such that \[K_{D}=K_{D^{\prime}}+\Phi\text{ on }D^{\prime}. \tag{2.7}\] Note that \(K_{D^{\prime}}\) obeys Fefferman asymptotic expansion on \(D^{\prime}\) by [9]. Combining this with (2.7), we see for any defining function \(\rho\) of \(D\cap O\) with \(D\cap O=\{z\in O:\rho(z)<0\},\) the Bergman kernel \(K_{D}\) also has the Fefferman type expansion in \(D\cap O:\) \[K_{D}=\frac{\phi}{\rho^{n+1}}+\psi\log(-\rho)\quad\text{on }D\cap O. \tag{2.8}\] Here \(\phi\) and \(\psi\) are smooth in a neighborhood of \(D^{\prime}\cup M_{0}\) with \(\phi\) nowhere zero on \(M_{0}.\) Then by (2.8) and (the proof of) Theorem 1 of Klembeck [18], the Bergman metric of \(D\) is asymptotically of constant holomorphic sectional curvature \(\frac{-2}{n+1}\) as \(z\in D\to M_{0}\). Consequently, the Bergman metric of \(D\) is asymptotically of constant Ricci curvature \(-1\) as \(z\in D\to M_{0}\) (To prove the latter fact, alternatively one can apply a similar argument as page 510 of Cheng-Yau [4]). Therefore by the Kahler-Einstein assumption, we must have \(R_{ij}=-g_{ij}.\) This yields \(\partial\bar{\partial}\log B\equiv 0\) in \(D\). That is, \(\log B\) is pluriharmonic in \(D\). Furthermore, by (2.7) and a similar argument as in the proof of Lemma 3.2 in [16], we have \(B(z,z)\rightarrow\frac{(n+1)^{n}\pi^{n}}{n!}\) as \(z\to M_{0}.\) Now write \(\Delta=\{z\in\mathbb{C}:|z|<1\}\) for the unit disk. Let \(f:\Delta\to O\) be an analytic disk attached to \(M_{0}.\) That is, \(f\) is holomorphic in \(\Delta\) and continuous in \(\Delta\) with \(f(\Delta)\subset O\cap D\) and \(f(\partial\Delta)\subset M_{0}.\) Then \(\log B(f)\) is harmonic in \(\Delta,\) continuous up to \(\partial\Delta,\) and takes constant value \(\log\frac{(n+1)^{n}\pi^{n}}{n!}\) on \(\partial\Delta.\) This implies \(B\) takes the constant value \(\frac{(n+1)^{n}\pi^{n}}{n!}\) on \(f(\Delta).\) But since \(M_{0}\) is strongly pseudoconvex, we can find a family \(\mathcal{F}\) of analytic disks such that \(\cup_{f\in\mathcal{F}}f(\Delta)\) fills up an open subset \(U\) of \(O\cap D\)(cf. [1]). Thus \(B\) is constant on \(U\). Since \(B\) is real analytic and \(D\) is connected, we see \(B\equiv\frac{(n+1)^{n}\pi^{n}}{n!}.\) Finally, a routine computation using the formula \(J(u)=u^{n+1}\det\left(\partial\overline{\partial}(-\ln u)\right)\) yields that, \(B\left(z\right)=c\) if and only if \(J\left(K_{D}\right)=(-1)^{n}cK_{D}^{n+2}\). Then the second assertion of the proposition follows immediately. ## 3. The Bergman kernel and its derivatives To prove Theorem 1, we shall fundamentally use the asymptotics of the Bergman kernel on pseudoconvex domains of finite type. In this section, we first briefly recall some classical and recent known work, and then prove new results for asymptotics of the Bergman kernel. In Section 2, we already made use of Fefferman's Bergman kernel asymptotics in the strongly pseudoconvex case. Let \(D\) be a strongly pseudoconvex domain with a defining function \(\rho\in C^{\infty}\left(\bar{D}\right)\). Fefferman [9] showed that the Bergman kernel of the domain \(D\) has an asymptotic expansion \[K_{D}\left(z,z\right)=a\left(z\right)\rho^{-n-1}+b\left(z\right)\ln\left(-\rho\right) \tag{3.1}\] for some functions \(a\left(z\right),b\left(z\right)\in C^{\infty}\left(\bar{D}\right)\). 
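For orientation, both Proposition 3 and the expansion (3.1) can be checked directly on the unit ball \(\mathbb{B}^{n}\subset\mathbb{C}^{n}\); we record this standard computation here for the reader's convenience (it is not part of the original argument). One has \[K_{\mathbb{B}^{n}}\left(z,z\right)=\frac{n!}{\pi^{n}}\frac{1}{\left(1-\left|z\right|^{2}\right)^{n+1}},\qquad g_{\alpha\bar{\beta}}^{\mathbb{B}^{n}}=\left(n+1\right)\partial_{\alpha}\partial_{\bar{\beta}}\left[-\ln\left(1-\left|z\right|^{2}\right)\right],\] so that \(G=\det\left(g_{\alpha\bar{\beta}}^{\mathbb{B}^{n}}\right)=\left(n+1\right)^{n}\left(1-\left|z\right|^{2}\right)^{-n-1}\) and \(R_{\alpha\bar{\beta}}=-\partial_{\alpha}\partial_{\bar{\beta}}\ln G=-g_{\alpha\bar{\beta}}^{\mathbb{B}^{n}}\). Consequently \[B\left(z\right)=\frac{G\left(z\right)}{K_{\mathbb{B}^{n}}\left(z,z\right)}\equiv\left(n+1\right)^{n}\frac{\pi^{n}}{n!},\] as in Proposition 3; moreover, with the defining function \(\rho=\left|z\right|^{2}-1\), the expansion (3.1) holds exactly with \(a\equiv\frac{n!}{\pi^{n}}\) and \(b\equiv 0\).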
Recently, the asymptotics in (3.1) were extended to pseudoconvex domains of finite type in \(\mathbb{C}^{2}\) by Hsiao and the first author [15, Theorem 2]. They established the full asymptotic expansion of the Bergman kernel described along transversal paths approaching the boundary. This is not suitable for our proof of Theorem 1. We shall need the asymptotic expansion of the Bergman kernel, and its derivatives, along certain critically tangent paths (see Section 1 and Remark 5) approaching the boundary. Besides, we also need information of the leading coefficient in the asymptotics. To state our result, some setup is in order. Fix \(x^{*}\in X=\partial D\) on the boundary of the domain of type \(r=r\left(x^{*}\right)\). Let \(U_{1},U_{2}\coloneqq JU_{1}\in C^{\infty}\left(HX\right)\) be two local orthonormal sections of the Levi distribution and \(U_{3}\in C^{\infty}\left(TX\right)\), \(U_{3}\perp HX\) to be a unit normal to the Levi distribution. One then extends \(U_{1}\) to a local unit length vector field in the interior of \(D\). Set \(U_{2}=JU_{1}\) to be an extension of \(U_{2}\) to the interior of \(D\). Choose an extension of \(U_{3}\) of unit length and that is orthogonal to \(U_{1},U_{2}\). Set \(U_{0}=-JU_{3}\) (so that \(U_{3}=JU_{0}\)). It is easy to see that \(U_{0}\) is of unit length and normal to the boundary \(U_{0}\perp T\partial D\) near \(x^{*}\in X\). Replacing \(U_{3}\) by \(-U_{3}\) if needed, we assume \(U_{0}\) is outward-pointing to \(D\). This also gives a local boundary defining function \(\rho\) via \(U_{0}\left(\rho\right)=1,\;\rho|_{X}=0\). Note that the flow of the normal vector field \(U_{0}\) also gives a locally defined projection \(\pi:D\to X=\partial D\) onto the boundary. The pairs of vector fields define CR vector fields \(Z=\frac{1}{2}\left(U_{1}-iU_{2}\right),W=\frac{1}{2}\left(U_{0}-iU_{3}\right) \in T^{1,0}\mathbb{C}^{2}\). In [5, Prop. 3.2] (see also [1]) it was shown that a coordinate system \(x=\left(x_{1},x_{2},x_{3}\right)\) on the boundary centered at \(x^{*}\) maybe chosen so that \[Z|_{X}=\frac{1}{2}\left[\underbrace{\partial_{x_{1}}+\left(\partial_{x_{2}}p \right)\partial_{x_{3}}-i\left(\partial_{x_{2}}-\left(\partial_{x_{1}}p\right) \partial_{x_{3}}\right)}_{\coloneqq Z_{0}}+R\right], \tag{3.2}\] where \(p\left(x_{1},x_{2}\right)\) is a homogeneous, subharmonic (and non-harmonic) real polynomial of degree and weight \(r\). We note that \(r\) must be even. Besides, \(p\) has no purely holomorphic or anti-holomorphic terms in \(z_{1}=x_{1}+ix_{2}\) in its Taylor expansion at \(0\). Moreover, \(R=\sum_{j=1}^{3}r_{j}\left(x\right)\partial_{x_{j}}\) is a real vector field of weight \(w\left(R\right)\geq 0\). Here the weight of local functions and vector fields are defined as follows. The weight of a monomial \(x^{\alpha}\), \(\alpha\in\mathbb{N}_{0}^{3}\), is defined as \(w.\alpha\coloneqq\alpha_{1}+\alpha_{2}+r\alpha_{3}\), with \(w(x)=w\left(x_{1},x_{2},x_{3}\right)\coloneqq\left(1,1,r\right)\). The weight \(w\left(f\right)\) of a function \(f\in C^{\infty}\left(X\right)\) is then the minimum weight of the monomials appearing in its Taylor series at \(x^{*}=0\). Finally, the weight \(w\left(U\right)\) of a smooth vector field \(U=\sum_{j=1}^{3}f_{j}\partial_{x_{j}}\) is \(w\left(U\right)\coloneqq\min\left\{w\left(f_{1}\right)-1,w\left(f_{2}\right)-1,w\left(f_{3}\right)-r\right\}\). 
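To fix ideas, here is a small illustration of these weights (ours, not part of the original text). Take a point of type \(r=4\) and \(p\left(x_{1},x_{2}\right)=\left(x_{1}^{2}+x_{2}^{2}\right)^{2}=\left(z_{1}\bar{z}_{1}\right)^{2}\), which is homogeneous of degree and weight \(4\), subharmonic (indeed \(\Delta p=16\left|z_{1}\right|^{2}\geq 0\)) and non-harmonic, with no purely holomorphic or anti-holomorphic terms. With \(w\left(x_{1},x_{2},x_{3}\right)=\left(1,1,4\right)\) one has, for instance, \[w\left(x_{1}^{2}x_{2}\right)=3,\qquad w\left(x_{3}\right)=4,\qquad w\left(x_{1}x_{3}\right)=5,\] while for vector fields \[w\left(x_{2}^{2}\partial_{x_{1}}\right)=2-1=1,\qquad w\left(x_{1}\partial_{x_{3}}\right)=1-4=-3,\qquad w\left(\partial_{x_{3}}\right)=-4.\] In particular each summand of \(Z_{0}\) in (3.2) has weight \(-1\) (e.g. \(\left(\partial_{x_{2}}p\right)\partial_{x_{3}}\) has weight \(\left(r-1\right)-r=-1\)), while the remainder \(R\) has weight \(\geq 0\) and is thus subordinate.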
The coordinates \(\left(x_{1},x_{2},x_{3}\right)\) are next extended to the interior of the domain by being constant in the normal direction \(U_{0}\left(x_{j}\right)=0\), \(j=1,2,3\). Then \(x^{\prime}\coloneqq\left(\rho,x_{1},x_{2},x_{3}\right)\) serve as coordinates on the interior of the domain near \(x^{*}\) in which \(U_{0}=\partial_{\rho}\). We also extend the notion of weights to the new coordinate system. The weight of a monomial \(\rho^{\alpha_{0}}x^{\alpha}\) is defined as \(w^{\prime}\left(\rho^{\alpha_{0}}x^{\alpha}\right)=w^{\prime}.\alpha^{\prime} \coloneqq r\alpha_{0}+\alpha_{1}+\alpha_{2}+r\alpha_{3}\), with \(w^{\prime}(x^{\prime})=w^{\prime}\left(\rho,x_{1},x_{2},x_{3}\right)\coloneqq \left(r;1,1,r\right)\) now denoting the augmented weight vector. We again define the weight \(w(f)\) of a smooth function \(f\in C^{\infty}\left(D\right)\) near \(x^{*}\) as the minimum weight of the monomials appearing in its Taylor series at \(x^{*}\) in these coordinates. Finally, the weight \(w\left(U\right)\) of a smooth vector field \(U=f_{0}\partial_{\rho}+\sum_{j=1}^{3}f_{j}\partial_{x_{j}}\) is \(w\left(U\right)\coloneqq\min\left\{w\left(f_{0}\right)-r,w\left(f_{1}\right)-1,w \left(f_{2}\right)-1,w\left(f_{3}\right)-r\right\}\). Note that one has \(w\left(U\right)\geq-r\), and \(w\left(U\right)>-r\) if \(f_{0}(0)=f_{3}(0)=0\). Below \(O\left(k\right)\) denotes a vector field of weight \(k\) or higher. By a rescaling of the \(x_{3}\) coordinate, and at the cost of scaling the polynomial \(p\left(x_{1},x_{2}\right)\), we may also arrange \(\left.U_{3}\right|_{x^{*}=0}=\pm\partial_{x_{3}}\). By the fact that \(\left[Z,\bar{Z}\right]=\left[-\Delta p\left(z_{1}\right)\frac{i}{2}\partial_{ x_{3}}\right]+O\left(-1\right)\) and the pseudoconvexity condition (2.1), one can show that it must be \(\partial_{x_{3}}\). But the sign is irrelevant to our proof, and thus we will not elaborate it here. Therefore we have \[U_{3}=\partial_{x_{3}}+O\left(-r+1\right). \tag{3.3}\] Next let \(V\in C^{\infty}\left(HX\right)\) denote another locally defined section of the Levi distribution. This defines a local _tangential path_ approaching \(x^{*}\) via \[z\left(\epsilon\right)\coloneqq\left(\underbrace{\epsilon^{\epsilon V}x^{*}} _{=\pi\left(z\left(\epsilon\right)\right)},\,\underbrace{-\epsilon^{r}}_{= \rho\left(z\left(\epsilon\right)\right)}\right)\in D,\quad\epsilon>0. \tag{3.4}\] Note the above path is indeed tangential to the boundary; its tangent vector at \(x^{*}\) is in the Levi-distribution \(\left.\frac{dz}{d\epsilon}\right|_{\epsilon=0}=V_{x^{*}}\in H_{x^{*}}X\). The order of tangency the path makes with the boundary is the type of the point \(r\left(x^{*}\right)\). Writing \(V=\sum_{j=1}^{3}g_{j}\partial_{x_{j}}\), we associate the section \(V\) with a point \[z_{1,V}\coloneqq\left(x_{1,V},x_{2,V}\right)=\left(g_{1}\left(0\right),g_{2} \left(0\right)\right)\in\mathbb{R}^{2} \tag{3.5}\] In the computation of the leading asymptotics of the Bergman kernel \(K_{D}\) (see (3.7) in Theorem 4), one will further see the appearance of the _model Bergman kernel_\(B_{0}\) corresponding to the subharmonic polynomial \(p\) in (3.2). For the readers' convenience, we briefly recall the notion of model Bergman kernel. For that, we consider the \(L^{2}\) orthogonal projector from \(L^{2}\left(\mathbb{C}_{z_{1}}\right)\) to \(H_{p}^{2}\). 
Here \[H_{p}^{2}\coloneqq\left\{f\in L^{2}\left(\mathbb{C}_{z_{1}}\right)|\,\, \bar{\partial}_{p}f=0\right\};\quad\text{and}\,\,\,\,\bar{\partial}_{p} \coloneqq\partial_{\bar{z}_{1}}+\partial_{\bar{z}_{1}}p.\] Then \(B_{0}\) is defined to be the Schwartz kernel of this projector. More discussion and analysis of the model Bergman kernel follows in Section 4. We now state the necessary asymptotics result for the Bergman kernel and its derivatives. Below \(\partial^{\alpha^{\prime}}=\left(\frac{1}{2}U_{0}\right)^{\alpha_{0}}Z^{ \alpha_{1}}\bar{Z}^{\alpha_{2}}\left(\frac{1}{2}U_{3}\right)^{\alpha_{3}}\) denotes a mixed derivative along the respective vector fields for the multi-index \(\alpha^{\prime}=\left(\alpha_{0},\alpha_{1},\alpha_{2},\alpha_{3}\right)\in \mathbb{N}_{0}^{4}\). **Theorem 4**.: _Let \(D\subset\mathbb{C}^{2}\) be a smoothly bounded pseudoconvex domain of finite type. For any point \(x^{*}\in X=\partial D\) on the boundary, of type \(r=r\left(x^{*}\right)\), the Bergman kernel and its derivatives satisfy the asymptotics_ \[\partial^{\alpha^{\prime}}K_{D}\left(z,z\right)=\sum_{j=0}^{N}\frac{1}{\left( -2\rho\right)^{\frac{2+2\nu^{\prime}\alpha^{\prime}}{r}-\frac{1}{rj}}}a_{j}+ \sum_{j=0}^{N}b_{j}\left(-\rho\right){}^{j}\log\left(-\rho\right)+O\left( \left(-\rho\right)^{\frac{1}{r}\left(N+1\right)-2-\frac{2+w^{\prime}\alpha^{ \prime}}{r}}\right),\quad\forall N\in\mathbb{N}, \tag{3.6}\] _for some set of numbers \(a_{j},b_{j}\) as \(z\to x^{*}\) tangentially to the boundary along the path \(\left(\ref{eq:2.1}\right)\)._ _Furthermore, the leading term can be computed in terms of the model Bergman kernel of the subharmonic polynomial \(p\) as_ \[a_{0}=\delta_{0\alpha_{3}}.\left[\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{ z}_{1}}^{\alpha_{2}}\underbrace{\left(\frac{1}{\pi}\int_{0}^{\infty}e^{-s}s^{1+ \frac{2}{r}+\alpha_{0}}B_{0}\left(s^{\frac{1}{r}}z_{1}\right)ds\right)}_{= \bar{B}_{0,\alpha_{0}}\left(z_{1}\right)}\right]_{z_{1}=z_{1,V}}. \tag{3.7}\] Proof.: The proof is similar to [15, Thm. 2]. We shall only point out the necessary modifications. In [15, Sec. 4 ] the following space of symbols \(\hat{S}_{\frac{1}{r}}^{m}\left(\mathbb{C}^{2}\times\mathbb{C}^{2}\times\mathbb{ R}_{t}\right)\), \(m\in\mathbb{R}\), in the variables \(\left(\rho,x,\rho^{\prime},y;t\right)\in\mathbb{C}^{2}\times\mathbb{C}^{2} \times\mathbb{R}_{t}\) was defined. This is the space of smooth functions satisfying the symbolic estimates \[\left|\partial_{\rho}^{\alpha_{0}}\partial_{\rho^{\prime}}^{\beta_{0}}\partial _{x}^{\alpha}\partial_{y}^{\beta}\partial_{t}^{\gamma}a(\rho,x,\rho^{\prime}, y,t)\right|\leq C_{N,\alpha\beta\gamma}\left\langle t\right\rangle^{m-\gamma+\frac{u^{ \prime}\left(\alpha^{\prime}+\beta^{\prime}\right)}{r}}\frac{\left(1+\left|t ^{\frac{1}{r}}\hat{x}\right|+\left|t^{\frac{1}{r}}\hat{y}\right|\right)^{N \left(\alpha^{\prime},\beta^{\prime},\gamma\right)}}{\left(1+\left|t^{\frac{ 1}{r}}\hat{x}-t^{\frac{1}{r}}\hat{y}\right|\right)^{-N}}, \tag{3.8}\] for each \(\left(x,y,\rho,\rho^{\prime},t,N\right)\in\mathbb{R}_{x,y}^{6}\times\mathbb{ R}_{\rho,\rho^{\prime}}^{2}\times\mathbb{R}_{t}\times\mathbb{N}\) and \(\left(\alpha^{\prime},\beta^{\prime},\gamma\right)\in\mathbb{N}_{0}^{4}\times \mathbb{N}_{0}^{4}\times\mathbb{N}_{0}\) with \(\alpha^{\prime}=\left(\alpha_{0},\alpha\right)\), \(\beta^{\prime}=\left(\beta_{0},\beta\right)\). 
Here \(N\left(\alpha^{\prime},\beta^{\prime},\gamma\right)\in\mathbb{N}\) depends only on the given indices, \(\left\langle t\right\rangle\coloneqq\sqrt{1+t^{2}}\) denotes the Japanese bracket while the notation \(\hat{x}=\left(x_{1},x_{2}\right)\) denotes the first two coordinates of the tuple \(x=\left(x_{1},x_{2},x_{3}\right)\). Below \(\hat{S}\left(\mathbb{R}_{\hat{x}}^{2}\times\mathbb{R}_{\hat{y}}^{2}\right)\) further denotes the space of restrictions of functions in \(\hat{S}_{\frac{1}{r}}^{m}\) to \(x_{3},y_{3},\rho,\rho^{\prime}=0\) and \(t=1\). Next a generalization of this space is defined via \[\hat{S}_{\frac{1}{r}}^{m,k}\coloneqq\bigoplus_{p+q+p^{\prime}+q^{\prime}\leq k }\left(tx_{3}\right)^{p}\left(t\rho\right)^{q}\left(ty_{3}\right)^{p^{\prime}} \left(t\rho^{\prime}\right)^{q^{\prime}}\hat{S}_{\frac{1}{r}}^{m}, \tag{3.9}\] for each \(\left(m,k\right)\in\mathbb{R}\times\mathbb{N}_{0}\). Finally, the subspace of classical symbols \(\hat{S}_{\frac{1}{r},\mathrm{cl}}^{m}\subset\hat{S}_{\frac{1}{r}}^{m}\) comprises of those symbols for which there exist \(a_{jpp^{\prime}qq^{\prime}}\left(\hat{x},\hat{y}\right)\in\hat{S}\left( \mathbb{R}^{2}\times\mathbb{R}^{2}\right),j,p,p^{\prime},q,q^{\prime}\in \mathbb{N}_{0}\), such that \[a\left(x,y,t\right)-\sum_{j=0}^{N}\sum_{p+q+p^{\prime}+q^{\prime}\leq j}t^{m- \frac{1}{r}j}\left(tx_{3}\right)^{p}\left(t\rho\right)^{q}\left(ty_{3}\right)^ {p^{\prime}}\left(t\rho^{\prime}\right)^{q^{\prime}}a_{jpp^{\prime}qq^{\prime }}\left(t^{\frac{1}{r}}\hat{x},t^{\frac{1}{r}}\hat{y}\right)\in\hat{S}_{\frac{1 }{r}}^{m-\left(N+1\right)\frac{1}{r},N+1} \tag{3.10}\] for each \(N\in\mathbb{N}_{0}\). The space \(\hat{S}_{\frac{1}{r},\mathrm{cl}}^{m,k}\) is now defined similarly to (3.9). The principal symbol of such an element \(a\in\hat{S}_{\frac{1}{r},\mathrm{cl}}^{m}\) is defined to be the function \[\sigma_{L}\left(a\right)\coloneqq a_{00000}\in\hat{S}\left(\mathbb{R}^{2} \times\mathbb{R}^{2}\right).\] Now, following the proof of [13, Prop. 7.6], there exists a smooth phase function \(\Phi(z,w)\) defined locally on a neighbourhood \(U\times U\) of \(\left(x^{*},x^{*}\right)\) in \(\bar{D}\times\bar{D}\) such that \[\Phi(z,w)=x_{3}-y_{3}-i\rho\sqrt{-\sigma_{\triangle_{X}}(x,\left(0,0,1))}-i \rho^{\prime}\sqrt{-\sigma_{\triangle_{X}}(y,\left(0,0,1\right))}+O(\left|\rho \right|^{2})+O(\left|\rho^{\prime}\right|^{2}), \tag{3.11}\] \[q_{0}(z,d_{z}\Phi)\text{ vanishes to infinite order on }\rho=0,\] \[q_{0}(w,-\overline{d}_{w}\Phi)\text{ vanishes to infinite order on }\rho^{\prime}=0.\] Here \(\triangle_{X}\) denotes the real Laplace operator on the boundary \(X=\partial D\) of the domain, while \(q_{0}=\sigma\left(\square_{f}\right)\) denotes the principal symbol of the complex Laplace-Beltrami operator \(\square_{f}=\bar{\partial}_{f}^{*}\bar{\partial}+\partial\bar{\partial}_{f}^{*}\) on the domain. The proofs of [15, Lemma 17] and [15, Lemma 20] can be repeated to obtain the following description for the Bergman kernel: for some \(a\left(z;w,t\right)\in\hat{S}_{\frac{1}{r},\mathrm{cl}}^{1+\frac{2}{r}}\left( \mathbb{C}^{2}\times\mathbb{C}^{2}\times\mathbb{R}_{t}\right)\) one has \[K_{D}\left(z,w\right)=\frac{1}{\pi}\int_{0}^{\infty}e^{i\Phi(z,w)t}a\left(z,w,t \right)dt\quad\left(\mathrm{mod}\ C^{\infty}\left(\left(U\times U\right)\cap \left(\overline{D}\times\overline{D}\right)\right)\right) \tag{3.12}\] with \(\sigma_{L}\left(a\right)=B_{0}\) being the model Bergman kernel defined prior to the statement of this theorem. 
We need to differentiate the last description (3.12). For that, we adopt the notion of weights we defined before Theorem 4. By construction, the chosen vector fields \(\left(U_{0},Z,\overline{Z},U_{3}\right)\) have weights \(\left(-r,-1,-1,-r\right)\) respectively. Furthermore, the leading parts in their weight expansions are given by \[\left(U_{0},Z,\overline{Z},U_{3}\right)=\left(\partial_{\rho},Z_{0}+O\left(0 \right),\bar{Z}_{0}+O\left(0\right),\partial_{x_{3}}+O\left(-r+1\right)\right), \tag{3.13}\] Here \(Z_{0}\coloneqq\frac{1}{2}[\partial_{x_{1}}+\left(\partial_{x_{2}}p\right) \partial_{x_{3}}-i\left(\partial_{x_{2}}-\left(\partial_{x_{1}}p\right) \partial_{x_{3}}\right)]\) is now understood as a locally defined vector field in the interior of the domain. Next we observe from definitions of the symbol spaces (3.8), (3.9) that a vector field \(U\) of weight \(w\left(U\right)\) maps \[U:\hat{\mathcal{S}}_{\frac{1}{r},\mathrm{cl}}^{m}\rightarrow\hat{\mathcal{S} }_{\frac{1}{r},\mathrm{cl}}^{m-\frac{1}{r}w\left(U\right)}. \tag{3.14}\] The equations (3.11), (3.13), (3.14) now allow us to differentiate (3.12) to obtain: for some \(a_{\alpha}\left(z;w,t\right)\in\hat{\mathcal{S}}_{\frac{1}{r},\mathrm{cl}}^{ 1+\frac{2+w^{\prime},\alpha}{r},\alpha_{0}+\alpha_{3}}\left(\mathbb{C}^{2} \times\mathbb{C}^{2}\times\mathbb{R}_{t}\right)\) one has \[\partial^{\alpha}K_{D}\left(z,z\right) =\frac{1}{\pi}\int_{0}^{\infty}e^{i\Phi\left(z,z\right)t}a_{ \alpha}\left(z,z,t\right)dt\quad\left(\mathrm{mod}\ C^{\infty}\left(\left(U \times U\right)\cap\left(\overline{D}\times\overline{D}\right)\right)\right)\] \[\qquad\text{with}\quad a_{\alpha} =\left(Z_{0}^{\alpha_{1}}\bar{Z}_{0}^{\alpha_{2}}B_{0}\right)t^{1+ \frac{2+w^{\prime},\alpha}{r}}+\hat{\mathcal{S}}_{\frac{1}{r},\mathrm{cl}}^{ 1+\frac{1+w^{\prime},\alpha}{r},\alpha_{0}+\alpha_{3}}. \tag{3.15}\] Recall the vector field \(V=\sum_{j=1}^{3}g_{j}\partial_{x_{j}}\in C^{\infty}\left(HX\right)\) lies in the Levi distribution. By (3.2), its \(\partial_{x_{3}}\)-component function has weight \(w\left(g_{3}\right)\geq r-1\). Thus along the flow of \(V\), and consequently along the path \(z\left(\epsilon\right)\) in (3.4), the coordinate functions satisfy \[\left(x_{1},x_{2},x_{3},\rho\right)=\left(\epsilon g_{1}\left(0\right)+O\left( \epsilon^{2}\right),\epsilon g_{2}\left(0\right)+O\left(\epsilon^{2}\right),O \left(\epsilon^{r}\right),-\epsilon^{r}\right). \tag{3.16}\] The last two equations (3.15) and (3.16) now combine to give the theorem. _Remark 5_.: (Critical tangency) The path \(z\left(\epsilon\right)\) in (3.4) is particularly chosen to be critically tangent to the boundary. Namely its order of tangency with the boundary is the type \(r\left(x^{\ast}\right)\) of the boundary point \(x^{\ast}\in\partial D\) that is being approached. This order of tangency is critical in the sense that it is the maximum for which the expansion in (3.6) can be proved. As for a higher order of tangency (i.e., \(\rho\) having vanishing order higher than \(r\) at \(\epsilon=0\)), the terms in the symbolic expansion of \(a_{\alpha}\in\hat{\mathcal{S}}_{\frac{1}{r},\mathrm{cl}}^{1+\frac{2+w^{\prime} \alpha}{r},\alpha_{0}+\alpha_{3}}\) in (3.15) become increasing in order and not asymptotically summable. This means in particular, the double summation in (3.10) would be asymptotically non-summable along the path. 
A critically tangent path is necessary in our proof below since for such a path the leading coefficient (3.7) picks up information of the model Bergman kernel at the arbitrary tangent vector \(V\). For a path tangent at a lesser order, the leading coefficient only depends on the value of the model kernel \(B_{0}\) at the origin. ## 4. Analysis of the model kernel In Section 3, we introduced the model Bergman kernel \(B_{0}\), corresponding to a subharmonic, homogeneous polynomial \(p\left(x_{1},x_{2}\right)\). As we see from Theorem 4, it plays an important role in the asymptotics of the Bergman kernel \(K_{D}\) of \(D\). To prepare for the proof of Theorem 1, we need to further analyze this model Bergman kernel \(B_{0}\). For convenience, we will also write \(p\left(x_{1},x_{2}\right)\) as \(p\left(z_{1}\right)\), where \(z_{1}=x_{1}+ix_{2}\). ### Expansion of the model kernel and first few coefficients First we will work out the expansion of the model Bergman kernel \(B_{0},\) and compute the values of the first few coefficients in the expansion. As usual, for a smooth function \(f\) on \(\mathbb{C}_{z_{1}},\) we write \(f_{z_{1}}=\partial_{z_{1}}f=\frac{\partial f}{\partial z_{1}},\) and likewise for \(f_{\bar{z}_{1}}\) and \(f_{z_{1}\bar{z}_{1}}.\) **Proposition 6**.: _For any \(z_{1}\in\mathbb{R}^{2}\), with \(\Delta p\left(z_{1}\right)\neq 0\), the model Bergman kernel on diagonal satisfies the asymptotics_ \[\left[\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}B_{0} \right]\left(t^{\frac{1}{r}}z_{1}\right)=\frac{t^{1-\frac{2+|\alpha|}{r}}}{2 \pi}\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}\left[ \sum_{j=0}^{N}b_{j}t^{-j}+O\left(t^{-N-1}\right)\right] \tag{4.1}\] _for each \(N\in\mathbb{N}\) as \(t\to\infty\). Moreover, the first four terms in the asymptotics are given by_ \[b_{0} =4q;\quad b_{1}=q^{-2}Q;\quad b_{2}=\frac{1}{6}\partial_{z_{1}} \partial_{\bar{z}_{1}}\left[q^{-3}Q\right]; \tag{4.2}\] \[b_{3} =\frac{q}{48}\left\{\left[q^{-1}\partial_{z_{1}}\partial_{\bar{z }_{1}}\right]^{2}q^{-3}Q-q^{-4}Q\left[\partial_{z_{1}}\partial_{\bar{z}_{1}} \right]q^{-3}Q-q^{-1}\left[\partial_{\bar{z}_{1}}\left(q^{-3}Q\right)\right] \left[\partial_{z_{1}}\left(q^{-3}Q\right)\right]\right\};\] _where \(q\coloneqq\frac{1}{4}\Delta p=p_{z_{1}\bar{z}_{1}}\) and \(Q\coloneqq qq_{z_{1}\bar{z}_{1}}-q_{z_{1}}q_{\bar{z}_{1}}\) are defined in terms of the polynomial \(p.\)_ Proof.: The proof uses some rescaling arguments. Following [21, Sec. 4.1], we introduce the rescaling operator \(\delta_{t^{-\frac{1}{r}}}:\mathbb{C}\to\mathbb{C}\) given by \(\delta_{t^{-\frac{1}{r}}}\left(z_{1}\right)\coloneqq t^{-\frac{1}{r}}z_{1},t>0.\) Recall when introducing \(B_{0},\) we defined \(\bar{\partial}_{p}\coloneqq\partial_{\bar{z}_{1}}+\partial_{\bar{z}_{1}}p.\) The corresponding Kodaira Laplacian on functions \(\square_{p}=\bar{\partial}_{p}^{*}\bar{\partial}_{p}\) then gets rescaled to the operator \[\left(\delta_{t^{-\frac{1}{r}}}\right)_{*}\square_{p}=t^{-\frac{2}{r}}\square _{t}\] where \(\square_{t}\coloneqq\bar{\partial}_{t}^{*}\bar{\partial}_{t},\) and \(\bar{\partial}_{t}\coloneqq\partial_{\bar{z}_{1}}+t\left(\partial_{\bar{z}_{1 }}p\right)\). 
We pause to introduce two more Bergman type kernel functions that are defined similarly as \(B_{0}.\) Set \(H_{t,p}^{2}\coloneqq\left\{f\in L^{2}\left(\mathbb{C}_{z_{1}}\right)|\bar{ \partial}_{t}f=0\right\}\) and consider the \(L^{2}\) orthogonal projector \(B_{t}\) from \(L^{2}\left(\mathbb{C}_{z_{1}}\right)\) to \(H_{t,p}^{2}.\) Slightly abusing notation, we still denote the Schwartz kernel of this projector by \(B_{t}\). Next we define \(L_{t}^{2}\left(\mathbb{C}_{z_{1}}\right)\coloneqq\left\{f|e^{-tp}f\in L^{2} \left(\mathbb{C}_{z_{1}}\right)\right\}\), and denote by \(\mathcal{O}\left(\mathbb{C}_{z_{1}}\right)\) the space of entire functions on \(\mathbb{C}_{z_{1}}.\) Consider the \(L^{2}\) orthogonal projector \(B_{t}^{p}\) from \(L_{t}^{2}\left(\mathbb{C}_{z_{1}}\right)\) to \(L_{t}^{2}\left(\mathbb{C}_{z_{1}}\right)\cap\mathcal{O}\left(\mathbb{C}_{z_{1}}\right)\). We still write the Schwartz kernel of this projector as \(B_{t}^{p}\). A routine computation yields that the two kernels \(B_{t}\) and \(B_{t}^{p}\) are related by \[B_{t}\left(z_{1},z_{1}^{\prime}\right)=e^{-tp\left(z_{1}\right)-tp\left(z_{1}^{ \prime}\right)}B_{t}^{p}\left(z_{1},z_{1}^{\prime}\right), \tag{4.3}\] Moreover, \(B_{t}\) can be equivalently understood as the Bergman projector for the trivial holomorphic line bundle on \(\mathbb{C}\) with Hermitian metric \(h_{t}=e^{-tp}\). The curvature of this metric is \(t\underbrace{\left(2\partial_{z_{1}}\partial_{\bar{z}_{1}}p\right)}_{=\frac{1} {2}\Delta p}dz_{1}\wedge d\bar{z}_{1}\). Its eigenvalue is \(\Delta p\). In [15, Thm. 14], the Bergman kernel of \(B_{t}\) was related to the model via \[B_{0}\left(t^{\frac{1}{r}}z,t^{\frac{1}{r}}z^{\prime}\right)=t^{-\frac{2}{r}}B_ {t}\left(z,z^{\prime}\right). \tag{4.4}\] Furthermore, in its proof the following spectral gap property for \(\square_{t}\) was observed \[\operatorname{Spec}\left(\square_{t}\right)\subset\left\{0\right\}\cup\left[c _{1}t^{2/r}-c_{2},\infty\right)\] for some \(c_{1},c_{2}>0\). At a point \(z_{1}\in\mathbb{C}\), where \(\Delta p\left(z_{1}\right)\neq 0\), the asymptotics of \(B_{t}\left(z,z\right)\) as \(t\to\infty\) are thus the standard asymptotics for the Bergman kernel on tensor powers of a positive line bundle (cf. [14, Thm. 1.6]). There is an asymptotic expansion \[\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}B_{t}\left(z \right)=\frac{t}{2\pi}\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{ \alpha_{2}}\left[\sum_{j=0}^{N}b_{j}t^{-j}+O\left(t^{-N-1}\right)\right] \tag{4.5}\] for each \(N\in\mathbb{N}\) as \(t\to\infty\). The last two equations (4.4) and (4.5) combine to prove (4.1). It remains to compute the first four coefficients in (4.5). For that we will make use of (4.3), by which it suffices to find the corresponding coefficients in the expansion of \(B_{t}^{p}.\) The computations for the latter can be found in [7, (6.2) and Theorem 9]. In order to see the specialization of the formulas therein to the special case here, we note the Kahler metric \(g=\partial\bar{\partial}p\) with potential \(p\) has component \(g_{1\bar{1}}=q=\partial_{z_{1}}\partial_{\bar{z}_{1}}p\). The only non-zero Christoffel symbols are \(\overline{\Gamma_{11}^{1}}=\Gamma_{\bar{1}\bar{1}}^{\bar{1}}=q^{-1}\partial_{ \bar{z}_{1}}q\). Furthermore, the only non-zero components of the Riemannian, Ricci and scalar curvatures respectively are given by the following. Here we follow the convention of curvatures in [7, pp. 6], which may differ from that of some other papers by a negative sign. 
\[R_{1\bar{1}1\bar{1}}=\partial_{z_{1}}\partial_{\bar{z}_{1}}q-q^{-1}\left( \partial_{z_{1}}q\right)\left(\partial_{\bar{z}_{1}}q\right)=q^{-1}Q;\quad \mathrm{Ric}_{1\bar{1}}=q^{-2}Q;\quad R=q^{-3}Q.\] The corresponding Laplace operator \(L_{1}\) of [7, (2.10)] in our special context is given by \(L_{1}=q^{-1}\partial_{z_{1}}\partial_{\bar{z}_{1}}\). Bringing these specializations into [7, (6.2) and Theorem 9], a routine computation yields the values of \(b_{0}\), \(b_{1}\), \(b_{2}\) and \(b_{3}\). _Remark 7_.: Although we computed the values of \(b_{0},\cdots,b_{3}\) in Proposition 6, we will only use \(b_{3}\) in the proof of Theorem 1. ### Models with vanishing expansion coefficients Having shown that the model kernel \(B_{0}\left(t^{\frac{1}{t}}z_{1}\right)\) admits an asymptotic expansion at \(t\to\infty,\) we ask when the terms of the asymptotic expansion are eventually zero, or in other words, \(b_{j}=0\) for \(j\) sufficiently large. This is relevant to our theorem below. We prove the following somewhat surprising result which shows the vanishing of the third coefficient is already restrictive. As above, let \(p\left(x_{1},x_{2}\right)\) be a subharmonic and non-harmonic homogeneous polynomial of degree \(r\). **Theorem 8**.: _Suppose the third term \(b_{3}\) vanishes in the asymptotic expansion (4.1) of the model kernel \(B_{0}\) corresponding to \(p\). Then there exists some real number \(c_{0}>0\) such that \(q=c_{0}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}-1}.\) Here as before, \(q\coloneqq\frac{1}{4}\Delta p.\)_ To prove the theorem, we carry out some Hermitian analysis. For that, we start with a few definitions and lemmas. In the remainder of this subsection, we will write \(z\) instead of \(z_{1}\) for simplicity. **Definition 9**.: Let \(f\in\mathbb{C}[z,\zeta]\) be a polynomial of two variables. Fix \(a\in\mathbb{C}.\) Let \(k\in\mathbb{N}_{0}\) and \(\lambda\in\mathbb{C}.\) We say \(h\) is divisible by \((z+a\zeta)^{k}\) with coefficient \(\lambda,\) denoted by \(f\sim D_{a}(k,\lambda),\) if \(f(z,\zeta)=(z+a\zeta)^{k}\hat{f}(z,\zeta)\) for some \(\hat{f}\in\mathbb{C}[z,\zeta]\) with \(\hat{f}(-a,1)=\lambda.\) It is clear that if \(f\sim D_{a}(k,\lambda)\) with \(k\geq 1,\) then we have \(f\sim D_{a}(k-1,0).\) In the following, we say \(f\in\mathbb{C}[z,\zeta]\) is Hermitian if \(f(z,\bar{z})\) is real-valued for every \(z\in\mathbb{C}.\) **Lemma 10**.: _Let \(f\in\mathbb{C}[z,\zeta]\) be a nonconstant Hermitian homogeneous polynomial of two variables. Then there exist \(a\in\mathbb{C},k\geq 1\) and a nonzero \(\lambda\in\mathbb{C}\) such that \(f\sim D_{a}(k,\lambda).\) Moreover, if \(f\neq cz^{m}\zeta^{m}\) for every real number \(c\neq 0\) and integer \(m\geq 1\), then we can further choose \(a\neq 0.\)_ Proof.: Write \(d\) for the degree of \(f\). Since \(f\) is homogeneous, we have \[f(z,\zeta)=\zeta^{d}f(\frac{z}{\zeta},1). \tag{4.6}\] By assumption, \(f(\eta,1)\in\mathbb{C}[\eta]\) is nonconstant, for otherwise \(f(z,\zeta)\) is not Hermitian. Then by the fundamental theorem of algebra, write \[f(\eta,1)=c\eta^{m}\prod_{j=1}^{l}(\eta-a_{j})^{k_{j}}. \tag{4.7}\] Here \(c\in\mathbb{C}\) is nonzero, and \(m,l\geq 0\) and \(k_{j}\geq 1\) satisfy \(m+\sum_{j=1}^{l}k_{j}\leq d.\) Moreover, \(a_{j}^{\prime}\)s are distinct nonzero complex numbers. When \(l=0,\) the above equation is understood as \(f(\eta,1)=c\eta^{m}.\) By (4.6) and (4.7), we have \[f(z,\zeta)=cz^{m}\zeta^{n}\prod_{j=1}^{l}(z-a_{j}\zeta)^{k_{j}},\quad\text{ where}\quad n=d-m-\sum_{j=1}^{l}k_{j}. 
\tag{4.8}\] We first consider the case where \(l=0.\) In this case, \(f(z,\zeta)=cz^{m}\zeta^{n}.\) Since \(f\) is nonconstant and Hermitian, we must have \(c\in\mathbb{R},c\neq 0,\) and \(n=m\geq 1\). The conclusion of the lemma follows if we choose \(a=0,k=m\geq 1,\lambda=c\neq 0.\) We next assume \(l\geq 1.\) Then by (4.8), the conclusion of the lemma follows if we choose \(a=-a_{1}\neq 0,k=k_{1}\geq 1,\lambda=ca_{1}^{m}\prod_{j=2}^{l}(a_{1}-a_{j})^{k_ {j}}\neq 0\). This proves the first part of Lemma 10. Note if \(f\) is not a multiple of \(z^{m}\bar{\zeta}^{m}\) for any integer \(m,\) then it can only be the latter case, and this establishes the second part of Lemma 10. We next extend the above definition to rational functions. **Definition 11**.: Let \(g\in\mathbb{C}(z,\zeta)\) be a rational function. Write \(g=\frac{f_{1}}{f_{2}},\) where \(f_{1},f_{2}\in\mathbb{C}[z,\zeta]\) and \(f_{2}\neq 0.\) If \(f_{i}\sim D_{a}(k_{i},\lambda_{i}),1\leq i\leq 2,\) with \(k_{1},k_{2}\geq 0\) and \(\lambda_{2}\neq 0,\) then we say \(g\sim D_{a}(k_{1}-k_{2},\frac{\lambda_{1}}{\lambda_{2}}).\) Note that \(k_{1}-k_{2}\) could be negative. Note if \(g\in\mathbb{C}(z,\zeta)\) and \(g\sim D_{a}(k,\lambda),\) then we have \(g\sim D_{a}(k-1,0).\) We next make a few more observations. **Lemma 12**.: _If \(g\in\mathbb{C}(z,\zeta)\) and \(g\sim D_{a}(k,\lambda)\) for some \(a\in\mathbb{C}\), then the following hold:_ _(1) \(\partial_{z}g\sim D_{a}(k-1,k\lambda)\) and \(\partial_{\zeta}g\sim D_{a}(k-1,ak\lambda);\)_ _(2) \(\partial_{z}\partial_{\zeta}g\sim D_{a}(k-2,ak(k-1)\lambda).\)_ Proof.: Write \(g=\frac{f_{1}}{f_{2}}\) with \(f_{1},f_{2}\in\mathbb{C}[z,\zeta],f_{2}\neq 0.\) Write \(f_{i}=(z+a\zeta)^{k_{i}}h_{i}\) for \(1\leq i\leq 2,\) where \(h_{1},h_{2}\in\mathbb{C}[z,\zeta],k_{1},k_{2}\geq 0,k_{1}-k_{2}=k\) and \(h_{2}(-a,1)\neq 0,\frac{h_{1}(-a,1)}{h_{2}(-a,1)}=\lambda.\) A routine computation yields \[\partial_{z}g=\frac{f_{2}\partial_{z}f_{1}-f_{1}\partial_{z}f_{2}}{f_{2}^{2}}= \frac{(k_{1}-k_{2})(z+a\zeta)^{k_{1}+k_{2}-1}h_{1}h_{2}+(z+a\zeta)^{k_{1}+k_{2 }}(h_{2}\partial_{z}h_{1}-h_{1}\partial_{z}h_{2})}{(z+a\zeta)^{2k_{2}}h_{2}^{2}}.\] Then it is clear that \(\partial_{z}g\sim D_{a}(k-1,k\lambda)\). Similarly one can show \(\partial_{\zeta}g\sim D_{a}(k-1,ak\lambda).\) This finishes the proof of part (1). The conclusion in part (2) follows immediately from part (1). The statements in the next lemma follow from direct computations. We omit the proof. **Lemma 13**.: _Let \(g_{1},g_{2}\in\mathbb{C}(z,\zeta)\) and \(a\in\mathbb{C}\). Assume \(g_{i}\sim D_{a}(k_{i},\lambda_{i})\) for \(1\leq i\leq 2\) where \(k_{i}\in\mathbb{Z}\) and \(\lambda_{i}\in\mathbb{C}\), then the following hold:_ _(1) \(g_{1}g_{2}\sim D_{a}(k_{1}+k_{2},\lambda_{1}\lambda_{2});\)_ _(2) \(cg_{1}\sim D_{a}(k_{1},c\lambda_{1})\) for any complex number \(c;\)_ _(3) \(g_{1}+g_{2}\sim D_{a}(k_{1},\lambda_{1}+\lambda_{2})\) if \(k_{1}=k_{2};\) and \(g_{1}+g_{2}\sim D_{a}(k_{1},\lambda_{1})\) if \(k_{1}<k_{2};\)_ _(4) In addition assume \(\lambda_{2}\neq 0.\) Then \(\frac{g_{1}}{g_{2}}\sim D_{a}(k_{1}-k_{2},\frac{\lambda_{1}}{\lambda_{2}}).\)_ We are now ready to prove Theorem 8. Proof of Theorem 8.: Recall \(q=\partial_{z}\partial_{\bar{z}}p\) and \(Q=q(\partial_{z}\partial_{\bar{z}}p)-(\partial_{z}q)(\partial_{\bar{z}}q)\) are real polynomials in \(\mathbb{C}[z,\bar{z}]\). Note we can assume \(q\) is nonconstant, for otherwise the conclusion is trivial. 
We will identify \(p(z,\bar{z})\in\mathbb{C}[z,\bar{z}]\) with its complexification \(p(z,\zeta)\in\mathbb{C}[z,\zeta]\) (where we replace \(\bar{z}\) by a new variable \(\zeta\)). Moreover, since \(p(z,\bar{z})\) is real-valued, \(p(z,\zeta)\) is Hermitian. Likewise for \(q(z,\bar{z})\) and \(Q(z,\bar{z}).\) To establish Theorem 8, it suffices to show that \(q(z,\zeta)=c_{0}z^{m}\zeta^{m}\) for some integer \(m\geq 1\). Seeking a contraction, suppose the conclusion fails. Then by Lemma 10, we can find some complex numbers \(a\neq 0,\lambda\neq 0,\) and some integer \(k\geq 1\) such that \(q\sim D_{a}(k,\lambda).\) That is, we can write \(q(z,\zeta)=(z+a\zeta)^{k}h,\) where \(h\in\mathbb{C}[z,\zeta]\) and \(h(-a,1)=\lambda.\) A direct computation yields the following holds for some \(\hat{h}\in\mathbb{C}[z,\zeta].\) \[Q(z,\zeta)=-ak(z+a\zeta)^{2k-2}h^{2}+(z+a\zeta)^{2k-1}\hat{h}.\] Thus we have \(Q\sim D_{a}(2k-2,-ak\lambda^{2}).\) By assumption \(b_{3}\equiv 0.\) We multiply it by \(\frac{48}{q}\) and use the standard complexification to get \[\left[q^{-1}\partial_{z}\partial_{\zeta}\right]^{2}q^{-3}Q-q^{-4}Q\left[ \partial_{z}\partial_{\zeta}\right]q^{-3}Q-q^{-1}\left[\partial_{\zeta}\left( q^{-3}Q\right)\right]\left[\partial_{z}\left(q^{-3}Q\right)\right]=0. \tag{4.9}\] On the other hand, by Lemma 13, \(q^{3}\sim D_{a}(3k,\lambda^{3})\) and \(q^{-3}Q\sim D_{a}(-k-2,-\frac{ak}{\lambda}).\) Then by Lemma 12, \[\partial_{z}\left(q^{-3}Q\right)\sim D_{a}(-k-3,\frac{ak(k+2)}{\lambda});\quad \partial_{\zeta}\left(q^{-3}Q\right)\sim D_{a}(-k-3,\frac{a^{2}k(k+2)}{ \lambda}).\] Using the above and Lemma 13, we can compute the last term on the left hand side of (4.9): \[-q^{-1}\left[\partial_{\zeta}\left(q^{-3}Q\right)\right]\left[\partial_{z} \left(q^{-3}Q\right)\right]\sim D_{a}(-3k-6,-\frac{a^{3}k^{2}(k+2)^{2}}{ \lambda^{3}}).\] Similarly, we compute the first two terms on the left hand side of (4.9): \[\left[q^{-1}\partial_{z}\partial_{\zeta}\right]^{2}q^{-3}Q\sim D_{a}(-3k-6,- \frac{a^{3}k(k+2)(k+3)(2k+4)(2k+5)}{\lambda^{3}});\] \[-q^{-4}Q\left[\partial_{z}\partial_{\zeta}\right]q^{-3}Q\sim D_{a}(-3k-6,- \frac{a^{3}k^{2}(k+2)(k+3)}{\lambda^{3}}).\] Consequently, the left hand side of (4.9) equals to \(D_{a}(-3k-6,T),\) where \[T=-\frac{a^{3}k(k+2)}{\lambda^{3}}\left[k(k+2)+(k+3)(2k+4)(2k+5)+k(k+3)\right] \neq 0.\] This means the left hand side of (4.9) is nonzero, a contradiction. The proof is completed. ### The case \(p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}\) We next consider the particular case when \(p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}\) for \(c>0\) (recall \(r\) must be even). Here it becomes possible to compute the Bergman kernel \(B_{0}\) explicitly. 
**Theorem 14**.: _The model Bergman kernel corresponding to the homogeneous subharmonic polynomial \(p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}\) is given by_ \[B_{0}\left(z_{1},z_{1}^{\prime}\right)=\frac{re^{-\left[p\left(z_{1}\right)+p\left(z_{1}^{\prime}\right)\right]}c^{\frac{2}{r}}}{2\pi}G\left(c^{\frac{2}{r}}z_{1}\overline{z_{1}^{\prime}}\right),\quad\text{where}\tag{4.10}\] \[G\left(x\right)\coloneqq\sum_{\alpha=0}^{\frac{r}{2}-1}\frac{x^{\alpha}}{\Gamma\left(\frac{2(\alpha+1)}{r}\right)}+x^{\frac{r}{2}-1}e^{x^{\frac{r}{2}}}\left[\sum_{\alpha=0}^{\frac{r}{2}-1}\frac{\Gamma\left(\frac{2(\alpha+1)}{r}\right)-\Gamma\left(\frac{2(\alpha+1)}{r},x^{\frac{r}{2}}\right)}{\Gamma\left(\frac{2(\alpha+1)}{r}\right)}\right]\tag{4.11}\] _is given in terms of the incomplete gamma function \(\Gamma\left(a,u\right)\coloneqq\int_{u}^{\infty}t^{a-1}e^{-t}dt\), \(u>0\)._ Proof.: From the formulas \(\square_{p}=\bar{\partial}_{p}^{*}\bar{\partial}_{p}\) and \(\bar{\partial}_{p}\coloneqq\partial_{\bar{z}_{1}}+\partial_{\bar{z}_{1}}p=\partial_{\bar{z}_{1}}+\frac{cr}{4}z_{1}^{\frac{r}{2}}\bar{z}_{1}^{\frac{r}{2}-1}\), an orthonormal basis for \(\ker\left(\square_{p}\right)\) is easily found to be \[s_{\alpha}\coloneqq\left(\frac{1}{2\pi}\frac{r}{\Gamma\left(\frac{2(\alpha+1)}{r}\right)}c^{\frac{2(\alpha+1)}{r}}\right)^{1/2}z_{1}^{\alpha}e^{-p},\quad\alpha\in\mathbb{N}_{0}.\] Since \(B_{0}=\sum s_{\alpha}\overline{s_{\alpha}}\), we have \[B_{0}\left(z_{1},z_{1}^{\prime}\right)=\frac{re^{-\left[p\left(z_{1}\right)+p\left(z_{1}^{\prime}\right)\right]}}{2\pi}\sum_{\alpha\in\mathbb{N}_{0}}\frac{1}{\Gamma\left(\frac{2(\alpha+1)}{r}\right)}c^{\frac{2(\alpha+1)}{r}}\left(z_{1}\overline{z_{1}^{\prime}}\right)^{\alpha}. \tag{4.12}\] To compute the above in a closed form, consider the series \[F\left(y\right)\coloneqq\sum_{\alpha=0}^{\infty}\frac{y^{\frac{\alpha+1}{s}-1}}{\Gamma\left(\frac{\alpha+1}{s}\right)}=\sum_{\alpha=0}^{s-1}\frac{y^{\frac{\alpha+1}{s}-1}}{\Gamma\left(\frac{\alpha+1}{s}\right)}+\underbrace{\sum_{\alpha=s}^{\infty}\frac{y^{\frac{\alpha+1}{s}-1}}{\Gamma\left(\frac{\alpha+1}{s}\right)}}_{F_{0}\left(y\right)\coloneqq},\] for \(s=\frac{r}{2}\). Differentiating the second term in the series and using \(\Gamma\left(z+1\right)=z\Gamma\left(z\right)\) yields \(F_{0}^{\prime}\left(y\right)=F_{0}\left(y\right)+\sum_{\alpha=0}^{s-1}\frac{y^{\frac{\alpha+1}{s}-1}}{\Gamma\left(\frac{\alpha+1}{s}\right)}\) for \(y>0.\) This ODE can be solved (uniquely) with the boundary condition \(F_{0}\left(0\right)=0\) to give \[F_{0}\left(y\right)=e^{y}\left[\sum_{\alpha=0}^{s-1}\frac{\Gamma\left(\frac{\alpha+1}{s}\right)-\Gamma\left(\frac{\alpha+1}{s},y\right)}{\Gamma\left(\frac{\alpha+1}{s}\right)}\right] \tag{4.13}\] in terms of the incomplete gamma function. Thus in particular we have computed \(F\left(y\right)=y^{\frac{1}{s}-1}G\left(y^{\frac{1}{s}}\right)\), where \(G\) is as defined in (4.11). Finally we note from (4.12) that \[B_{0}\left(z,z^{\prime}\right)=\frac{re^{-\left[p\left(z_{1}\right)+p\left(z_{1}^{\prime}\right)\right]}c^{\frac{2}{r}}}{2\pi}x^{s-1}F\left(x^{s}\right),\] for \(x=c^{\frac{2}{r}}z_{1}\overline{z_{1}^{\prime}}\), completing the proof. ## 5. Proof of the main theorem In this section we finally prove Theorem 1. Proof of Theorem 1.: It suffices to show that \(D\) is strongly pseudoconvex, i.e. that the type is \(r=2\) at every boundary point, as thereafter one can apply Fu-Wong [11] and Nemirovski-Shafikov [23].
To this end, suppose \(x^{*}\in\partial D\) is a point on the boundary of type \(r=r\left(x^{*}\right)\geq 2\). By Proposition 3 and (2.5), under the assumption of Theorem 1, the Bergman kernel \(K=K_{D}\) of the domain satisfies the following Monge-Ampere equation inside \(D\). \[J\left(K\right)\coloneqq\det\begin{pmatrix}K&\bar{Z}K&\bar{W}K\\ ZK&\left(Z\bar{Z}-\left[Z,\bar{Z}\right]^{0,1}\right)K&\left(Z\bar{W}-\left[Z, \bar{W}\right]^{0,1}\right)K\\ WK&\left(W\bar{Z}-\left[W,\bar{Z}\right]^{0,1}\right)K&\left(W\bar{W}-\left[W, \bar{W}\right]^{0,1}\right)K\end{pmatrix}=\frac{9\pi^{2}}{2}K^{4}. \tag{5.1}\] Here we have used the orthonormal frame of \(T^{1,0}\mathbb{C}^{2}\) given by \(Z=\frac{1}{2}\left(U_{1}-iU_{2}\right)\), \(W=\frac{1}{2}\left(U_{0}-iU_{3}\right)\) defined prior to Theorem 4. Using (3.2) and (3.13), we compute the \(\left(0,1\right)\) components of the commutators above: \[\left[Z,\bar{Z}\right]^{0,1}= \left[-\Delta p\left(z_{1}\right)\frac{i}{2}\partial_{x_{3}}\right] ^{0,1}+O\left(-1\right)\] \[= \frac{\Delta p\left(z_{1}\right)}{2}\left(W-\bar{W}\right)^{0,1}+O \left(-1\right)\] \[= -\frac{\Delta p\left(z_{1}\right)}{2}\bar{W}+O\left(-1\right);\] \[\left[Z,\bar{W}\right]^{0,1}= O\left(-r\right);\quad\left[W,\bar{Z}\right]^{0,1}=O\left(-r\right); \quad\left[W,\bar{W}\right]^{0,1}=O\left(-2r+1\right).\] This allows us to compute the most singular term in the asymptotics of both sides of (5.1) as \(z\to x^{*}\) along the tangential path \(z\left(\epsilon\right)\) in (3.4). By Theorem 4, one obtains along \(z\left(\epsilon\right)\), \[J\left(K\right) =\left[\left(-2\rho\right)^{-2-\frac{2}{r}}\right]^{4}\left[ \det\begin{pmatrix}\tilde{B}_{0,0}&\partial_{\bar{z}_{1}}\tilde{B}_{0,0}& \tilde{B}_{0,1}\\ \partial_{z_{1}}\tilde{B}_{0,0}&\partial_{z_{1}}\partial_{\bar{z}_{1}}\tilde {B}_{0,0}+\left[\frac{\Delta p}{2}\right]\tilde{B}_{0,1}&\partial_{z_{1}} \tilde{B}_{0,1}\\ \tilde{B}_{0,1}&\partial_{\bar{z}_{1}}\tilde{B}_{0,1}&\tilde{B}_{0,2}\end{pmatrix} \left(z_{1,V}\right)+o_{\epsilon}\left(1\right)\right]\] \[K^{4} =\left[\left(-2\rho\right)^{-2-\frac{2}{r}}\right]^{4}\left[ \tilde{B}_{0,0}\left(z_{1,V}\right)^{4}+o_{\epsilon}\left(1\right)\right].\] Here we say a function \(\phi\) is \(o_{\epsilon}\left(1\right)\) if \(\phi(\epsilon)\) goes to \(0\) as \(\epsilon\to 0^{+}.\) (Recall \(\rho=-\epsilon^{r}\) along the path). Thus comparing the leading coefficients in the asymptotics gives the following equation \[\det\begin{pmatrix}\tilde{B}_{0,0}&\partial_{\bar{z}_{1}}\tilde{B}_{0,0}& \tilde{B}_{0,1}\\ \partial_{z_{1}}\tilde{B}_{0,0}&\partial_{z_{1}}\partial_{\bar{z}_{1}}\tilde {B}_{0,0}+\left[\frac{\Delta p}{2}\right]\tilde{B}_{0,1}&\partial_{z_{1}} \tilde{B}_{0,1}\\ \tilde{B}_{0,1}&\partial_{\bar{z}_{1}}\tilde{B}_{0,1}&\tilde{B}_{0,2}\end{pmatrix} \left(z_{1}\right)=\frac{9\pi^{2}}{2}\tilde{B}_{0,0}\left(z_{1}\right)^{4}, \tag{5.2}\] at each \(z_{1}\in\mathbb{R}^{2}\), for the model Bergman kernel. Here \(\tilde{B}_{0,\alpha_{0}}\) is as defined in (3.7). Finally, one chooses \(z_{1}\) such that \(\Delta p\left(z_{1}\right)\neq 0\) and substitutes \(z_{1}\mapsto t^{\frac{1}{r}}z_{1}\) in the last equation (5.2) above for the model. 
The terms involved in the above equation are then of the form \[\tilde{B}_{0,\alpha_{0}}\left(t^{\frac{1}{r}}z_{1}\right) =\frac{1}{\pi}\int_{0}^{\infty}e^{-s}s^{1+\frac{2}{r}+\alpha_{0}}B _{0}\left(s^{\frac{1}{r}}t^{\frac{1}{r}}z_{1}\right)ds\] \[=\frac{t^{-2-\frac{2}{r}-\alpha_{0}}}{\pi}\int_{0}^{\infty}e^{- \frac{\tau}{t}}\tau^{1+\frac{2}{r}+\alpha_{0}}B_{0}\left(\tau^{\frac{1}{r}}z_{1} \right)d\tau\] from the definition (3.7). Upon differentiation, using Proposition 6 and standard asymptotics for the Laplace transform of a classical symbol \(\tau^{1+\frac{2}{r}+\alpha_{0}}B_{0}\left(\tau^{\frac{1}{r}}z_{1}\right)\in S _{\tau,\mathrm{cl}}^{2+\alpha_{0}}\), the terms involved in (5.2) now have an asymptotic expansion \[\left[\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{\alpha_{2}}\tilde{B }_{0,\alpha_{0}}\right]\left(t^{\frac{1}{r}}z_{1}\right)=t^{1-\frac{2+|\alpha| }{r}}\left[\sum_{j=0}^{N+2+\alpha_{0}}c_{j}t^{-j}+\sum_{j=0}^{N}d_{j}t^{-(3+ \alpha_{0}+j)}\ln t+O\left(t^{-(3+\alpha_{0}+N)}\right)\right], \tag{5.3}\] \(\forall N\in\mathbb{N}\), as \(t\rightarrow\infty\). Furthermore, the leading logarithmic coefficient is \(d_{0}=\frac{1}{2\pi^{2}}\partial_{z_{1}}^{\alpha_{1}}\partial_{\bar{z}_{1}}^{ \alpha_{2}}b_{3+\alpha_{0}}\). The above allows us to compute the asymptotics of both sides of the equation (5.2) as \(t\rightarrow\infty\). In particular the right hand side of (5.2) is seen to contain the logarithmic term \[\frac{9\pi^{2}}{2}b_{3}^{4}\left(\frac{1}{2\pi^{2}}t^{-2-\frac{2}{r}}\ln t \right)^{4}\] in its asymptotic expansion. Such a term involving the fourth power of a logarithm is missing from the left hand side of (5.2). This particularly gives \(b_{3}=0\). Using Theorem 8, it now follows that \(q(z,\bar{z})=c_{0}(z_{1}\bar{z}_{1})^{\frac{r}{2}-1}\) for some \(c_{0}>0\). Since \(p\) has no purely holomorphic or anti-holomorphic terms in \(z_{1}\), this gives \(p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}\) for some \(c>0\). However, the model kernel \(B_{0}\) for this potential \(p=\frac{c}{2}\left(z_{1}\bar{z}_{1}\right)^{\frac{r}{2}}\) was computed in Theorem 14. Suppose \(r>2\). By Theorem 14 and the definition of \(\tilde{B}_{0,\alpha_{0}}\) in (3.7), \[\tilde{B}_{0,\alpha_{0}}\left(0\right) =\frac{1}{\pi}\Gamma\left(2+\frac{2}{r}+\alpha_{0}\right)B_{0} \left(0\right)=\frac{1}{2\pi^{2}}\Gamma\left(2+\frac{2}{r}+\alpha_{0}\right) \frac{r}{\Gamma\left(\frac{2}{r}\right)}c^{\frac{2}{r}};\] \[\left[\partial_{z_{1}}\tilde{B}_{0,\alpha_{0}}\right]\left(0 \right) =\left[\partial_{\bar{z}_{1}}\tilde{B}_{0,\alpha_{0}}\right]\left(0 \right)=0;\] \[\left[\partial_{z_{1}}\partial_{\bar{z}_{1}}\tilde{B}_{0,\alpha _{0}}\right]\left(0\right) =\frac{1}{\pi}\Gamma\left(2+\frac{4}{r}+\alpha_{0}\right)\left[ \partial_{z_{1}}\partial_{\bar{z}_{1}}B_{0}\right]\left(0\right)\] \[=\frac{1}{2\pi^{2}}\Gamma\left(2+\frac{4}{r}+\alpha_{0}\right) \frac{r}{\Gamma\left(\frac{4}{r}\right)}c^{\frac{4}{r}}.\] Plugging the above into (5.2) with \(z_{1}=0\), and noting \(\Delta p(0)=0\) as \(r>2\), we obtain a relation among the values above which, upon simplification using \(\Gamma\left(z+1\right)=z\Gamma\left(z\right)\), reduces to the equation \[\left(1+\frac{4}{r}\right)\left(2+\frac{2}{r}\right)=\frac{9}{4}\left(1+\frac {2}{r}\right)^{2}.\] Solving this quadratic equation yields \(r=2\), a plain contradiction. This finishes the proof. _Remark 15_.: Note that in our proof above, we compared the \(\left(t^{-2-\frac{2}{r}}\ln t\right)^{4}\) term on both sides of (5.2). 
For that, we only used the information of \(b_{3}\), which arises in the coefficient of the first \(\ln t\) term in the asymptotics for the model Bergman kernel (see (5.3)). The authors also compared the non-logarithmic terms on the two sides of (5.2), namely the \(\left(t^{-2-\frac{2}{r}}\right)^{4}\) and \(\left(t^{-2-\frac{2}{r}}\right)^{4}t^{-1}\) terms, whose calculations involve \(b_{0}\) and \(b_{1}\). These, however, only yielded tautologies and thus no contradiction. It is interesting to compare this with the proofs of Cheng's conjecture: in dimension 2, Fu-Wong [11] used the information of the logarithmic term in the Fefferman expansion of the Bergman kernel (3.1), while in higher dimensions, Huang and the second author [17] utilized the information of the non-logarithmic term (the principal singular term) in the expansion (3.1).
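The closed form in Theorem 14 and the final quadratic identity above lend themselves to a quick numerical check. The following sketch is not part of the original argument: the parameters \(r=4\), \(c=1\) and the test point are illustrative choices of ours. It compares the defining series (4.12) against \(c^{2/r}G(c^{2/r}z_{1}\overline{z_{1}^{\prime}})\) and confirms that \(r=2\) is the only positive root of the last displayed equation.

```python
# Hypothetical sanity check (not from the paper): Theorem 14's closed form vs. the
# series (4.12), and the final equation (1+4/r)(2+2/r) = (9/4)(1+2/r)^2.
from mpmath import mp, gamma, gammainc, exp, nsum, inf, mpf
from sympy import symbols, Eq, solve, Rational

mp.dps = 30
r, c, w = 4, mpf(1), mpf("0.7")     # illustrative: r even, c > 0, w = z1 * conj(z1')
s = r // 2

def G(x):
    # G from (4.11); gammainc(a, u) is the upper incomplete gamma function.
    head = sum(x**a / gamma(mpf(2 * (a + 1)) / r) for a in range(s))
    tail = sum((gamma(mpf(2 * (a + 1)) / r) - gammainc(mpf(2 * (a + 1)) / r, x**s))
               / gamma(mpf(2 * (a + 1)) / r) for a in range(s))
    return head + x**(s - 1) * exp(x**s) * tail

series = nsum(lambda a: c**(2 * (a + 1) / mpf(r)) * w**a / gamma(2 * (a + 1) / mpf(r)), [0, inf])
closed = c**(mpf(2) / r) * G(c**(mpf(2) / r) * w)
print(series, closed)               # the two values agree to working precision

rr = symbols("r", positive=True)
print(solve(Eq((1 + 4 / rr) * (2 + 2 / rr), Rational(9, 4) * (1 + 2 / rr)**2), rr))  # -> [2]
```

Both kernel values coincide (the common prefactor \(\tfrac{r}{2\pi}e^{-[p(z_{1})+p(z_{1}^{\prime})]}\) is dropped in this check), and the symbolic solve returns only \(r=2\), in line with the contradiction derived above.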
2304.09748
Reference-based Image Composition with Sketch via Structure-aware Diffusion Model
Recent remarkable improvements in large-scale text-to-image generative models have shown promising results in generating high-fidelity images. To further enhance editability and enable fine-grained generation, we introduce a multi-input-conditioned image composition model that incorporates a sketch as a novel modal, alongside a reference image. Thanks to the edge-level controllability using sketches, our method enables a user to edit or complete an image sub-part with a desired structure (i.e., sketch) and content (i.e., reference image). Our framework fine-tunes a pre-trained diffusion model to complete missing regions using the reference image while maintaining sketch guidance. Albeit simple, this leads to wide opportunities to fulfill user needs for obtaining the in-demand images. Through extensive experiments, we demonstrate that our proposed method offers unique use cases for image manipulation, enabling user-driven modifications of arbitrary scenes.
Kangyeol Kim, Sunghyun Park, Junsoo Lee, Jaegul Choo
2023-03-31T06:12:58Z
http://arxiv.org/abs/2304.09748v1
# Reference-based Image Composition with Sketch ###### Abstract Recent remarkable improvements in large-scale text-to-image generative models have shown promising results in generating high-fidelity images. To further enhance editability and enable fine-grained generation, we introduce a multi-input-conditioned image composition model that incorporates a sketch as a novel modal, alongside a reference image. Thanks to the edge-level controllability using sketches, our method enables a user to edit or complete an image sub-part with a desired structure (i.e., sketch) and content (i.e., reference image). Our framework fine-tunes a pre-trained diffusion model to complete missing regions using the reference image while maintaining sketch guidance. Albeit simple, this leads to wide opportunities to fulfill user needs for obtaining the in-demand images. Through extensive experiments, we demonstrate that our proposed method offers unique use cases for image manipulation, enabling user-driven modifications of arbitrary scenes. ## 1 Introduction Recent advancements in large-scale text-to-image studies employing diffusion models [10, 11, 13] have shown remarkable generative capabilities in synthesizing intricate images guided by textual input. Building upon these foundational generative models, various approaches have been developed to enhance editability, either through the modification of a forward scheme during the inference process [7] or by incorporating diverse modalities [8, 16, 17]. Notably, Paint-by-Example [16] proposes the utilization of a visual hint to mitigate the ambiguity arising from textual descriptions. This empowers users to manipulate object-level semantics leveraging a reference image. Our goal is to advance generative diffusion models by incorporating a partial sketch as a novel modal. Sketches have long served as an intuitive and efficient means for creating a user-intended drawings, and are widely employed by both artists and the general population. A key advantage of sketches compared to other models, such as text [10, 11, 13] and image [16], is to provide edge-level controllability by guiding the geometric structure during image synthesis. This feature enables users to achieve finer detailed generation and editing of images in comparison to textual descriptions and standalone visual hints. In practice, due to the significant utility of sketches in creating content in cartoons, this work focuses on the editing of cartoon scenes. In this work, we propose a multi-input-conditioned image composition framework capable of generating a result guided by a sketch and reference image. During generation, the sketch serves as a structure prior that determines the shape of the result within the target region. To achieve this, we train a diffusion model [11] to learn the completion of missing regions using the reference image, while maintaining sketch guidance. Furthermore, we suggest a _sketch plug-and-drop strategy_ during the inference phase, which grants the model a degree of flexibility to relax sketch guidance. The motivation behind this approach is to diminish the impact of overly simplified sketches (_e.g._, a single straight line for generating the clouds), and to make the model to accommodate a wide range of sketch types. Compared to existing frameworks, sketch-guided generation offers distinguishable use cases for image manipulation. Fig. 1 presents visual examples utilizing distinct reference images and sketches. 
In each row, the foreground and background have been modified separately by incorporating the provided conditions to fill in the target region. These examples highlight the effectiveness of the proposed method in enabling user-driven modifications of arbitrary scenes. ## 2 Methods ### Preliminaries **Latent Diffusion Model.** Recent text-to-image diffusion models such as LDM [11] apply a diffusion model training in the latent space of a pre-trained autoencoder for efficient text-to-image generation. Specifically, an encoder \(\mathcal{E}\) encodes \(\mathbf{x}\) into a latent representation \(z=\mathcal{E}(\mathbf{x})\), and a decoder \(\mathcal{D}\) reconstructs the image from \(z\). Here, \(\mathbf{x}\in\mathbb{R}^{3\times H\times W}\) indicates an input image, where \(H\) and \(W\) denotes height and width, respectively. Then, a conditional diffusion model \(\epsilon_{\theta}\) is trained with the following loss function: \[\mathcal{L}=\mathbb{E}_{\epsilon(\mathbf{x}),\mathbf{y},\epsilon\sim\mathcal{ N}(0,1),t}\left[\|\epsilon-\epsilon_{\theta}(z_{t},t,\text{CLIP}(\mathbf{y}))\|_{2}^{ 2}\right], \tag{1}\] where \(\mathbf{y}\) denotes a text condition that is fed to a CLIP [9] text encoder. \(t\) is uniformly sampled from \(\{1,...,T\}\), and \(z_{t}\) is a noisy version of the latent representation \(z\). Moreover, a latent diffusion model employs a time-conditioned U-Net as \(\epsilon_{\theta}\). To achieve the faster convergence of our method, we employ Stable Diffusion [11] as a strong prior. ### Proposed Approach #### 2.2.1 Training Phase **Problem Setup.** We aim to train a diffusion model that takes the following inputs. \(\mathbf{x_{p}}\in\mathbb{R}^{3\times H\times W}\) indicates an initial image, where \(H\) and \(W\) denotes height and width, respectively. Let \(\mathbf{m}\in\{0,1\}^{H\times W}\) denote a binary mask, where one indicates target editing regions, while zero means the regions to be preserved. Corresponding sketch image \(\mathbf{s}\in\{0,1\}^{H\times W}\) convey a structure information of masked region and a reference image \(\mathbf{x_{r}}\in\mathbb{R}^{3\times H^{\prime}\times W^{\prime}}\) is responsible for semantics inside the sketch. During training, the model fills the masked regions following the sketch-guided structure with the contents of the reference image. **Initialization.** During training, the model is responsible for generating the masked region following the sketch guidance and properly placing the reference image. It may be extra tasks for model, instead we opt to use previous work [16]'s trained weights as an initialization. By doing this, the model has a strong prior to bring the reference image on the masked region. The initialization makes the model achieve its objective at ease, by optimizing the initialized weights to follow the sketch guidance. We found that the model takes a longer time to converge without the strong prior. **Model Forward.** We take self-supervised training to train a diffusion model. For each iteration, the training batch consists of \(\{\mathbf{x_{p}},\mathbf{m},\mathbf{s},\mathbf{x_{r}}\}\), and the goal of the model is to properly produce the masked part \(\mathbf{m}\odot\mathbf{x_{p}}\). We _randomly_ generate a region of interest (RoI) as a bounding box, in which mask shape augmentation as previous work [16] is applied to simulate a drawn-like mask. 
On another branch, \(\mathbf{x_{r}}\) is generated by cropping and augmenting the RoI region, being successively fed to a CLIP [9] image encoder to make a condition \(\mathbf{c}\) for a diffusion model. Formally, it can be written as \(\mathbf{c}=\text{MLP}(\text{CLIP}(\mathbf{x_{r}}))\), where MLP is a simple feed-forward network to transform the output distribution to be properly adjusted as the condition of the diffusion model. For each diffusion step, the masked initial image, the sketch, and the previous step's result \(\mathbf{y_{t}}\) are concatenated and fed into the diffusion model. #### 2.2.2 Inference Phase **Sketch Plug-and-Drop Strategy.** Although the free-drawn sketch is a handy condition for a user, the model occasionally has difficulty in strictly keeping the outline structure. This is noticeable when it comes to generating scenery backgrounds such as clouds and snowy trees, where the boundaries are rather ambiguous. In these cases, a simple straight line may be inadequate though the user's burden can be minimized. In this respect, we add on a simple yet effective method, _sketch sketch plug-and-drop_, in which the infusion steps of the sketch condition are flexibly adjusted. **Self-reference Generation.** Sketch-guided generation can be used in various cases, such as manipulating the shapes of objects and changing the poses. When generating specific parts of an object, obtaining a suitable reference image is not trivial, because it is difficult to collect a harmonic image with a masked part. In practice, we found that using a certain part of the initial image alternatively is a reasonable way to get the reference image. ## 3 Experiments As a training and testing dataset, we utilized Danbooru [1] dataset. Due to the massive volume of the original dataset, we opt to use its subset to reduce the excessive training duration. The Danbooru dataset encompasses a wide variety of animated characters, exhibiting diverse artistic styles from numerous artists. We employed a recently released edge detection method [14] to extract the edges, subsequently binarizing the extracted edges. The training and testing datasets comprise 55,104 and 13,775 image-sketch pairs, respectively. For qualitative evaluation, we collect real-world cartoon scenes to showcase the potential of our work. The majority of these cartoon scenes were sourced from Naver Webtoon platform 1 and captured from Ghibli studio's movies 2. Footnote 1: [https://comic.naver.com/webtoon/weekday](https://comic.naver.com/webtoon/weekday) Footnote 2: [https://www.ghibli.jp/](https://www.ghibli.jp/) ### Comparisons with Baselines Baselines.To the best of our understanding, no prior research has proposed a multi-input-conditioned model with a diffusion model approach. Therefore, we implement two baselines to analyze our model in a qualitative and quantitative manner. In specific, we implement (1) Paint-by-T+S that uses a text-sketch pair instead of an example-sketch pair (2) Paint-by-E (xample) [16] to reveal the effectiveness of sketch guidance to complete a missing part of an image. One of our interests is to demonstrate the superiority of an example-sketch pair compared to other guidance. In the following experiments, we focus on unraveling the potential of such guidance by not only showing superb quantitative results but also providing multiple use cases of our model. All baselines and our model are trained with the same configuration. 
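Before turning to the quantitative comparison, the training step described in Sec. 2.2.1 can be summarized in the following illustrative PyTorch sketch. This is not the released implementation: the toy `UNet`, `ImageEncoder`, latent shapes, and noise schedule are stand-ins of ours; only the overall flow follows the text, namely the channel-wise concatenation of the noisy latent, the masked image, and the sketch, together with a CLIP-style reference embedding passed through an MLP, trained with the noise-prediction loss of Eq. (1).

```python
# Illustrative sketch of one training step (stand-in modules, not the authors' code).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):              # stand-in for a frozen CLIP image encoder
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))
    def forward(self, x):
        return self.net(x)

class UNet(nn.Module):                      # stand-in for the conditional denoiser
    def __init__(self, in_ch, cond_dim=512):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 4, 3, padding=1)
        self.cond = nn.Linear(cond_dim, 4)
    def forward(self, x, t, cond):          # timestep embedding omitted in this toy version
        return self.conv(x) + self.cond(cond)[:, :, None, None]

clip_img = ImageEncoder()
mlp = nn.Sequential(nn.Linear(512, 512), nn.GELU(), nn.Linear(512, 512))
unet = UNet(in_ch=4 + 4 + 1)                # noisy latent + masked-image latent + sketch

def training_step(z, z_masked, sketch, x_ref, T=1000):
    t = torch.randint(0, T, (z.shape[0],))
    noise = torch.randn_like(z)
    alpha_bar = torch.cos(t.float() / T * math.pi / 2)[:, None, None, None] ** 2  # toy schedule
    z_t = alpha_bar.sqrt() * z + (1 - alpha_bar).sqrt() * noise
    cond = mlp(clip_img(x_ref))             # c = MLP(CLIP(x_r))
    eps_hat = unet(torch.cat([z_t, z_masked, sketch], dim=1), t, cond)
    return F.mse_loss(eps_hat, noise)       # Eq. (1), with the reference image as condition

loss = training_step(torch.randn(2, 4, 32, 32), torch.randn(2, 4, 32, 32),
                     torch.rand(2, 1, 32, 32), torch.rand(2, 3, 224, 224))
loss.backward()
```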
For quantitative comparison, we use the averages of \(L_{1}\) and \(L_{2}\) errors between the initial and reconstructed images. We utilize Frechet inception distance (FID) [3] to evaluate the visual quality of the generated images. **Comparison Results.** Fig. 2 shows obvious differences in each input setting. Using a sole reference image is insufficient to make a good guess of the missing part, producing an aesthetically unappealing completion result (_2nd_ column of Fig. 2). On the other hand, simply feeding sketch input greatly improves visual quality by guiding the structure. Especially, unlike a text condition that generally contains information for the entire image, an exemplar image could be an efficient condition for filling the local context. Table 1 shows quantitative comparisons with baselines. As can be seen, Paint-by-E relatively performs worse than other models, because there is no explicit guidance about the structure within the masked region. Compared to it, both Paint-by-T+S and Paint-by-E+S exhibit superior performances thanks to the sketch conditions. Particularly, Paint-by-E+S approach demonstrates the most \begin{table} \begin{tabular}{c c c c} \hline \hline & \(L_{1}\)\(error\) & \(L_{2}\)\(error\) & \(FID\) \\ \hline Paint-by-E & 0.0866 & 0.0380 & 6.314 \\ Paint-by-T+S & 0.0851 & 0.0313 & 6.314 \\ Paint-by-E+S & **0.0680** & **0.2393** & **5.716** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparisons with the baselines Figure 4: Examples of local object shape editing applications. Figure 3: Examples for background scene editing. Figure 2: Qualitative comparisons with the baselines. exceptional performance, in conjunction with accompanying sketch and exemplar image. ### Application Scenarios In this section, we show multiple representative applications of our model for editing real-world cartoon scenes. Note that since a sketch has a variety of shapes, the applications are not limited to presenting examples. **Background Scene Editing.** Drawing background cartoon scenes is labor-intensive and time-consuming work. Hence, many scenes are cropped on purpose and reduce the author's effort to draw scenery parts. In response, our approach opens the way to complete and extend the cropped scenes, giving a chance to flexibly control a shape and semantics. Fig 3 shows that new continuous scenes have been successfully added to real-world cartoon ones. This enables the authors not to be dedicated to creating unimportant scenes. **Object Shape Editing.** Fig. 4 shows that our model's use case is to edit fine-detailed object shapes such as hairs and beards. As can be seen, a user can manipulate the structure of local regions by simply giving user-desirable sketches. This application is practically useful for generating numerous scenes that have different structures. **Object Changes.** Our model takes a reference image that is used to determine the in-context of a masked region. In this sense, a preferable reference image from a user serves to generate user-desirable images. As shown in Fig. 5, we can readily alter an upper cloth of a character by providing various references. Surprisingly, a texture or pattern of cloth is imported to the generated results as well as reference colors. ### Qualitative Analysis **Effect of Sketch Plug-and-Drop.** Fig. 6 presents the effect of the sketch plug-and-drop strategy. 
In this case, a user means to add a cloud to the sky, yet the sketch guidance is composed of straight lines that are not suitable for representing the detailed boundaries of a cloud. As a result, the synthesized cloud looks awkward, as seen in the last column of Fig. 6. On the other hand, reducing the number of time steps during which the sketch is infused is an effective workaround to relax an over-constrained sketch condition, leading to more natural results, as presented in the remaining columns of Fig. 6. **Effect of Sketch Boundary.** A sketch with multiple lines forms boundaries in an image, and we found that these boundaries act as pivotal points. As shown in Fig. 7, the two sketches produce different results; in particular, the straight line in the second sketch serves to determine the boundary of the clothes. ## 4 Discussions and Conclusions In this paper, we present a novel sketch-guided diffusion model. Our primary motivation lies in fully utilizing a _partial_ sketch and a reference image during the diffusion process to enable a user to control the structure of the output. With our model, a user can successfully generate and manipulate the targeted region, conditioned on a user-drawn sketch and a reference image. Throughout the generation process, the sketch offers structural guidance, while the reference image dictates the output's semantics. We demonstrated the utility of the proposed approach by showing various usage examples. Despite its effectiveness, our model can be further enhanced to provide a more user-friendly tool in practical scenarios. Given the consideration of multiple inputs, a user-centric system that facilitates seamless interaction between the user and the model needs to be explored. In future research, we plan to address this issue and devise a highly intuitive tool incorporating our model. Figure 5: Visual results of object change with reference images. Figure 6: Effect of the sketch plug-and-drop strategy. Figure 7: Effect of different sketches for object synthesis.
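For completeness, one plausible reading (ours, not necessarily the exact procedure used above) of the sketch plug-and-drop strategy of Sec. 2.2.2 is sketched below: the sketch channel is fed to the denoiser only during an initial fraction of the reverse-diffusion steps and is replaced by zeros afterwards, which relaxes an over-constrained sketch. The denoiser and the update rule are toy stand-ins.

```python
# Toy sketch of sketch plug-and-drop at inference time (stand-in denoiser and update).
import torch
import torch.nn as nn

unet = nn.Conv2d(4 + 4 + 1, 4, kernel_size=3, padding=1)   # stand-in conditional denoiser

@torch.no_grad()
def sample(z_masked, sketch, steps=50, sketch_steps=30):
    """Infuse the sketch condition only for the first `sketch_steps` reverse steps."""
    z_t = torch.randn_like(z_masked)
    for i in range(steps):
        s = sketch if i < sketch_steps else torch.zeros_like(sketch)  # drop the sketch late
        eps = unet(torch.cat([z_t, z_masked, s], dim=1))
        z_t = z_t - eps / steps              # toy update standing in for a DDIM/DDPM step
    return z_t

out = sample(torch.randn(1, 4, 32, 32), torch.rand(1, 1, 32, 32))
```

Lowering `sketch_steps` corresponds to weakening the structural constraint, which appears to be the knob varied across the columns of Fig. 6.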
2309.14083
High-order aberrations of vortex constellations
When reflected from an interface, a laser beam generally drifts and tilts away from the path predicted by ray optics, an intriguing consequence of its finite transverse extent. Such beam shifts manifest more dramatically for structured light fields, and in particular for optical vortices. Upon reflection, a field containing a high-order optical vortex is expected to experience not only geometrical shifts, but an additional splitting of its high-order vortex into a constellation of unit-charge vortices, a phenomenon known as topological aberration. In this article, we report on the first direct observation of the topological aberration effect, measured through the transformation of a vortex constellation upon reflection. We develop a general theoretical framework to study topological aberrations in terms of the elementary symmetric polynomials of the coordinates of a vortex constellation, a mathematical abstraction which we prove to be the physical quantity of interest. Using this approach, we are able to verify experimentally the aberration of constellations of up to three vortices reflected from a thin metallic film. Our work not only deepens the understanding of the reflection of naturally occurring structured light fields such as vortex constellations but also sets forth a potential method for studying the interaction of twisted light fields with matter.
Rafael Barros, Subhajit Bej, Markus Hiekkamäki, Marco Ornigotti, Robert Fickler
2023-09-25T12:22:34Z
http://arxiv.org/abs/2309.14083v1
# High-order aberrations of vortex constellations ###### Abstract When reflected from an interface, a laser beam generally drifts and tilts away from the path predicted by ray optics, an intriguing consequence of its finite transverse extent. Such beam shifts manifest more dramatically for structured light fields, and in particular for optical vortices. Upon reflection, a field containing a high-order optical vortex is expected to experience not only geometrical shifts, but an additional splitting of its high-order vortex into a constellation of unit-charge vortices, a phenomenon known as topological aberration. In this article, we report on the first direct observation of the topological aberration effect, measured through the transformation of a vortex constellation upon reflection. We develop a general theoretical framework to study topological aberrations in terms of the elementary symmetric polynomials of the coordinates of a vortex constellation, a mathematical abstraction which we prove to be the physical quantity of interest. Using this approach, we are able to verify experimentally the aberration of constellations of up to three vortices reflected from a thin metallic film. Our work not only deepens the understanding of the reflection of naturally occurring structured light fields such as vortex constellations but also sets forth a potential method for studying the interaction of twisted light fields with matter. _Introduction.--_ The reflection of light beams on interfaces is a quintessential problem in wave optics. In contrast to simple geometrical optics laws, where a ray of light reflects of a surface with the same incidence angle but on the opposing side of the surface normal, the reflection of beams in wave optics encompasses a more complex behavior. Generally, a beam can be mathematically described by a finite spectrum of plane waves. Upon reflection, each of these plane waves individually follows paths determined by geometrical optics [1]. However, the changes to the amplitudes and phases of the component plane waves result in a reflected field which is macroscopically different from the incident field. Perhaps the most well-known examples of such phenomena are optical beam shifts, which are spatial and angular deviations of reflected laser beams from their expected ray-optical trajectories [2; 3]. Such shifts are usually separated into Goos-Hanchen (GH) [4; 5] and Imbert-Fedorov (IF) [2; 6] shifts, which occur along and orthogonal to the plane of incidence, respectively. These effects have distinct physical origins: while the GH shifts come from the angular variation of the reflection coefficient of the interface, the IF shifts, also known as the spin-Hall effect of light [7], stem from the rotation of the plane of incidence experienced by elliptically polarized waves upon reflection. The GH and IF shifts have been extensively studied for different types of optical beams and interfaces in a range of contexts, as discussed in detail in Ref.[2]. A more subtle topic is the reflection of vortex beams [2; 8; 9; 10; 11; 12; 13]. Such beams are most well known for their ability to carry orbital angular momentum (OAM), which is accompanied by a twisted phase structure, i.e., a varying phase front transverse to the propagation direction as \(\exp(i\ell\phi)\), where \(\ell\) is an integer number and \(\phi\) labels the azimuthal angle [14; 15]. 
The wavefronts of such beams consist of \(|\ell|\) identical helicoids nested on the propagation axis, on which lies a strength-\(\ell\) optical vortex or phase singularity [16; 17]. Due to the OAM-induced mixing of the spatial and angular GH and IF shifts [18], vortex beams also acquire OAM sidebands upon reflection [19] and feature significant deformations in their intensity profiles [8; 9] at critical angles. Beyond beam shifts, Dennis and Gotte showed in their seminal work [20] that a pure strength-\(\ell\) optical vortex splits into a constellation of unit strength vortices by reflecting at a simple dielectric interface, which they recognized as a topological aberration effect. It is well known, however, that it is impossible to generate perfect higher-order vortices, as the latter are unstable under any kind of pertubation[21; 22; 23]. Therefore, a fundamental question remains open: how do vortex constellations, the actual observable physical objects, experience topological aberrations? In this article, we adress this question first by generalizing the framework developed in Ref. [20] to arbitrary uniform aberrations of vortex constellations, thereby addressing typical experimental limitations in the generation of vortex beams. We then show that the aberration of a vortex constellation is captured by a linear transformation of the elementary symmetric polynomials of its coordinates, and that this transformation is related to the angular Wirtinger derivatives of the aberration. Lastly, we detail the experimental observation of the topological aberration of vortex constellations under total internal reflection from a thin Au film-Fused Silica interface, where aberration effects are enhanced due to the resonant excitation of surface plasmons at the interface. With this method, we are able to verify the topological aberration using constellations with up to 3 vortices, in good agreement with the theoretical model developed. Due to the direct link between the vortex dynamics and the properties of the material upon which the light is reflected, our results could be applied to advanced material characterization techniques. Moreover, the underlying theoretical description of the vortex dynamics might also be applicable to other fields of physics such as Bose-Einstein condensates [24; 25], superfluids[26; 27], or topological field theories[28]. _Theory of topological aberrations.--_ We start by considering an arbitrary input field containing a constellation of \(\ell_{m}\) identical unit-strength vortices. In momentum space, the scalar part of such a field can be written as [20] \[\tilde{\psi}_{I}(\mathbf{\chi})=\sum_{\ell=0}^{\ell_{m}}\sigma_{\ell}(|\chi|)\, \left(\frac{\chi}{|\chi|}\right)^{\ell}\,, \tag{1}\] where \(\mathbf{\chi}=(\chi,\chi^{*})\), with \(\chi=(k_{x}+ik_{y})/\sqrt{2}\), and \((k_{x},k_{y})\) are Cartesian coordinates in momentum space. \(\sqrt{2}|\chi|=k_{\perp}=\sqrt{k_{x}^{2}+k_{y}^{2}}\) and \(\text{Arg}(\chi)=\alpha=\tan^{-1}(k_{y}/k_{x})\), where \(k_{\perp}\) and \(\alpha\) are the momentum radial and azimuthal coordinates, respectively. Equation (1) can be seen as a superposition of optical vortices with topological charges \(0\leq\ell\leq\ell_{m}\), whose background functions \(\sigma_{\ell}(|\chi|)\) determine both the OAM spectrum and the field's radial features. Furthermore, the constellation coordinates are encoded in the complex roots of the field (1), which is a polynomial of order \(\ell_{m}\) in \(\chi\). A field of the form given in Eq. (1), i.e. 
a constellation of vortices, is naturally obtained by attempting to generate a vortex of order \(\ell_{m}\) experimentally, due to inherent limitations in light-shaping devices and/or subsequent aberrations caused by mirrors, lenses, and other refractive elements. Nevertheless, for small aberrations, which is the case for a carefully assembled experiment, the contributions of \(\ell<\ell_{m}\) are small and the constellation is tightly confined to the beam's central propagation direction. In this case, we can approximate the field in Eq. (1) in real space to the lowest order of each OAM component, obtaining \[\psi_{I}(\mathbf{\xi})=\sum_{\ell=0}^{\ell_{m}}\bar{\sigma}_{\ell}\xi^{\ell}\,, \tag{2}\] where \(\mathbf{\xi}=(\xi,\xi^{*})\), with \(\xi=(x+iy)/\sqrt{2}\), and where \((x,y)\) are the Cartesian coordinates in real space. The coefficients \(\bar{\sigma}_{\ell}\), derived in detail in the Supplementary Material, represent the \(\ell\)-th order moments of the background functions \(\sigma_{\ell}(|\chi|)\). Upon a spatially uniform aberration such as the reflection from a flat interface, each plane wave component of momentum \(\mathbf{\chi}\) in the angular spectrum (1) is transformed as \(\exp(i\mathbf{\chi}\cdot\mathbf{\xi}^{*})\to R(\chi)\exp(i\chi\cdot\xi^{*})\), where \(R(\mathbf{\chi})\) is the momentum-dependent aberration function. In the case of reflection from a flat interface, the aberration function is the reflection coefficient given by the Fresnel coefficients, which also depends on the incident and measurement polarizations [2; 20]. The resulting field in real space can then be modeled as \[\psi_{R}(\mathbf{\xi})=\int d^{2}\mathbf{\chi}\tilde{\psi}_{I}(\mathbf{\chi})R(\mathbf{\chi} )\exp(i\mathbf{\chi}\cdot\mathbf{\xi}^{*})\,, \tag{3}\] which is a Fourier transform in the complex coordinates \(\mathbf{\xi}\) and \(\mathbf{\chi}\). Hence, we can expand the aberration in a power series and use the differentiation property of the Fourier transform to obtain \[\psi_{R}(\mathbf{\xi}) = \sum_{n=0}^{\infty}\sum_{m=0}^{n}i^{-n}C_{n}^{m}\frac{\partial_{ n}\psi_{I}(\mathbf{\xi})}{\partial\xi^{*m}\partial\xi^{n-m}}\,, \tag{4}\] \[C_{n}^{m} = \frac{1}{n!}\binom{n}{m}\frac{\partial_{n}R(\mathbf{\chi})}{\partial \chi^{m}\partial\chi^{*n-m}}\bigg{|}_{\chi,\chi^{*}=0}\,, \tag{5}\] where \(\partial/\partial\xi=\left(\partial/\partial x-i\partial/\partial y\right)/ \sqrt{2}\) and \(\partial/\partial\chi=\left(\partial/\partial k_{x}-i\partial/\partial k_{y} \right)/\sqrt{2}\) are Wirtinger derivatives [29]. From Eqs. (2) and (4), we arrive at the final expression for the aberrated field \[\psi_{R}(\mathbf{\xi})=\sum_{\ell=0}^{\ell_{m}}\bar{\gamma}_{\ell}\xi^{\ell}=\sum_ {\ell=0}^{\ell_{m}}\sum_{n=0}^{\ell}i^{-n}\bar{\sigma}_{\ell}n!\binom{\ell}{n} C_{n}^{0}\xi^{\ell-n}\,, \tag{6}\] where \(\bar{\gamma}_{\ell}\) are the coefficients of the vortex expansion of the aberrated field. Equation (6) shows that an aberration decomposes the incoming light field into a superposition of its Wirtinger derivatives, weighted by the Wirtinger derivatives of the aberration function. Interestingly, in the case of the vortex input field of Eq.(1) the derivative modes are simply optical vortices of lower order, in such a way that the aberrated field still contains a collection of \(\ell_{m}\) phase singularities, but in a deformed constellation, as we illustrate in Fig. 1. 
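As a concrete illustration of Eq. (6), the short sketch below maps the vortex-expansion coefficients \(\bar{\sigma}_{\ell}\) of an input field and the coefficients \(C_{n}^{0}\) of an aberration to the aberrated coefficients \(\bar{\gamma}_{\ell}\), and reads off both constellations as polynomial roots (anticipating the role of Vieta's theorem discussed next). The numerical values of \(\bar{\sigma}_{\ell}\) and \(C_{n}^{0}\) are made up purely for illustration; they are not tied to any physical interface.

```python
# Illustrative numerical sketch of Eq. (6); coefficient values are arbitrary examples.
import numpy as np
from math import comb, factorial

l_max = 3
sigma = np.array([0.02 + 0.01j, -0.05j, 0.1, 1.0])        # sigma_0 ... sigma_{l_max}, Eq. (2)
C = np.array([1.0, 0.08 - 0.03j, -0.01 + 0.02j, 0.002])   # C_n^0, Eq. (5) with m = 0

gamma = np.zeros(l_max + 1, dtype=complex)
for m in range(l_max + 1):
    for l in range(m, l_max + 1):
        n = l - m
        gamma[m] += (1j) ** (-n) * sigma[l] * factorial(n) * comb(l, n) * C[n]

# np.roots expects coefficients ordered from the highest power down.
print("input constellation    :", np.roots(sigma[::-1]))
print("aberrated constellation:", np.roots(gamma[::-1]))
```

The resulting change of the root cluster is precisely the kind of constellation deformation that is quantified below through the elementary symmetric polynomials.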
This implies that by monitoring the changes in a vortex constellation, one can gain insight into the properties of the aberration function, and thus into features of the light-matter interaction causing the aberration. _High-order aberrations of vortex constellations--_ Equation (6) establishes a direct correspondence between the coefficients of the vortex expansions of the incident and aberrated fields. However, the question still remains as to how the coordinates of the corresponding constellations compare. This problem is elegantly solved by Vieta's theorem [30], which we illustrate in the following. Consider an arbitrary constellation of \(\ell_{m}\) vortices with coordinates \((x_{j},y_{j})\), which yields a vortex expansion with roots \(\Delta_{j}=(x_{j}+iy_{j})/\sqrt{2}\) and coefficients \(c_{i}\) to be determined. According to Vieta's theorem we have that \[e_{j}=(-1)^{j}\frac{c_{\ell_{m}-j}}{c_{\ell_{m}}}\,,\quad 1<j\leq\ell_{m}\,, \tag{7}\] where \(e_{j}\) is the \(j\)-th order elementary symmetric polynomial (ESP) of the set of complex roots \(\{\Delta\}\). In terms of ESPs, equation (6) assumes the simple form \[\mathbf{e}_{R}=\hat{R}_{\chi}\mathbf{e}_{I}\,, \tag{8}\] where the vectors \(\mathbf{e}_{I}\) and \(\mathbf{e}_{R}\) contain the ESPs of the input and aberrated constellations, respectively, with the aberration operator \(\hat{R}_{\chi}\) acting on the \(\ell_{m}\)-dimensional subspace spanned by the ESPs. The explicit form for the operator \(\hat{R}_{\chi}\) is an upper triangular matrix containing the Wirtinger derivatives of the aberration function \(R(\mathbf{\chi})\) up to the order \(\ell_{m}\), as we detail in the Supplementary Material. We conclude from Eq. (8) that the aberration of a vortex constellation is fully captured by a linear transformation of its ESPs. Furthermore, an intuitive physical/geometrical interpretation of the ESP transformations is possible by means of Newton's identities [30], which relate the ESPs to power sums of a set of complex roots. For example, \(e_{1}\) is proportional to the constellation barycenter, whose transformation contains the GH and IF beam shifts of vortex beams [2; 3; 18; 20]. On the other hand, \(e_{2}\) and \(e_{3}\) give the second and third moments of the constellation positions with respect to the barycenter, which are related to the variance and the skewness of the constellation, respectively. For higher orders, the ESPs are sums of products of moments of the constellation positions that add up to that order, whose meaning we could not identify. For clarity, let us explicitly consider the case of \(\ell_{m}=2\). We choose the origin of the reference frame at the barycenter of the input constellation, such that the input field is \(\psi_{I}(\mathbf{\xi})\propto\xi^{2}+e_{I2}\xi^{0}\). From Eqs. (6) and (7), we obtain that \[e_{R1} = 2iR_{\chi}^{\prime}/R_{\chi}\Big{|}_{0}\,, \tag{9}\] \[e_{R2} = e_{I2}-R_{\chi}^{\prime\prime}/R_{\chi}\Big{|}_{0}\,, \tag{10}\] where \(R_{\chi}^{\prime}|_{0}\) and \(R_{\chi}^{\prime\prime}|_{0}\) are the first and second Wirtinger derivatives of \(R(\mathbf{\chi})\) at \(\chi=\chi^{*}=0\). Equation (9) shows the shift of the barycenter of the constellation by the known Artmann translator of a unit vortex [12; 18], which is the first order topological aberration of the constellation. In fact, this result, previously predicted only for perfect input vortices [20], holds regardless of the particular geometry or the order of a vortex constellation. 
Furthermore, equation (10), shows the second-order topological aberration, which amounts to the stretching of the input constellation and its rotation around the barycenter. Experiment.--To observe the aberration of vortex constellations, we use the experimental setup depicted in Fig.2. We use a fiber-coupled diode laser centered at the wavelength \(\lambda=810\,\mathrm{nm}\), operated below the lasing threshold and filtered by a narrow bandpass filter (Semrock, 3nm bandwidth). The beam is then polarized on a polarizing beam-splitter (PBS) and divided with a non-polarizing 50:50 beam-splitter (BS) into a probe beam, which we prepare with the desired constellation, and a reference beam, used afterwards for the generation of off-axis holograms. To prepare the probe beam, we shape it on a spatial light modulator (SLM, Holoeye Pluto 2.0) displaying a \(\exp(i\ell_{m}\phi)\) phase structure, which naturally leads to a constellation tightly confined to the beam center. We further tune the constellation by introducing wavefront aberrations by means of Zernike polynomials [31], ensuring that all the \(\ell_{m}\) vortices are clearly observable in our apparatus. As an additional precaution, we use a second PBS to prevent any undesired changes in polarization due to the SLM. Our aberrating device consists of a 35nm thick Au film assembled on the hypothenuse of a fused silica (FS) right-angle prism, in the so-called Kretschmann-Raether (KR) configuration [32; 33]. The Au film is deposited Figure 1: Conceptual picture of the topological aberration of a vortex constellation upon reflection. \(\vec{k}\) denotes a plane wave component of the incident vortex, while \(\vec{k}_{0}\) is the central wave vector in the beam. The plane of incidence is \(k_{x}k_{z}\), and the dashed lines show the ray optics trajectory of the central plane wave \(\vec{k}_{0}\), with the angle of incidence \(\theta_{0}\). The input and reflected constellations are marked with black crosses and orange circles, respectively. Figure 2: Experimental setup containing: BP, bandpass filter; ND, neutral density filter; M1, retroreflecting mirror; PBS, polarizing beam-splitter; BS, 50:50 non-polarizing beam-splitter; HWP1 and HWP2, true-zero order half-wave plates; SLM, spatial light modulator; CMOS, camera sensor. The inset shows a breakdown of our reflecting object, which consists of an Au film assembled on a FS substrate and optically contacted to a FS prism. on an FS substrate using electron beam-assisted evaporation and optically contacted to the prism with index-matching gel (Thorlabs G608N3). In the KR configuration, a P-polarized input beam can resonantly excite surface plasmons at the interface between FS and the Au film via attenuated total reflection (ATR), enhancing the angular gradients of the reflection coefficient, and hence the topological aberration imprinted on the reflected beam [34]. By measuring the power of the reflected S- and P-polarized probe beams as a function of the angle of incidence and using a transfer matrix model (see Supplementary Material) to numerically fit the results, we determine the resonant ATR angle and the effective permittivity of the Au film to be \(\theta_{ATR}\approx 45.02^{\circ}\) and \(\epsilon_{f}=-22.8456+1.2619i\), respectively. 
Furthermore, since only P-polarized light excites surface waves, we can use the reflected S-polarized field as a good approximation of the input field and consider the aberration function \(R(\mathbf{\chi})\) to be the fitted reflection coefficient for P-polarized light. To measure the constellation coordinates, we superpose the probe and reference beams off-axis and record the interference patterns with a CMOS camera (IDS xxx, 2.2 \(\mu\)m pixel size), from which both the intensity and phase profiles are digitally retrieved [35, 36]. It is worth noting that the measured constellations are affected by interference with stray light coming from the protective glass window on the CMOS sensor, the reflective layers of the beam-splitters, etc., which cannot be eliminated in post-processing due to the propagation direction being almost identical to the probe field. The laser was operated below the threshold in order to eliminate these unwanted interference effects, at the cost of requiring precise control of the path length difference between reference and probe beams. In our setup we match the path lengths with a translation stage, also ensuring negligible lensing effect on the recorded holograms. Furthermore, we dim the intensity of the reference beam with neutral density filters, improving the hologram visibility around the singularity positions and allowing better use of the CMOS dynamic range. Finally, we determine the constellation coordinates from the retrieved phase profiles using the algorithm detailed in the Supplementary Material. The uncertainty of the method was estimated to be 1/2 pixel irrespective of the hologram fringe density, which is the uncertainty we used to calculate the error bars shown in Fig.3. _Results.--_ We show in Fig 3 our experimental results for constellations with \(1\leq\ell_{m}\leq 3\). In Fig.3a we show the logarithmic Wirtinger derivative of the reflection coefficient, retrieved from the shifts in the barycenters of the constellations using Eqs. (6) and (9). We see that the retrieved derivatives accurately follow the theoretical curves simulated from the measured material parameters under the assumption \(R_{\chi}^{(n)}|_{0}\approx R_{k_{x}}^{(n)}|_{0}/2^{n/2}\). Such an approximation is reasonable in our case, since \(k_{x}k_{z}\) is the plane of incidence, and the constellations are prepared and measured in the same polarization. We note that, unlike the shifts in the center of mass for the vortex-carrying fields [12, 20], the measured barycenter shifts do not depend on the total topological charge \(\ell_{m}\). This result is a direct measurement of the first-order topological aberration of a vortex constellation, which, despite its strong correspondence to the GH and IF shifts, had not yet been observed. Furthermore, the barycenter shifts we report are greatly enhanced compared to those of dielectric interfaces, reaching more than 30 wavelengths in magnitude. A similar enhancement of the GH shift has been reported in [34] for Gaussian beams, which here we extend to vortex beams and out-of-plane (IF) shifts. Coming to the second-order topological aberration, in Fig. 3b we show the second Wirtinger derivative of the reflection coefficient, retrieved from the transformations in the second-order ESPs of the constellations with \(\ell_{m}=2\) and \(\ell_{m}=3\). The retrieved derivatives agree well with the theoretical simulations, although the experimental results show sharper features when compared to the theory. 
The geometrical meaning of the second-order topological aberration can also be seen in Fig. 3d, where we show the measured input and aberrated constellations for the 15 angles of incidence highlighted by an arrow in Fig. 3b. We see that near the resonant ATR angle, both the size and shape of the constellation change noticeably, in addition to the barycenter shifts following Fig. 3a. Furthermore, we note that the reflected S-polarized constellations do not change noticeably with the angle of incidence, supporting our assumption that they are good approximations of the input constellations. Lastly, in Fig. 3c we show the third Wirtinger derivative of the reflection coefficient, retrieved from the changes in the third-order ESPs of the \(\ell_{m}=3\) constellation. In this case, the experimental results diverge more substantially from the theoretical simulations, which we attribute to two possible sources of error. First, measuring constellations is increasingly challenging as the constellation order increases, since the intensity near the singularity positions decreases exponentially with \(\ell_{m}\). For \(\ell_{m}>2\), in our case, properly exposing the hologram near the singularity positions requires long exposure times, and hence the measurements are more susceptible to mechanical instabilities and noise. Second, the transfer matrix model that we used for the theoretical simulations does not include the roughness of the Au film surface. Due to these limitations, we did not measure the aberrations of constellations of even higher order. Nonetheless, analogously to before, we show in Fig. 3e the raw constellation coordinates for the set of angles highlighted by an arrow in Fig. 3c. Here, the deformation of the P-polarized constellations as well as the shift of the barycenter can be clearly seen. _Conclusion.--_ In this article, we developed a new framework to describe the topological aberration of vortex constellations and reported on the first direct observation of the phenomenon predicted more than a decade ago [20]. We showed that aberrations generally affect the elementary symmetric polynomials of a constellation's coordinates, from which the angular Wirtinger deriva tives of the aberration can be directly retrieved. We demonstrate the effect experimentally by measuring the topological aberrations of vortex constellations with up to 3 vortices upon attenuated total reflection, where aberration effects are enhanced by the resonant excitation of surface plasmon polaritons on the interface between a thin metallic film and a dielectric medium. Our results mark the first experimental observation of the topological aberration effect and introduce a new framework for probing light-matter interactions with twisted light. These topological aberration effects, when described with a precise theoretical model, will be a useful tool to derive the optical properties of metal-dielectric interfaces, thereby possibly simplifying current standard characterization techniques such as ellipsometry. In addition, because of the generality of our underlying theoretical framework describing the dynamics of vortex constellations, it will be interesting to apply our approach to other singular light fields featuring, for example, polarization singularities [17], vortex knots [37] and polarization knots [38]. 
Beyond light waves, we expect our work to inspire connections to other fields of physics where complex vortex constellations appear, such as Bose-Einstein condensates, superfluids [24; 25; 26; 27], and even topological field theories [28]. Finally, as structured light fields are seen as promising approaches to encode classical and quantum information, a better understanding and simplified description of the vortex dynamics will facilitate the reduction of errors caused by aberrations [39]. _Acknowledgements.--_ We thank Jorg Gotte and Mark Dennis for inspiring discussions at the ICOAM 2022. We also thank Matias Eriksson for the helpful comments on the data processing. R. B. acknowledges the support of the Academy of Finland through the postdoctoral researcher funding (decision 349120). M. H. acknowledges support from the Doctoral School of Tampere University, the Emil Aaltonen foundation, and the Magnus Ehrnrooth foundation through its graduate student scholarship. R. F. acknowledges the support of the Academy of Finland through the Academy Research Fellowship (decision 332399). All authors acknowledge the support of the Academy of Finland through the Competitive Funding to Strengthen University Research Profiles (decision 301820) and the support of the Photonics Research and Innovation Flagship (PREIN - decision 320165).
2310.00195
Exploring Strategies for Modeling Sign Language Phonology
Like speech, signs are composed of discrete, recombinable features called phonemes. Prior work shows that models which can recognize phonemes are better at sign recognition, motivating deeper exploration into strategies for modeling sign language phonemes. In this work, we learn graph convolution networks to recognize the sixteen phoneme "types" found in ASL-LEX 2.0. Specifically, we explore how learning strategies like multi-task and curriculum learning can leverage mutually useful information between phoneme types to facilitate better modeling of sign language phonemes. Results on the Sem-Lex Benchmark show that curriculum learning yields an average accuracy of 87% across all phoneme types, outperforming fine-tuning and multi-task strategies for most phoneme types.
Lee Kezar, Riley Carlin, Tejas Srinivasan, Zed Sehyr, Naomi Caselli, Jesse Thomason
2023-09-30T00:19:10Z
http://arxiv.org/abs/2310.00195v1
# Exploring Strategies for Modeling ###### Abstract Like speech, signs are composed of discrete, recombinable features called phonemes. Prior work shows that models which can recognize phonemes are better at sign recognition, motivating deeper exploration into strategies for modeling sign language phonemes. In this work, we learn graph convolution networks to recognize the sixteen phoneme "types" found in ASL-LEX 2.0. Specifically, we explore how learning strategies like multi-task and curriculum learning can leverage mutually useful information between phoneme types to facilitate better modeling of sign language phonemes. Results on the Sem-Lex Benchmark show that curriculum learning yields an average accuracy of 87% across all phoneme types, outperforming fine-tuning and multi-task strategies for most phoneme types. ## 1 Introduction Phonology can act as a low-level yet discrete feature space to help guide a language model's perception of language. This guidance is particularly attractive for computationally modeling signed languages, a task where accurate and reliable perception is fundamental but frequently muddied by insufficient data and a high degree of signer variation. From the perspective of phonology, however, the features of interest are significantly easier to learn. As the systematic components of signs, phonemes are by definition more abundant and less complex than whole signs. Meanwhile, the utility of phoneme recognition for understanding signed language is clear. [1] showed that leading models for isolated sign recognition (ISR) do not reliably encode sign language phonemes, but with supervision for phonemes alongside gloss, those models will be up to 9% more accurate at ISR. Moreover, the descriptive power of sign language phonology can readily extend to sign constructions not found in lexicons, like derivatives of signs (e.g. day vs. two-days) and classifier constructions (e.g. CL:drive-up-hill). Building on these observations, we focus on modeling sign language phonology as a task unto itself. We evaluate two learning strategies, multi-task and curriculum learning, on their ability to improve the recognition of American Sign Language (ASL) phonemes. Our experiments using the Sem-Lex Benchmark [2] to learn a graph convolution network reveal that learning phoneme types together (rather than separately) improves accuracy. We additionally show that curriculum learning, wherein the model is given structural priors related to phoneme types, is the most accurate method to date. ## 2 Related Work on Modeling Sign Language Phonology Several related works have explored models for sign language phonology, both as its own task and in relation to sign recognition, in a variety of ways. Perhaps the earliest effort to recognize sign language phonemes, [3] explores the use of nearest-neighbor classifiers for recognizing handshapes, palm orientations, locations, and movements, based on hand-crafted feature representations of the hands and body, such as "rotation values of the hand joints." Although they claim 85%-95% accuracy, the classifiers are trained and evaluated on synthetic sign recognition, raising concerns regarding their classifiers' ability to generalize to naturalistic signing. Later efforts to recognize SL phonemes would focus on designing neural architectures to replace the hand-crafted features with encodings. 
While [5], [6], and [7] improve sign recognition by more intentionally attending to the hands and mouth, one might describe their connection with language _phonetic_, as they are more closely associated with continuous input-level features than they are with discrete and symbolic representations. WLASL-LEX [8] is conceptually similar to the work presented here. This work compared four classification models for each of the 6 phoneme types found in ASL-LEX 1.0, learned with WL-ASL dataset. In contrast, the work presented here uses the Sem-Lex Benchmark [2], which contains 10 additional phoneme types (see Table 1 and approximately 300% more sign videos to learn from. Additionally, we explore learning strategies rather than model architectures. \begin{table} \begin{tabular}{l l r} \hline \hline \multicolumn{1}{c}{Phoneme Type} & \multicolumn{1}{c}{Description} & \multicolumn{1}{c}{\#Values} \\ \hline Major Location & The sign’s broad location. & 5 \\ Minor Location & The signs’s specific location. & 37 \\ Second Minor Loc. & The sign’s specific, secondary location. & 37 \\ \hline Contact & If the hand touches body. & 2 \\ Thumb Contact & If the thumb touches other fingers. & 3 \\ \hline Sign Type & Movement symmetry (if 2H) & 6 \\ Repeated Movement & If the movement is repeated. & 2 \\ \hline Path Movement & The shape that the hand traces. & 8 \\ Wrist Twist & If the hand rotates. & 2 \\ Spread & If the hand’s fingertips touch. & 3 \\ Flexion & The way the finger joints are bent. & 8 \\ Thumb Position & If the thumb is in/out. & 2 \\ Selected Fingers & Which fingers are salient to the sign. & 8 \\ Spread Change & If _Spread_ changes. & 3 \\ Nondom. Handshape & Configuration of the nondominant hand. & 56 \\ Handshape & Configuration of the dominant hand. & 58 \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of each phoneme types found in ASL-LEX 2.0, including the number of possible values. See [4] for a more detailed description of the types. ## 3 Methodology ### Task Description Brentari's Prosodic Model [9] organizes sign language phonology into a hierarchy of sixteen distinct phoneme types \(\mathcal{P}_{1\dots 16}\). We view learning each phoneme type \(\mathcal{P}_{i}\) as a classification task with \(K_{i}\) distinct classes, where a model takes as input a pose estimation video \(\mathbf{x}\) and predicts an output class \(y\in\{1,...,K_{i}\}\). ### Learning to Classify Phoneme Types with SL-GCN Following [1], we perform phoneme classification using an SL-GCN encoder [10]\(\mathcal{M}_{SL}\) to encode the pose estimation video. To classify phoneme type \(\mathcal{P}_{i}\), a linear classification layer \(\theta_{i}\) maps the encoding to a probability distribution \(p(y|\mathbf{x};\mathcal{M}_{SL},\theta_{i})\) over the \(K_{i}\) output classes of that phoneme type. The cross-entropy loss with ground-truth label \(\mathbf{y}_{i}\) is minimized over training dataset \(\mathcal{D}\): \[\min_{\mathbf{x},\mathbf{y}_{i}\sim\mathcal{D}}\mathcal{L}_{CE}\Big{(}\mathbf{ y}_{i},\;\;p(y|\mathbf{x};\mathcal{M}_{SL},\theta_{i})\Big{)} \tag{1}\] ### Multi-task Learning of Phoneme Types Training separate models for each phoneme type misses an opportunity to leverage shared knowledge across phoneme types. To this end, the first strategy we explore is multi-task learning of phoneme types, where individual classification layers for each of the 16 phoneme types are trained simultaneously. 
All 16 phoneme type classifiers \(\theta_{1\dots 16}\) are learned jointly using video encodings from a shared SL-GCN encoder. \[\min_{\mathbf{x},\mathbf{y}_{1\dots 16}\sim\mathcal{D}}\sum_{i=1}^{16} \mathcal{L}_{CE}\Big{(}\mathbf{y}_{i},\;\;p(y|\mathbf{x};\mathcal{M}_{SL}, \theta_{i})\Big{)} \tag{2}\] Figure 1: We explore multi-task and curriculum learning to improve modeling of sign language phonology by sharing knowledge across phoneme types. ### Curriculum Learning of Phoneme Types While multi-task learning allows the model to implicitly share knowledge across phoneme types, there is no structural prior or inductive bias that regulates how the knowledge is shared. Controlling the order in which phoneme types are introduced might introduce such a structural prior. For instance, learning to locate the hands first can help us identify the type of hand movement better. To decide this order, we follow two principles: earlier types should be "easier" than later types, and the knowledge of earlier types should reduce the entropy of later types. Because Brentari's Prosodic Model is hierarchical--phoneme types have children and/or parent types--the most sensible way to follow these principles is to start with "leaf" phoneme types (those which have no children and fewer values) and moving up towards broader, more holistic phoneme types. For example, Handshape has children types Flexion, Selected Fingers, et al. Ergo, learning the more specific children types before Handshape is both easier (in terms of number of values possible values) and reduces the entropy of Handshape. The resulting curriculum is shown in the ordering of Table 1, starting with Major Location and ending in Handshape. We perform curriculum learning by introducing phoneme types into the learning objective cumulatively. We begin training by only learning phoneme type \(\mathcal{P}_{1}\), and introduce a new phoneme type \(\mathcal{P}_{k}\) into the learning objective every \(e\) epochs. For the final \(e\) epochs, model training is identical to multi-task learning of all 16 phoneme types \(\mathcal{P}_{1\dots 16}\). \[\text{Step }k:\min_{\mathbf{x},\mathbf{y}_{1\dots k}\sim\mathcal{D}}\sum_{i=1}^{ k}\mathcal{L}_{CE}\Big{(}\mathbf{y}_{i},\,\,\,p(y|\mathbf{x};\mathcal{M}_{SL}, \theta_{i})\Big{)} \tag{3}\] ## 4 Data and Experimental Setup To evaluate our method, we use the Sem-Lex Benchmark [2], which contains 65,935 isolated sign videos annotated by humans with both gloss and ASL-LEX phoneme types. This dataset was collected from deaf, fluent signers who gave informed consent and received financial compensation. We use the train partition (\(n=51,029\)) gloss labels to pre-train the SL-GCN model to recognize gloss only and use this as the base model to fine-tune for phonological feature recognition. For multi-task learning, we use a cosine-annealing learning rate and train for 100 epochs, at which point the validation accuracy plateaus. For curriculum learning, we follow the same procedure but with \(e=20\) between the introduction of a new phoneme type. Models are implemented in PyTorch, largely building on the OpenHands framework [11], and trained on four Nvidia 3090 GPUs. Our code can be found at [https://github.com/leekezar/Modeling-ASL-Phonology/](https://github.com/leekezar/Modeling-ASL-Phonology/). ## 5 Results and Discussion The top-1 accuracies for each phoneme type across methods are shown in Table 2. 
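For reference, the multi-task and curriculum objectives compared below (Eqs. (2)-(3)) can be summarized in the following schematic PyTorch sketch; the toy encoder, input shape, and class counts are illustrative stand-ins of ours, not the OpenHands/SL-GCN implementation used in the experiments.

```python
# Schematic sketch of the multi-task and curriculum losses (stand-in encoder).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = [5, 37, 37, 2, 3, 6, 2, 8, 2, 3, 8, 2, 8, 3, 56, 58]  # Table 1 ordering

class PhonemeModel(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())  # stand-in for SL-GCN
        self.heads = nn.ModuleList(nn.Linear(feat_dim, k) for k in NUM_CLASSES)
    def forward(self, pose):
        h = self.encoder(pose)
        return [head(h) for head in self.heads]

def loss_fn(logits, labels, epoch, e=20, curriculum=True):
    # Curriculum (Eq. 3): one more phoneme type enters the objective every `e` epochs;
    # with curriculum=False this reduces to the multi-task loss of Eq. (2).
    active = min(epoch // e + 1, len(logits)) if curriculum else len(logits)
    return sum(F.cross_entropy(logits[i], labels[:, i]) for i in range(active))

model = PhonemeModel()
pose = torch.randn(4, 30, 27, 3)                # a toy batch of pose-estimation clips
labels = torch.stack([torch.randint(0, k, (4,)) for k in NUM_CLASSES], dim=1)
loss = loss_fn(model(pose), labels, epoch=45)   # epochs 40-59 train the first three types
loss.backward()
```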
Overall, the three methods are effective at learning the phonological features in Sem-Lex, with an overall accuracy of 85.9%. This outperforms WLASL-LEX [8] across its six phoneme types by 5.9-20.9%. From these results, we glean the following conclusions: * **Phoneme types co-occur.** There is a relatively small difference of 0.8% between learning the entire model for each phoneme type individually (fine-tune) vs. learning them all at once (multi-task). This indicates that the value of \(\mathcal{P}_{i}\) informs the value of \(\mathcal{P}_{j}\) to such an extent that it overcomes the challenges associated with learning many tasks simultaneously. * **Inductive priors help.** The slight but consistent improvement imbued by the curriculum shows that, in addition to co-occurrence (captured by the multi-task strategy), there exist structural priors in the form of hierarchical relationships. In other words, the information gain is minimized (i.e. \(\mathcal{P}_{i}\) is least surprising) when more fine-grained phoneme types are learned _after_ coarse-grained ones. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Phoneme Type**} & \multicolumn{3}{c}{**Learning Method**} & Type \\ \cline{2-5} & Fine-Tune & Multitask & Curriculum & Average \\ \hline Major Location & 87.7 & 87.5 & **89.1** & 88.1 \\ Minor Location & 79.2 & 78.1 & **80.7** & 79.3 \\ Second Minor Location & 78.7 & 77.2 & **80.9** & 78.9 \\ Contact & 89.3 & 88.6 & **91.1** & 89.7 \\ Thumb Contact & 91.7 & 91.1 & **92.1** & 91.6 \\ Sign Type & 88.9 & 87.9 & **89.4** & 88.7 \\ Repeated Movement & 85.5 & 85.4 & **87.3** & 86.1 \\ Path Movement & 75.6 & 75.4 & **79.6** & 76.9 \\ Wrist Twist & 92.4 & 92.6 & **93.5** & 92.8 \\ Selected Fingers & **91.1** & 90.2 & 90.6 & 90.6 \\ Thumb Position & 91.5 & 91.5 & **91.8** & 91.6 \\ Flexion & 81.2 & 81.0 & **83.2** & 81.8 \\ Spread & 88.4 & 88.0 & **88.8** & 88.4 \\ Spread Change & 90.3 & 89.5 & **90.4** & 90.1 \\ Nondominant Handshape & **83.5** & 81.7 & 83.2 & 82.8 \\ Handshape & **77.4** & 74.7 & 76.9 & 76.3 \\ \hline Method Average & 85.8 & 85.0 & **86.8** & 85.9 \\ \hline \hline \end{tabular} \end{table} Table 2: Phoneme recognition top-1 accuracy (%) across the proposed methods, evaluated on Sem-Lex (test). All models are pre-trained to predict sign gloss. Conclusion In this work, we provide empirical evidence that modeling sign language phonology is a complex task which benefits from special attention to linguistic theory. By learning models from high-quality, specialized data which reflect phonological features in sign language, we show that phonemes exhibit both co-occurrence and hierarchical relationships. Future work will compare varied curricula, explore the capacity of phonemes to describe a variety of sign constructions, and assess any biases associated with race and gender.
2310.20245
Finding a Maximum Restricted $t$-Matching via Boolean Edge-CSP
The problem of finding a maximum $2$-matching without short cycles has received significant attention due to its relevance to the Hamilton cycle problem. This problem is generalized to finding a maximum $t$-matching which excludes specified complete $t$-partite subgraphs, where $t$ is a fixed positive integer. The polynomial solvability of this generalized problem remains an open question. In this paper, we present polynomial-time algorithms for the following two cases of this problem: in the first case the forbidden complete $t$-partite subgraphs are edge-disjoint; and in the second case the maximum degree of the input graph is at most $2t-1$. Our result for the first case extends the previous work of Nam (1994) showing the polynomial solvability of the problem of finding a maximum $2$-matching without cycles of length four, where the cycles of length four are vertex-disjoint. The second result expands upon the works of B\'{e}rczi and V\'{e}gh (2010) and Kobayashi and Yin (2012), which focused on graphs with maximum degree at most $t+1$. Our algorithms are obtained from exploiting the discrete structure of restricted $t$-matchings and employing an algorithm for the Boolean edge-CSP.
Yuni Iwamasa, Yusuke Kobayashi, Kenjiro Takazawa
2023-10-31T08:05:34Z
http://arxiv.org/abs/2310.20245v1
# Finding a Maximum Restricted \(t\)-Matching via Boolean Edge-CSP ###### Abstract The problem of finding a maximum \(2\)-matching without short cycles has received significant attention due to its relevance to the Hamilton cycle problem. This problem is generalized to finding a maximum \(t\)-matching which excludes specified complete \(t\)-partite subgraphs, where \(t\) is a fixed positive integer. The polynomial solvability of this generalized problem remains an open question. In this paper, we present polynomial-time algorithms for the following two cases of this problem: in the first case the forbidden complete \(t\)-partite subgraphs are edge-disjoint; and in the second case the maximum degree of the input graph is at most \(2t-1\). Our result for the first case extends the previous work of Nam (1994) showing the polynomial solvability of the problem of finding a maximum \(2\)-matching without cycles of length four, where the cycles of length four are vertex-disjoint. The second result expands upon the works of Berczi and Vegh (2010) and Kobayashi and Yin (2012), which focused on graphs with maximum degree at most \(t+1\). Our algorithms are obtained from exploiting the discrete structure of restricted \(t\)-matchings and employing an algorithm for the Boolean edge-CSP. **Keywords** Polynomial algorithm, \(C_{k}\)-free \(2\)-matching, Jump system, Boolean edge-CSP ## 1 Introduction The matching problem and its generalizations have been among the most fundamental topics in combinatorial optimization, and have been the subject of a large number of studies. A typical generalization of a matching is a _\(t\)-matching_ for an arbitrary positive integer \(t\): an edge subset \(M\) in a graph is a \(t\)-matching1 if each vertex is incident to at most \(t\) edges in \(M\). Footnote 1: Such an edge set is sometimes called a _simple \(t\)-matching_ in the literature, but we omit the adjective “simple” because in this article a \(t\)-matching is always an edge subset and we never put multiplicities on the edges. While the problem of finding a \(t\)-matching of maximum cardinality can be solved in polynomial time by a matching algorithm, the problem becomes much more difficult, typically NP-hard, when additional constraints are imposed. The constraint discussed in this paper is the exclusion of certain subgraphs. Let \(G=(V,E)\) be a graph and let \(\mathcal{K}\) be a family of subgraphs of \(G\). For a subgraph \(K\) of \(G\), let \(V(K)\) and \(E(K)\) denote the vertex set and the edge set of \(K\), respectively. **Definition 1.1**.: An edge subset \(M\subseteq E\) is _\(\mathcal{K}\)-free_ if \(E(K)\not\subseteq M\) for any \(K\in\mathcal{K}\). \(\blacksquare\) The problem formulated below is the central issue in this paper, whose relevance will be described in detail in Section 1.1. Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem Given a graph \(G=(V,E)\) and a family \(\mathcal{K}\) of subgraphs of \(G\), find a \(\mathcal{K}\)-free \(t\)-matching \(M\subseteq E\) of maximum cardinality. Our primary contributions are the following two theorems, showing the polynomial solvability of certain classes of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem. The first result concerns the case where \(\mathcal{K}\) is an edge-disjoint family of \(t\)_-regular complete partite subgraphs_ of \(G\). While we defer the definition to Section 2.1, here we remark that a complete graph \(K_{t+1}\) and a complete bipartite graph \(K_{t,t}\) are examples of a \(t\)-regular complete partite graph.
**Theorem 1.2**.: _For a fixed positive integer \(t\), Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem can be solved in polynomial time if all the subgraphs in \(\mathcal{K}\) are \(t\)-regular complete partite and pairwise edge-disjoint._ In the second result, instead of the edge-disjointness of the subgraphs in \(\mathcal{K}\), we assume that the maximum degree of the input graph \(G\) is bounded. **Theorem 1.3**.: _For a fixed positive integer \(t\), Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem can be solved in polynomial time if all the subgraphs in \(\mathcal{K}\) are \(t\)-regular complete partite and the maximum degree of \(G\) is at most \(2t-1\)._ Theorems 1.2 and 1.3 offer larger polynomially solvable classes of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem than the previous work introduced in Section 1.1 below. In addition, we will describe the relevance of Theorems 1.2 and 1.3 to the literature, together with their extensions and variants in the subsequent sections. Here we just remark that the assumption on the complete partiteness of the forbidden subgraphs in Theorems 1.2 and 1.3 is unavoidable, because the problem is NP-hard without this assumption (see Proposition 1.4 below). ### Previous Work on Restricted \(t\)-Matchings Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem has its origin in the case where \(t=2\) and \(\mathcal{K}\) is composed of short cycles. Let \(k\) be a positive integer. If \(\mathcal{K}\) is the set of all cycles of length at most \(k\), then a \(\mathcal{K}\)-free \(2\)-matching is referred to as a _\(C_{\leq k}\)-free \(2\)-matching_, and Maximum \(\mathcal{K}\)-Free \(2\)-Matching Problem as the _\(C_{\leq k}\)-free \(2\)-matching problem_. Similarly, if \(\mathcal{K}\) is the set of all cycles of length exactly \(k\), then a \(\mathcal{K}\)-free \(2\)-matching is referred to as a _\(C_{k}\)-free \(2\)-matching_, and Maximum \(\mathcal{K}\)-Free \(2\)-Matching Problem as the _\(C_{k}\)-free \(2\)-matching problem_. The \(C_{\leq k}\)-free and \(C_{k}\)-free \(2\)-matching problems have attracted significant attention because of their relevance to the Hamilton cycle problem; for \(k\geq|V|/2\), a \(C_{\leq k}\)-free \(2\)-matching of cardinality \(|V|\) is a Hamilton cycle. When \(k\) is small, the \(C_{\leq k}\)-free \(2\)-matching problem is not directly used to find Hamilton cycles, but it can be applied to designing approximation algorithms for related problems such as the graph-TSP and the minimum \(2\)-edge-connected spanning subgraph problem. For example, in a recent paper [26], an approximation algorithm for the minimum \(2\)-edge-connected spanning subgraph problem is provided using a maximum \(C_{\leq 3}\)-free \(2\)-matching. The complexity of the \(C_{\leq k}\)-free \(2\)-matching problem depends on the value of \(k\). It is straightforward to see that this problem can be solved in polynomial time for \(k\leq 2\). For \(k=3\), Hartvigsen [14] gave a polynomial-time algorithm for the \(C_{\leq 3}\)-free \(2\)-matching problem. For \(k\geq 5\), Papadimitriou proved the NP-hardness of the \(C_{\leq k}\)-free \(2\)-matching problem (see [8]). For the case \(k=4\), it is open whether the \(C_{\leq 4}\)-free and \(C_{4}\)-free \(2\)-matching problems can be solved in polynomial time, and these problems have rich literature of polynomial-time algorithms for several special cases. 
First, for subcubic graphs, i.e., graphs with maximum degree at most three, polynomial-time algorithms for the \(C_{4}\)-free and the \(C_{\leq 4}\)-free \(2\)-matching problems were given by Berczi and Kobayashi [3] and Berczi and Vegh [4], respectively. Simpler algorithms for both problems in subcubic graphs (and for some of their weighted variants) were designed by Hartvigsen and Li [16] and by Paluch and Wasylkiewicz [36]. It is worth noting that a connection between the \(C_{4}\)-free matching problem and a connectivity augmentation problem is highlighted in [3], underscoring the significance of the \(C_{4}\)-free matching problem. Second, for the graphs in which the cycles of length four are vertex-disjoint, Nam [34] gave a polynomial-time algorithm for the \(C_{4}\)-free \(2\)-matching problem. Finally, for bipartite graphs, several of polynomial-time algorithms are devised; see Section 1.3 for details. Let \(t\) be an arbitrary positive integer. The \(C_{k}\)-free \(2\)-matching problem is generalized to Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem for general \(t\) in the following way. Let \(K_{t}\) denote the complete graph with \(t\) vertices, and \(K_{t,t}\) the complete bipartite graph in which each color class has \(t\) vertices. Here, note that a cycle of length three is isomorphic to \(K_{3}\). Thus, the \(C_{3}\)-free \(2\)-matching problem can be naturally generalized to Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem, where \(\mathcal{K}\) is the set of all subgraphs that are isomorphic to \(K_{t+1}\). We refer to this special case of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem as the \(K_{t+1}\)_-free \(t\)-matching problem_. Similarly, by noting that a cycle of length four is isomorphic to \(K_{2,2}\), we can generalize the \(C_{4}\)-free \(2\)-matching problem to the \(K_{t,t}\)_-free \(t\)-matching problem_. This is another special class of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem, where \(\mathcal{K}\) is the set of all subgraphs isomorphic to \(K_{t,t}\). The polynomial solvability of these two problems are open. For certain special cases of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem, however, several polynomial-time algorithms are presented, corresponding to those for the \(C_{\leq k}\)-free and \(C_{k}\)-free \(2\)-matching problems. First, Berczi and Vegh [4] gave a polynomial-time algorithm for Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem for the case where \(\mathcal{K}\) consists of \(K_{t+1}\)'s and \(K_{t,t}\)'s and the input graph \(G\) has maximum degree at most \(t+1\). This extends that for the \(C_{\leq 4}\)-free \(2\)-matching problem in subcubic graphs. Second, Kobayashi and Yin [28] presented a polynomial-time algorithm for Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem for the case where \(\mathcal{K}\) consists of all the subgraphs isomorphic to a fixed \(t\)-regular complete partite graph and the input graph \(G\) has maximum degree at most \(t+1\). Kobayashi and Yin [28] also proved that this assumption on \(\mathcal{K}\) is inevitable. 
**Proposition 1.4** (follows from Kobayashi and Yin [28]).: _If \(H\) is a connected \(t\)-regular graph which is not complete partite and \(\mathcal{K}\) is the set of all subgraphs isomorphic to \(H\), then Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem is NP-hard even when the maximum degree of \(G\) is at most \(t+1\) and the subgraphs in \(\mathcal{K}\) are pairwise edge-disjoint._ As mentioned above, this NP-hardness explains that the assumption on the complete partiteness of the forbidden subgraphs is also unavoidable in Theorems 1.2 and 1.3. Finally, for the \(K_{t,t}\)-free \(t\)-matching problem in bipartite graphs, some polynomial-time algorithms have been designed, extending those for the \(C_{4}\)-free \(2\)-matching problem in bipartite graphs (see Section 1.3). ### Our Contribution We have seen that the polynomial solvability of the \(K_{t+1}\)-free \(t\)-matching and \(K_{t,t}\)-free \(t\)-matching problems is unknown. Likewise, the polynomial solvability of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem in general graphs, where \(\mathcal{K}\) is an arbitrary family of \(t\)-regular complete partite subgraphs, is unknown. The contribution of this paper is to present polynomial-time algorithms for several special cases of this problem. #### 1.2.1 Overview of Our Results Recall our first result, Theorem 1.2, solving the case where \(\mathcal{K}\) is an edge-disjoint family of \(t\)-regular complete partite subgraphs of \(G\). By setting \(t=2\) in Theorem 1.2, we immediately obtain the following corollary. **Corollary 1.5**.: Maximum \(\mathcal{K}\)-Free \(2\)-Matching Problem _can be solved in polynomial time if all the subgraphs in \(\mathcal{K}\) are isomorphic to \(C_{3}\) or \(C_{4}\), and are pairwise edge-disjoint._ Corollary 1.5 extends the result by Nam [34], solving the \(C_{4}\)-free \(2\)-matching problem where the cycles of length four are vertex-disjoint. Namely, Corollary 1.5 extends vertex-disjointness to edge-disjointness, and allows \(\mathcal{K}\) to include not only \(C_{4}\) but also \(C_{3}\). Next, recall our second result, Theorem 1.3, which solves the case where the maximum degree of the input graph is at most \(2t-1\). Theorem 1.3 expands upon the works of Berczi and Vegh [4] and Kobayashi and Yin [28], which focused on graphs with maximum degree at most \(t+1\). That is, Theorem 1.3 improves the degree bound from \(t+1\) to \(2t-1\), where \(2t-1>t+1\) if \(t>2\). We further present some extensions of Theorems 1.2 and 1.3. Below is one extension of Theorem 1.2, which will be used in our proof of Theorem 1.3. The pairwise edge-disjointness of the subgraphs in \(\mathcal{K}\) is relaxed to the following condition: * The subgraph family \(\mathcal{K}\) is partitioned into subfamilies \(\mathcal{K}_{1},\ldots,\mathcal{K}_{\ell}\) such that * for each subfamily \(\mathcal{K}_{i}\) (\(i=1,\ldots,\ell\)), the number \(\big{|}\bigcup_{K\in\mathcal{K}_{i}}V(K)\big{|}\) of its vertices is bounded by a fixed constant, and * for distinct subfamilies \(\mathcal{K}_{i}\) and \(\mathcal{K}_{j}\) (\(i,j\in\{1,\ldots,\ell\}\)) and for each pair of subgraphs \(K\in\mathcal{K}_{i}\) and \(K^{\prime}\in\mathcal{K}_{j}\), it holds that \(K\) and \(K^{\prime}\) are edge-disjoint. Here "RD" stands for "Relaxed Disjointness."
**Theorem 1.6**.: _For a fixed positive integer \(t\), Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem can be solved in polynomial time if \(\mathcal{K}\) is a family of \(t\)-regular complete partite subgraphs of \(G\) satisfying the condition (RD)._ Other results include extensions from \(t\)-matchings to \(b\)-matchings (Theorems 3.1, 3.4, 3.5, 4.1, and 4.3). For a vector \(b\in\mathbb{Z}^{V}\), a \(b\)_-matching_ is an edge subset \(M\subseteq E\) such that each vertex \(v\in V\) is incident to at most \(b(v)\) edges in \(M\). Namely, we can deal with inhomogeneous degree constraints. Moreover, we provide an extension from forbidding subgraphs to forbidding degree sequences (Theorem 5.1). In particular, the latter extension offers some new results on restricted \(2\)-matchings (Examples 5.2 and 5.3). #### 1.2.2 Technical Ingredients Technically, our algorithms are established by exploiting two important previous results, one is on the discrete structure of \(\mathcal{K}\)-free \(t\)-matchings and the other is on the constraint satisfaction problem (CSP). This is in contrast to the fact that the previous algorithms [4, 28, 34] are based on graph-theoretical methods. The first result is outlined as follows. Let \(b\in\mathbb{Z}^{V}\) with \(b(v)\leq t\) for each \(v\in V\) and let \(J\subseteq\mathbb{Z}^{V}\) be the set of the degree sequences of all \(\mathcal{K}\)-free \(b\)-matching in \(G\). Kobayashi, Szabo, and Takazawa [27] proved that \(J\) forms a _constant-parity jump system_ if all the subgraphs in \(\mathcal{K}\) are \(t\)-regular complete partite (see Theorem 2.3 below). Here a constant-parity jump system is a subset of \(\mathbb{Z}^{V}\), which offer a discrete structure generalizing matroids; see Section 2.2 for the definition. The second is on the polynomial-time solvability of a class of the CSP. The _Boolean edge-CSP_ is the problem of finding an edge subset \(M\subseteq E\) of a given graph \(G=(V,E)\) such that the set of edges in \(M\) incident to each vertex \(v\in V\) satisfies a certain constraint associated with \(v\); see Section 2.3 for formal description. While the Boolean edge-CSP is NP-hard in general, Kazda, Kolmogorov, and Rolinek [18] showed that this problem can be solved in polynomial time if the constraint associated with \(v\) is described by a constant-parity jump system for each \(v\in V\) (see Theorem 2.6 below). The most distinctive part of this paper is a reduction of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem to the Boolean edge-CSP. It appears in the proof of Theorem 3.1 below, which deals with the problem of finding a \(\mathcal{K}\)-free \(b\)-factor, i.e., a \(t\)-matching with specified degree sequence \(b\in\mathbb{Z}^{V}\). Here, on the basis of the relationship between \(\mathcal{K}\)-free \(b\)-matchings and jump systems (Theorem 2.3), we construct a polynomial reduction of the problem of finding a \(\mathcal{K}\)-free \(b\)-factor to the Boolean edge-CSP with constant-parity jump system constraints. Theorem 1.2 is then derived from Theorem 3.1. In order to prove Theorem 1.2, we iteratively solve subproblems of finding a \(\mathcal{K}\)-free \(b\)-factor. We remark that constant-parity jump systems play a key role here, as well as the reduction mentioned above. The fact that \(J\) is a constant-parity jump system guarantees that the number of the iterations is polynomially bounded by the input size (see Lemma 2.5 below). Theorem 1.6 is proved in the same manner. 
We then derive Theorem 1.3 from Theorem 1.6 by constructing a subfamily \(\mathcal{K}^{\prime}\subseteq\mathcal{K}\) such that \(\mathcal{K}^{\prime}\) satisfies (RD), a \(\mathcal{K}^{\prime}\)-free \(t\)-matching exists in \(G\) if and only if a \(\mathcal{K}\)-free \(t\)-matching exists in \(G\), and we can construct a \(\mathcal{K}\)-free \(t\)-matching from a \(\mathcal{K}^{\prime}\)-free \(t\)-matching in polynomial time. ### Further Related Work The \(C_{4}\)-free \(2\)-matching problem has been actively studied in the setting when the input graph is restricted to be bipartite. Hartvigsen [15], Kiraly [19, 20], and Frank [13] gave min-max theorems for the \(C_{4}\)-free \(2\)-matching problem in bipartite graphs, and more generally for the \(K_{t,t}\)-free \(t\)-matching problem in bipartite graphs, which implies the polynomial solvability of the problems. To the best of our knowledge, Frank [13] is the first work to generalize the restricted \(2\)-matching in this context to \(t\)-matchings. For the \(C_{4}\)-free \(2\)-matching problem in bipartite graphs, Hartvigsen [15] and Pap [37] designed combinatorial polynomial-time algorithms, Babenko [2] improved the running time, and Takazawa [45] showed a decomposition theorem. Takazawa [46, 47] extended these results to more generalized classes of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem. The weighted variant of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem has also attracted much attention. In the weighted problem, an input consists of a graph, a family \(\mathcal{K}\) of subgraphs, and a non-negative weight function on the edge set, and the objective is to find a \(\mathcal{K}\)-free \(t\)-matching with maximum total weight. It is shown by Kiraly (see [13]) and by Berczi and Kobayashi [3] that the weighted \(C_{\leq 4}\)-free \(2\)-matching problem is NP-hard even if the input graph is restricted to be cubic, bipartite, and planar. For the weighted \(C_{4}\)-free \(2\)-matching problem in bipartite graphs, and more generally for the weighted \(K_{t,t}\)-free \(t\)-matching problem in bipartite graphs, under the assumption that the weight function satisfies a certain property, Makai [31] gave a polyhedral description, Takazawa [44] designed a combinatorial polynomial-time algorithm, and Paluch and Wasylkiewicz [35] presented a faster and simpler algorithm. It is still open whether the weighted \(C_{3}\)-free \(2\)-matching problem can be solved in polynomial time. For the weighted \(C_{3}\)-free \(2\)-matching problem in subcubic graphs, Hartvigsen and Li [17] gave a polyhedral description and a polynomial-time algorithm, and faster polynomial-time algorithms were presented by Kobayashi [21] and by Paluch and Wasylkiewicz [36]. Recently, Kobayashi [23] designed a polynomial-time algorithm for the weighted \(C_{3}\)-free \(2\)-matching problem in which the cycles of length three are edge-disjoint. The relationship between \(\mathcal{K}\)-free \(t\)-matchings and jump systems has been studied in [3, 9, 27], some of which will be used in this paper. More generally, the relationship between weighted \(\mathcal{K}\)-free \(t\)-matchings and discrete convexity has been studied in [3, 21, 22, 27]. ### Organization The rest of the paper is organized as follows. In Section 2, we present the basic definitions and results in a formal manner. In Section 3, we solve the problem under the assumption that the subgraphs in \(\mathcal{K}\) are pairwise edge-disjoint, and then under the relaxed condition (RD). 
Section 4 is devoted to a solution to the graphs with maximum degree at most \(2t-1\). Finally, in Section 5, we deal with a more generalized problem where the forbidden structure is described in terms of degree sequences. ## 2 Preliminaries Let \(\mathbb{Z}_{+}\) denote the set of nonnegative integers, and \(\mathbf{0}\) (resp. \(\mathbf{1}\)) denote the all-zero (resp. all-one) vector of appropriate dimension. For a finite set \(V\), its subset \(U\subseteq V\), and a vector \(x\in\mathbb{Z}^{V}\), let \(x(U)=\sum_{v\in U}x(u)\). ### Basic Definitions on Graphs Throughout this paper, we assume that graphs have no self-loops to simplify the description, while they may have parallel edges. Let \(G=(V,E)\) be a graph. For a subgraph \(H\) of \(G\), let \(V(H)\) and \(E(H)\) denote the vertex set and edge set of \(H\), respectively. For a vertex set \(X\subseteq V\), let \(G[X]\) denote the subgraph induced by \(X\). Let \(F\subseteq E\) be an edge subset and let \(v\in V\) be a vertex. The set of edges in \(F\) incident to \(v\) is denoted by \(\delta_{F}(v)\). If \(F=E(H)\) for some subgraph \(H\) of \(G\), then \(\delta_{E(H)}(v)\) is often abbreviated as \(\delta_{H}(v)\). When no confusion arises, \(\delta_{G}(v)\) is further abbreviated as \(\delta(v)\). The number of edges incident to \(v\), i.e., \(|\delta(v)|\), is referred to as the _degree_ of \(v\). The _degree sequence_\(d_{F}\) of \(F\subseteq E\) is a vector in \(\mathbb{Z}_{+}^{V}\) defined by \(d_{F}(u)=|\delta_{F}(u)|\) for each \(u\in V\). For a positive integer \(t\), a graph is called \(t\)_-regular_ if every vertex has degree \(t\). A graph \(G=(V,E)\) is said to be a _complete partite graph_ if there exists a partition \(\{V_{1},\ldots,V_{p}\}\) of \(V\) such that \(E=\{uv\colon u\in V_{i},v\in V_{j},i\neq j\}\) for some positive integer \(p\). In other words, a complete partite graph is the complement of the disjoint union of complete graphs. Each \(V_{i}\) is called a _color class_ of \(G\). As defined in Section 1, for a positive integer \(t\), an edge set \(M\subseteq E\) is called a \(t\)_-matching_ if \(d_{M}(v)\leq t\) for every \(v\in V\). In particular, if \(d_{M}(v)=t\) holds for every \(v\in V\), then \(M\) is called a \(t\)_-factor_. For a vector \(b\in\mathbb{Z}_{+}^{V}\), an edge set \(M\subseteq E\) is called a \(b\)_-matching_ (resp. \(b\)_-factor_) if \(d_{M}(v)\leq b(v)\) (resp. \(d_{M}(v)=b(v)\)) for every \(v\in V\). In what follows, instead of Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem, we deal with the following slightly generalized problems. \(\mathcal{K}\)-Free \(b\)-Factor ProblemGiven a graph \(G=(V,E)\), \(b\in\mathbb{Z}_{+}^{V}\), and a family \(\mathcal{K}\) of subgraphs of \(G\), find a \(\mathcal{K}\)-free \(b\)-factor (if one exists). Maximum \(\mathcal{K}\)-Free \(b\)-Matching ProblemGiven a graph \(G=(V,E)\), \(b\in\mathbb{Z}_{+}^{V}\), and a family \(\mathcal{K}\) of subgraphs of \(G\), find a \(\mathcal{K}\)-free \(b\)-matching with maximum cardinality. Note that Maximum \(\mathcal{K}\)-Free \(t\)-Matching Problem is a special case of Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem, where \(b(v)=t\) for each \(v\in V\). **Remark 2.1**.: In this paper, we only consider the case where \(\mathcal{K}\) consists of subgraphs of size bounded by a fixed constant (e.g., \(t\)-regular complete partite subgraphs for a fixed integer \(t\)). 
In such a case, since \(|\mathcal{K}|\) is polynomially bounded by the size of the input graph, the representation of \(\mathcal{K}\) does not affect the polynomial solvability of the problem. Therefore, in what follows, we suppose that \(\mathcal{K}\) is explicitly given as the list of its elements. \(\blacksquare\) **Remark 2.2**.: Let \(G=(V,E)\) be a graph, \(b\in\mathbb{Z}_{+}^{V}\) with \(b(v)\leq t\) for each \(v\in V\), and \(K\) a connected \(t\)-regular subgraph of \(G\). We can easily observe that, if a \(b\)-matching \(M\subseteq E\) of \(G\) contains \(K\), then \(K\) forms a connected component of the induced subgraph \((V,M)\) of \(G\) by \(M\). \(\blacksquare\) ### Jump System Let \(V\) be a finite set. For a subset \(U\subseteq V\), let \(\chi_{U}\in\{0,1\}^{V}\) denote the characteristic vector of \(U\), that is, \(\chi_{U}(v)=1\) for \(v\in U\) and \(\chi_{U}(v)=0\) for \(v\in V\setminus U\). If \(U=\{u\}\) for an element \(u\in V\), then \(\chi_{\{u\}}\) is simply denoted by \(\chi_{u}\). For two vectors \(x,y\in\mathbb{Z}^{V}\), a vector \(s\in\mathbb{Z}^{V}\) is called an _\((x,y)\)-increment_ if \(s=\chi_{u}\) and \(x(u)<y(u)\) for some \(u\in V\), or \(s=-\chi_{u}\) and \(x(u)>y(u)\) for some \(u\in V\). A nonempty set \(J\subseteq\mathbb{Z}^{V}\) is said to be a _jump system_ if it satisfies the following exchange axiom (see [5]): For any \(x,y\in J\) and for any \((x,y)\)-increment \(s\) with \(x+s\not\in J\), there exists an \((x+s,y)\)-increment \(t\) such that \(x+s+t\in J\). In particular, a jump system \(J\subseteq\mathbb{Z}^{V}\) is called a _constant-parity jump system_ if \(x(V)-y(V)\) is even for any \(x,y\in J\). Constant-parity jump systems include several discrete structures as special classes. First, for a matroid with a basis family \(\mathcal{B}\), it follows from the exchange property of matroid bases that \(\{\chi_{B}\colon B\in\mathcal{B}\}\) is a constant-parity jump system. Second, the characteristic vectors of all the feasible sets of an even delta-matroid form a constant-parity jump system (see [5]). Finally, for a graph \(G=(V,E)\), the set \(\{d_{F}\colon F\subseteq E\}\) of the degree sequences of all the edge subsets is also a constant-parity jump system. See [30, 5, 32] for details on jump systems. The following theorem shows a relationship between \(\mathcal{K}\)-free \(b\)-matchings and jump systems. **Theorem 2.3** (follows from [27, Proposition 3.1]).: _Let \(G=(V,E)\) be a graph, let \(t\) be a positive integer, and let \(b\in\mathbb{Z}_{+}^{V}\) be a vector such that \(b(v)\leq t\) for each \(v\in V\). For a family \(\mathcal{K}\) of complete partite \(t\)-regular subgraphs in \(G\), the degree sequences of all \(\mathcal{K}\)-free \(b\)-matchings in \(G\) form a constant-parity jump system._ **Remark 2.4**.: Theorem 2.3 is a modest extension of the original statement [27, Proposition 3.1], in which \(b(v)=t\) for each \(v\in V\) and \(\mathcal{K}\) is the set of all subgraphs in \(G\) that are isomorphic to a graph in a given list of complete partite \(t\)-regular subgraphs. The same proof, however, works for Theorem 2.3 as well. \(\blacksquare\) We here describe a few basic operations on jump systems, which will be used in the subsequent sections. Intersection with a box.A _box_ is a set of the form \(\{x\in\mathbb{R}^{V}\colon\underline{b}\leq x\leq\overline{b}\}\) for some vectors \(\underline{b}\in(\mathbb{R}\cup\{-\infty\})^{V}\) and \(\overline{b}\in(\mathbb{R}\cup\{+\infty\})^{V}\). 
If \(J\subseteq\mathbb{Z}^{V}\) is a constant-parity jump system, then the intersection \[J\cap\{x\in\mathbb{R}^{V}\colon\underline{b}\leq x\leq\overline{b}\}\] of \(J\) and a box is also a constant-parity jump system unless it is empty. Minkowski sum.For two sets \(J_{1},J_{2}\subseteq\mathbb{Z}^{V}\), their _Minkowski sum_\(J_{1}+J_{2}\) is a subset of \(\mathbb{Z}^{V}\) defined by \[J_{1}+J_{2}=\{x+y\colon x\in J_{1},\ y\in J_{2}\}.\] It was shown by Bouchet and Cunningham [5] that the Minkowski sum of two constant-parity jump systems is also a constant-parity jump system. Splitting.Let \(\{U_{v}\colon v\in V\}\) be a family of nonempty disjoint finite sets indexed by \(v\in V\), and let \(U=\bigcup_{v\in V}U_{v}\). For a set \(J\subseteq\mathbb{Z}^{V}\), we define the _splitting_ of \(J\) to \(U\) as \[J^{\prime}=\{x^{\prime}\in\mathbb{Z}^{U}\colon x^{\prime}(U_{v})=x(v)\text{ for each }v\in V\text{ for some }x\in J\}.\] The splitting of a constant-parity jump system is also a constant-parity jump system; see [25, 33]. If the degree sequences of all the \(\mathcal{K}\)-free \(b\)-matchings form a constant-parity jump system, then Maximum\(\mathcal{K}\)-Free\(b\)-Matching Problem reduces to \(\mathcal{K}\)-Free\(b\)-Factor Problem which is formally stated as follows. **Lemma 2.5**.: _Let \(G=(V,E)\) be a graph, \(\mathcal{K}\) be a family of subgraphs of \(G\), and let \(b\in\mathbb{Z}^{V}_{+}\). If the degree sequences of all the \(\mathcal{K}\)-free \(b\)-matchings in \(G\) form a constant-parity jump system, then a \(\mathcal{K}\)-free \(b\)-matching in \(G\) with maximum cardinality can be computed by testing the existence of a \(\mathcal{K}\)-free \(b^{\prime}\)-factor in \(G\) for polynomially many vectors \(b^{\prime}\in\mathbb{Z}^{V}_{+}\) with \(b^{\prime}\leq b\)._ Proof.: Denote by \(J\subseteq\mathbb{Z}^{V}\) the constant-parity jump system consisting of the degree sequences of all the \(\mathcal{K}\)-free \(b\)-matchings in \(G\). Given an initial vector in \(J\), we can maximize a given linear function over \(J\) by using the membership oracle of \(J\) at most polynomially many times [1, 5, 42]. Here, the _membership oracle of \(J\)_ is an oracle that answers whether a given vector is in \(J\) or not. Since an empty edge set is a \(\mathcal{K}\)-free \(b\)-matching, it holds that \(\mathbf{0}\in J\). That is, we can take \(\mathbf{0}\) as the initial vector in \(J\). Now the lemma follows because accessing the membership oracle of \(J\) corresponds to testing the existence of a \(\mathcal{K}\)-free \(b^{\prime}\)-factor in \(G\). ### Boolean Edge-CSP The _constraint satisfaction problem_ (_CSP_) is a fundamental topic in theoretical computer science and has been intensively studied in various fields (see, e.g., [38]). In this paper, we focus on the _Boolean edge-CSP_, which is formulated as follows. An instance of the Boolean CSP is a pair \((E,\mathcal{C})\), where \(E\) is the set of Boolean variables and \(\mathcal{C}\) is that of constraints. A constraint \(C\in\mathcal{C}\) is a pair \((\sigma_{C},R_{C})\), where the _scope_\(\sigma_{C}\subseteq E\) is the set of the variables appearing in \(C\) and the _relation_\(R_{C}\) is a subset of \(\{0,1\}^{\sigma_{C}}\). In general a scope can be a multi-subset of \(E\), but for notational simplicity we define a scope as a subset of \(E\). The objective of the Boolean CSP is to find a mapping \(f\colon E\to\{0,1\}\) such that \((f(e))_{e\in\sigma_{C}}\in R_{C}\) for each constraint \(C\in\mathcal{C}\). 
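To make this formulation concrete, the following brute-force sketch (illustrative only, with exponential running time and names of our own choosing) enumerates all 0/1 assignments and checks every constraint against its relation.

```python
from itertools import product

def solve_boolean_csp(variables, constraints):
    """Brute-force the Boolean CSP: each constraint is a pair (scope, relation),
    where scope is a tuple of variables and relation is a set of 0/1 tuples."""
    for values in product((0, 1), repeat=len(variables)):
        f = dict(zip(variables, values))
        if all(tuple(f[v] for v in scope) in relation
               for scope, relation in constraints):
            return f
    return None

# Toy instance: x != y and y != z.
neq = {(0, 1), (1, 0)}
print(solve_boolean_csp(["x", "y", "z"],
                        [(("x", "y"), neq), (("y", "z"), neq)]))
# -> {'x': 0, 'y': 1, 'z': 0}
```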
A central topic of the CSP is the classification of the computational complexity according to the relations that can appear in the constraints. Let \(\Gamma\) denote a set of relations, which is referred to as a _language_. The problem of finding a solution to a Boolean CSP instance in which every relation appearing in the constraints belongs to \(\Gamma\) is denoted by Boolean CSP(\(\Gamma\)). Schaefer [39] established a dichotomy theorem stating that Boolean CSP(\(\Gamma\)) is in class P if the language \(\Gamma\) satisfies one of certain six conditions, and is NP-hard otherwise. Bulatov [6] and Zhuk [48] independently established its generalization, i.e., a dichotomy theorem for the CSP over any finite domain, which affirmatively settled a long-standing open question posed by Feder and Vardi [12]. By imposing a structural restriction on the set \(\mathcal{C}\) of constraints, we may obtain another class of the Boolean CSP which can be solved in polynomial time. An example is the _Boolean edge-CSP_, a class of the Boolean CSP in which each variable appears in exactly two constraints. An instance \((E,\mathcal{C})\) of the Boolean edge-CSP over the language \(\Gamma\), denoted by Boolean Edge-CSP\((\Gamma)\), is interpreted in terms of a graph \(G=(V,E)\) in the following way. The variable set \(E\) coincides with the edge set of the graph \(G\). A mapping \(f\colon E\to\{0,1\}\) corresponds to a subset \(M\subseteq E\) of edges determined by \(\chi_{M}(e)=f(e)\) for each \(e\in E\). One constraint \(C=(\sigma_{C},R_{C})\in\mathcal{C}\) is described by one vertex \(v\in V\) and the set \(\delta(v)\) of its incident edges: the scope \(\sigma_{C}\subseteq E\) is the edge set \(\delta(v)\subseteq E\); and the relation \(R_{C}\subseteq\{0,1\}^{\delta(v)}\) is described by an edge subset family \(\mathcal{F}_{v}\subseteq 2^{\delta(v)}\) by \(R_{C}=\{\chi_{F}\colon F\in\mathcal{F}_{v}\}\). Observe that each variable (edge) appears exactly in two constraints (vertices). Boolean Edge-CSP\((\Gamma)\)Given a graph \(G=(V,E)\) and an edge subset family \(\mathcal{F}_{v}\subseteq 2^{\delta(v)}\) whose corresponding relation \(\{\chi_{F}\colon F\in\mathcal{F}_{v}\}\) belongs to \(\Gamma\) for each vertex \(v\in V\), find an edge set \(M\subseteq E\) such that \(\delta_{M}(v)\in\mathcal{F}_{v}\) for each \(v\in V\) (if one exists). We remark that the relation \(\mathcal{F}_{v}\subseteq 2^{\delta(v)}\) (\(v\in V\)) is not given by the membership oracles but by the list of the edge subsets, and hence the input size is \(O(|V|+|E|+\sum_{v\in V}|\mathcal{F}_{v}|)\). For a language \(\Gamma\), if Boolean CSP\((\Gamma)\) belongs to class P, then so is Boolean Edge-CSP\((\Gamma)\). We thus focus on Boolean languages \(\Gamma\) such that Boolean CSP\((\Gamma)\) is NP-hard. Feder [11] showed that if \(\Gamma\) contains the unary relations \(\{(0)\}\) and \(\{(1)\}\) and a relation that is not a delta-matroid, then Boolean Edge-CSP\((\Gamma)\) is NP-hard. On the other hand, Kazda, Kolmogorov, and Rolinek [18] proved that Boolean Edge-CSP\((\Gamma)\) belongs to class P if every relation is an even delta-matroid. Since an even delta-matroid can be identified with a constant-parity jump system, with each coordinate in \(\{0,1\}\), in what follows in this paper, we refer to an even delta-matroid as a constant-parity jump system for the unity of terminology. Let \(\Gamma_{\text{cp-jump}}\) denote the set of all constant-parity jump systems over the Boolean domain. 
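For intuition, whether a Boolean relation belongs to \(\Gamma_{\text{cp-jump}}\) can be checked directly from the definition in Section 2.2: nonemptiness, constant parity of the coordinate sums, and the two-step exchange axiom. The naive check sketched below is purely illustrative and is not part of the algorithm of Theorem 2.6; all names are ours.

```python
def is_constant_parity_jump_system(relation):
    """Check whether a set of equal-length 0/1 tuples is a constant-parity
    jump system (equivalently, an even delta-matroid)."""
    rel = {tuple(x) for x in relation}
    if not rel:
        return False
    n = len(next(iter(rel)))
    if len({sum(x) % 2 for x in rel}) != 1:          # constant parity
        return False

    def increments(x, y):
        # coordinates where a unit step from x towards y is possible
        return [i for i in range(n) if x[i] != y[i]]

    def step(x, i):
        return tuple(1 - x[j] if j == i else x[j] for j in range(n))

    for x in rel:
        for y in rel:
            for i in increments(x, y):
                xs = step(x, i)
                if xs in rel:
                    continue
                # need some (xs, y)-increment leading back into the set
                if not any(step(xs, j) in rel for j in increments(xs, y)):
                    return False
    return True

print(is_constant_parity_jump_system({(0, 0), (1, 1)}))  # True: "both edges or neither"
print(is_constant_parity_jump_system({(0, 0), (0, 1)}))  # False: parity is not constant
```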
**Theorem 2.6** (Kazda, Kolmogorov, and Rolinek [18]).: Boolean Edge-CSP\((\Gamma_{\text{cp-jump}})\) _can be solved in polynomial time._ ## 3 Edge-Disjoint Forbidden Subgraphs In this section, we consider the case when \(\mathcal{K}\) is an edge-disjoint family of \(t\)-regular complete partite subgraphs. We first give a polynomial-time algorithm for \(\mathcal{K}\)-Free \(b\)-Factor Problem by reducing the problem to Boolean Edge-CSP\((\Gamma_{\text{cp-jump}})\) in Theorem 3.1. Then, by using this algorithm as a subroutine, we present a polynomial-time algorithm for Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem (Theorem 3.4), which implies Theorem 1.2. Finally, we prove the polynomial solvability under the condition (RD) in Theorem 3.5, which will be used in the next section. **Theorem 3.1**.: _For a fixed positive integer \(t\), \(\mathcal{K}\)-Free \(b\)-Factor Problem can be solved in polynomial time if \(b(v)\leq t\) for each \(v\in V\) and all the subgraphs in \(\mathcal{K}\) are \(t\)-regular complete partite and pairwise edge-disjoint._ Proof.: We prove the theorem by constructing a polynomial reduction to Boolean Edge-CSP\((\Gamma_{\text{cp-jump}})\). Let \((G,b,\mathcal{K})\) be an instance of \(\mathcal{K}\)-Free \(b\)-Factor Problem, where \(G=(V,E)\), \(b\in\mathbb{Z}_{+}^{V}\), and \(\mathcal{K}\) is a family of subgraphs in \(G\). Recall that an input of the Boolean edge-CSP consists of a graph and a constraint on each vertex. Our input graph \(G^{\prime}=(V^{\prime},E^{\prime})\) of the Boolean edge-CSP is constructed as follows (see also Figure 1): * Introduce a new vertex \(r_{K}\) for each \(K\in\mathcal{K}\), and define the vertex set \(V^{\prime}\) by \[V^{\prime}=V\cup\{r_{K}\colon K\in\mathcal{K}\}.\] * For each \(K\in\mathcal{K}\) and \(v\in V(\mathcal{K})\), introduce new \(t\) parallel edges between \(r_{K}\) and \(v\), and let \(E^{\prime}_{v,K}\) denote the set of these new \(t\) parallel edges. Define the edge set \(E^{\prime}\) by \[E^{\prime}=\left(E\cup\bigcup_{K\in\mathcal{K}}\bigcup_{v\in V(K)}E^{\prime}_ {v,K}\right)\setminus\bigcup_{K\in\mathcal{K}}E(K),\] Our input constraint \(\mathcal{F}_{v}\subseteq 2^{\delta_{G^{\prime}}(v)}\) (\(v\in V^{\prime}\)) is constructed as follows: * For each subgraph \(K\in\mathcal{K}\), compute a set \(D_{K}\subseteq\mathbb{Z}_{+}^{V(K)}\) of the degree sequences in the \(K\)-free \(b\)-matchings in \(K\), i.e., \[D_{K} =\left\{d_{F}\in\mathbb{Z}_{+}^{V(K)}\colon F\text{ is a $K$-free $b$-matching in $K$}\right\}\] \[=\left\{d_{F}\in\mathbb{Z}_{+}^{V(K)}\colon F\text{ is a $b$-matching in $K$}\right\}\setminus\{(t,\dots,t)\}.\] Then, for each vertex \(v\in V^{\prime}\), define \(\mathcal{F}_{v}\subseteq 2^{\delta_{G^{\prime}}(v)}\) by \[\mathcal{F}_{v}=\begin{cases}\{F^{\prime}\subseteq\delta_{G^{\prime}}(v)\colon |F^{\prime}|=b(v)\}&\text{if $v\in V$,}\\ \{F^{\prime}\subseteq\delta_{G^{\prime}}(v)\colon\left.\left(d_{F^{\prime}}(u )\right)_{u\in V(K)}\in D_{K}\right\}&\text{if $v=r_{K}$ for some $K\in\mathcal{K}$}.\end{cases}\] (1) Note that each \(D_{K}\) and each \(\mathcal{F}_{v}\) can be computed efficiently in a brute force way: \(|V(K)|=O(t)\) and hence \(D_{K}\) has \(t^{O(t)}\) elements for the fixed integer \(t\); and \(\mathcal{F}_{v}\) has a polynomial size. Now we have constructed an instance of the Boolean edge-CSP consisting of \(G^{\prime}=(V^{\prime},E^{\prime})\) and \((\mathcal{F}_{v})_{v\in V^{\prime}}\). 
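The construction can be summarized by the following sketch, which builds the modified edge set of \(G^{\prime}\) and a constraint for every vertex from \(G\), \(b\), and \(\mathcal{K}\). The data representation (tuples for edges, degree patterns standing in for \(\mathcal{F}_{r_{K}}\)) and all names are ours; actually solving the resulting instance would require an implementation of the algorithm behind Theorem 2.6, which is not reproduced here.

```python
from itertools import combinations

def degree_pattern(edge_subset, ordered_vertices):
    """Degree of each vertex of `ordered_vertices` in the given edge subset."""
    deg = {v: 0 for v in ordered_vertices}
    for u, w in edge_subset:
        deg[u] += 1
        deg[w] += 1
    return tuple(deg[v] for v in ordered_vertices)

def build_edge_csp_instance(V, E, b, forbidden, t):
    """Build the reduction of Theorem 3.1: the edge list of G' and one
    constraint per vertex of G'.  `forbidden` lists the subgraphs K as pairs
    (V_K, E_K), where E_K is a subset of E and each K is t-regular."""
    removed = {e for _, E_K in forbidden for e in E_K}
    edges = [e for e in E if e not in removed]          # E minus the union of E(K)
    constraints = {v: ("exactly", b[v]) for v in V}     # |delta_{M'}(v)| = b(v)
    for idx, (V_K, E_K) in enumerate(forbidden):
        r_K = ("r", idx)                                # new vertex r_K
        # t labelled parallel edges between r_K and every vertex of K
        edges += [(r_K, v, copy) for v in V_K for copy in range(t)]
        # D_K: degree patterns of b-matchings of K, except the all-t pattern
        D_K = set()
        for r in range(len(E_K) + 1):
            for F in combinations(E_K, r):
                pattern = degree_pattern(F, V_K)
                if all(pattern[i] <= b[v] for i, v in enumerate(V_K)):
                    D_K.add(pattern)
        D_K.discard(tuple(t for _ in V_K))
        constraints[r_K] = ("patterns", V_K, D_K)       # encodes F_{r_K} via D_K
    return edges, constraints
```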
We first show the following claim, which implies that this instance actually belongs to Boolean Edge-CSP(\(\Gamma_{\text{cp-jump}}\)). **Claim 3.2**.: _For each \(v\in V^{\prime}\), the set \(\{\chi_{F^{\prime}}\in\mathbb{Z}^{\delta_{G^{\prime}}(v)}\colon F^{\prime}\in \mathcal{F}_{v}\}\) of the characteristic vectors of the edge sets in \(\mathcal{F}_{v}\) is a constant-parity jump system._ Figure 1: The graph on the left shows the edge set \(E(K)\) of the \(t\)-regular complete partite graph \(K\) by the thick edges, while the thin edges belong to \(E\setminus E(K)\). In this example, \(K\) is a \(3\)-regular complete bipartite graph. The thick edges in the graph on the right depict the newly added three parallel edges between \(r_{K}\) and each vertex \(v\in V(K)\). Proof of Claim 3.2.: If \(v\in V\), then the claim follows from the fact that \(\mathcal{F}_{v}\) is the basis family of a uniform matroid. Suppose that \(v=r_{K}\) for \(K\in\mathcal{K}\). By applying Theorem 2.3 with \(G=K\) and \(\mathcal{K}=\{K\}\), we obtain that \(D_{K}\) is a constant-parity jump system. Now, \(\{\chi_{F^{\prime}}\in\mathbb{Z}^{\delta_{G^{\prime}}(v)}\colon F^{\prime}\in \mathcal{F}_{v}\}\) is obtained from splitting \(D_{K}\) to \(\bigcup_{u\in V(K)}E^{\prime}_{u,K}\) and then taking the intersection with a box \(\{x\in\mathbb{R}^{\delta_{G^{\prime}}(v)}\colon\mathbf{0}\leq x\leq\mathbf{1}\}\), and thus is a constant-parity jump system; see Section 2.2. It follows from Claim 3.2 and Theorem 2.6 that the instance \((G^{\prime},(\mathcal{F}_{v})_{v\in V^{\prime}})\) belongs to Boolean Edge-CSP\((\Gamma_{\text{cp-jump}})\) and can be solved in polynomial time, respectively. Namely, we can find an edge set \(M^{\prime}\subseteq E^{\prime}\) such that \[\delta_{M^{\prime}}(v)\in\mathcal{F}_{v}\text{ for each }v\in V^{\prime} \tag{2}\] or conclude that such \(M^{\prime}\) does not exist in polynomial time. In what follows, we show that the existence of such an edge set \(M^{\prime}\subseteq E^{\prime}\) is equivalent to the existence of a \(\mathcal{K}\)-free \(b\)-factor in the original graph \(G\). **Claim 3.3**.: _The graph \(G^{\prime}\) has an edge set \(M^{\prime}\subseteq E^{\prime}\) satisfying (2) if and only if the original graph \(G\) has a \(\mathcal{K}\)-free \(b\)-factor \(M\subseteq E\)._ Proof of Claim 3.3.: We first show the sufficiency ("if" part). Let \(M\subseteq E\) be a \(\mathcal{K}\)-free \(b\)-factor in \(G\). We construct an edge set \(M^{\prime}\subseteq E^{\prime}\) satisfying (2) in the following way. For each subgraph \(K\in\mathcal{K}\), let \(F_{K}\subseteq\delta_{G^{\prime}}(r_{K})\) be an edge set in \(G^{\prime}\) composed of exactly \(d_{M\cap E(K)}(u)\) parallel edges between \(u\) and \(r_{K}\) for each vertex \(u\in V(K)\). Note that such an edge set \(F_{K}\) must exist, because \(M\) is a \(b\)-factor, \(b(u)\leq t\), and \(G^{\prime}\) has \(t\) parallel edges between \(u\) and \(r_{K}\). Now define \(M^{\prime}\subseteq E^{\prime}\) by \[M^{\prime}=\left(M\setminus\bigcup_{K\in\mathcal{K}}E(K)\right)\cup\bigcup_{ K\in\mathcal{K}}F_{K}.\] Here we show that this edge set \(M^{\prime}\) satisfies (2). If \(v\in V\), it holds that \(\delta_{M^{\prime}}(v)\in\mathcal{F}_{v}\), since \(|\delta_{M^{\prime}}(v)|=|\delta_{M}(v)|=b(v)\). Let \(K\in\mathcal{K}\) and \(v=r_{K}\). 
The fact that \(M\) is \(\mathcal{K}\)-free implies \[(d_{M\cap E(K)}(u))_{u\in V(K)}\in D_{K}.\] Since \(d_{F_{K}}(u)=d_{M\cap E(K)}(u)\) for each vertex \(u\in V(K)\), it follows from the definition (1) of \(\mathcal{F}_{r_{K}}\) that \(F_{K}\in\mathcal{F}_{r_{K}}\), and hence \(\delta_{M^{\prime}}(r_{K})=F_{K}\in\mathcal{F}_{r_{K}}\). We thus conclude that \(M^{\prime}\) satisfies (2). We next show the necessity ("only if" part). Let \(M^{\prime}\subseteq E^{\prime}\) be an edge set satisfying (2). We construct a \(\mathcal{K}\)-free \(b\)-factor \(M\) in \(G\) in the following manner. For each subgraph \(K\in\mathcal{K}\), let \(F_{K}:=\delta_{M^{\prime}}(r_{K})\). It follows from (2) that \(F_{K}\in\mathcal{F}_{r_{K}}\), namely, there exists a \(b\)-matching \(N_{K}\subsetneq E(K)\) such that \(d_{N_{K}}(u)=d_{F_{K}}(u)\) for each vertex \(u\in V(K)\). Now define \(M\subseteq E\) by \[M=\left(M^{\prime}\setminus\bigcup_{K\in\mathcal{K}}F_{K}\right)\cup\bigcup_{ K\in\mathcal{K}}N_{K}.\] We complete the proof by showing that \(M\) is a \(\mathcal{K}\)-free \(b\)-factor in \(G\). Let \(v\in V\) be an arbitrary vertex in \(G\). Since \(d_{F_{K}}(u)=d_{N_{K}}(u)\) for each \(K\in\mathcal{K}\) and each \(u\in V(K)\), it holds that \(d_{M}(v)=d_{M^{\prime}}(v)=b(v)\), where the last equality follows from \(\delta_{M^{\prime}}(v)\in\mathcal{F}_{v}\). We thus have that \(M\) is a \(b\)-factor. Furthermore, since \(N_{K}\subsetneq E(K)\) for each \(K\in\mathcal{K}\), we conclude that \(M\) is \(\mathcal{K}\)-free. The proof of Claim 3.3 provides a polynomial-time construction of a \(\mathcal{K}\)-free \(b\)-factor \(M\) in \(G\) from an edge set \(M^{\prime}\subseteq E^{\prime}\) satisfying (2). We thus conclude that the original instance \((G,b,\mathcal{K})\) of \(\mathcal{K}\)-Free \(b\)-Factor Problem can be solved in polynomial time. By using Theorem 3.1, we can give a polynomial-time algorithm for Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem under the same assumptions. **Theorem 3.4**.: _For a fixed positive integer \(t\), Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem can be solved in polynomial time if \(b(v)\leq t\) for each \(v\in V\) and all the subgraphs in \(\mathcal{K}\) are \(t\)-regular complete partite and pairwise edge-disjoint._ Proof.: It follows from Theorem 2.3 that the set of the degree sequences of all \(\mathcal{K}\)-free \(b\)-matchings in \(G\) is a constant-parity jump system. Therefore, by Lemma 2.5 and Theorem 3.1, we can solve Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem in polynomial time. We remark that Theorem 1.2 is immediately derived from Theorem 3.4 by setting \(b(v)=t\) for every \(v\in V\). As described in Section 1, the edge-disjointness of the subgraphs in \(\mathcal{K}\) is relaxed to the condition (RD), which we restate here: * The subgraph family \(\mathcal{K}\) is partitioned into subfamilies \(\mathcal{K}_{1},\ldots,\mathcal{K}_{\ell}\) such that * for each subfamily \(\mathcal{K}_{i}\) (\(i=1,\ldots,\ell\)), the number \(\big{|}\bigcup_{K\in\mathcal{K}_{i}}V(K)\big{|}\) of vertices is bounded by a fixed constant, and * for distinct subfamilies \(\mathcal{K}_{i}\) and \(\mathcal{K}_{j}\) (\(i,j\in\{1,\ldots,\ell\}\)) and for each pair of subgraphs \(K\in\mathcal{K}_{i}\) and \(K^{\prime}\in\mathcal{K}_{j}\), it holds that \(K\) and \(K^{\prime}\) are edge-disjoint. 
**Theorem 3.5**.: _For a fixed positive integer \(t\), \(\mathcal{K}\)-Free \(b\)-Factor Problem and Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem can be solved in polynomial time if \(b(v)\leq t\) for each \(v\in V\) and \(\mathcal{K}\) is a family of \(t\)-regular complete partite subgraphs of \(G\) and satisfies the condition (RD)._ Proof.: It follows from Theorem 2.3 and Lemma 2.5 that Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem can also be solved in polynomial time if \(\mathcal{K}\)-Free \(b\)-Factor Problem is so. Hence, below we prove that \(\mathcal{K}\)-Free \(b\)-Factor Problem can be solved in polynomial time in a similar way to Theorem 3.1. Let \((G,b,\mathcal{K})\) be an instance of \(\mathcal{K}\)-Free \(b\)-Factor Problem, where \(G=(V,E)\), \(b\in\mathbb{Z}_{+}^{V}\), and \(\mathcal{K}\) is a family of subgraphs in \(G\) satisfying the condition (RD). Let \(\mathcal{K}_{1},\ldots,\mathcal{K}_{\ell}\) be the partition of \(\mathcal{K}\) in the condition (RD). For each \(i\in\{1,\ldots,\ell\}\), execute the following procedure. Let \(H_{i}\) be the graph defined as the union of all \(K\in\mathcal{K}_{i}\), i.e., \[H_{i}:=\left(\bigcup_{K\in\mathcal{K}_{i}}V(K),\bigcup_{K\in\mathcal{K}_{i}}E (K)\right).\] Then, * add a new vertex \(r_{i}\) and \(t\) parallel edges between \(r_{i}\) and \(v\) for each \(v\in V(H_{i})\), and remove the original edges in \(E(H_{i})\); and * compute a set \(D_{H_{i}}\subseteq\mathbb{Z}_{+}^{V(H_{i})}\) of the degree sequences in the \(\mathcal{K}_{i}\)-free \(b\)-matchings in \(H_{i}\), i.e., \[D_{H_{i}}=\left\{d_{F}\in\mathbb{Z}_{+}^{V(H_{i})}\colon F\text{ is a $\mathcal{K}_{i}$-free $b$-matching in $H_{i}$}\right\}.\] For each \(i\in\{1,\ldots,\ell\}\), it follows from Theorem 2.3 that the set \(D_{H_{i}}\) is a constant-parity jump system. We also remark that \(D_{H_{i}}\) can be computed efficiently in a brute force way, since \(|V(H_{i})|\) and \(t\) are bounded by a fixed constant. Now, by the same argument as in the proof of Theorem 3.1, we can solve \(\mathcal{K}\)-Free \(b\)-Factor Problem in polynomial-time with the aid of Theorem 2.6. We conclude this section by showing that a subgraph family \(\mathcal{K}\) with a certain laminar structure described below satisfies the condition (RD). **Corollary 3.6**.: _For a fixed positive integer \(t\), \(\mathcal{K}\)-Free \(b\)-Factor Problem and Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem can be solved in polynomial time if \(b(v)\leq t\) for each \(v\in V\) and \(\mathcal{K}\) is a family of \(t\)-regular complete partite subgraphs of \(G\) satisfying \(E(K)\cap E(K^{\prime})=\emptyset\), \(V(K)\subseteq V(K^{\prime})\), or \(V(K)\supseteq V(K^{\prime})\) for each pair of subgraphs \(K,K^{\prime}\in\mathcal{K}\)._ Proof.: We construct a partition of \(\mathcal{K}\) certifying that \(\mathcal{K}\) satisfies the condition (RD). Then the corollary immediately follows from Theorem 3.5. Let \(\mathcal{X}^{*}\) be the family of all inclusionwise maximal sets in \(\{V(K)\colon K\in\mathcal{K}\}\). For each vertex set \(X\in\mathcal{X}^{*}\), define a subfamily \(\mathcal{K}_{X}\) of \(\mathcal{K}\) by \(\mathcal{K}_{X}:=\{K\in\mathcal{K}\colon V(K)\subseteq X\}\). 
It suffices to show that \(E(K)\cap E(K^{\prime})=\emptyset\) for each pair of distinct vertex sets \(X,X^{\prime}\in\mathcal{X}^{*}\) and for each pair of subgraphs \(K\in\mathcal{K}_{X}\) and \(K^{\prime}\in\mathcal{K}_{X^{\prime}}\); this implies that \(\mathcal{K}_{X}\) (\(X\in\mathcal{X}^{*}\)) form a partition of \(\mathcal{K}\) satisfying the condition (RD). Suppose to the contrary that \(E(K)\cap E(K^{\prime})\neq\emptyset\) for some distinct vertex sets \(X,X^{\prime}\in\mathcal{X}^{*}\) and for some subgraphs \(K\in\mathcal{K}_{X}\) and \(K^{\prime}\in\mathcal{K}_{X^{\prime}}\). It follows from the assumption of \(\mathcal{K}\) that \(V(K)\subseteq V(K^{\prime})\) or \(V(K)\supseteq V(K^{\prime})\). Without loss of generality, assume \(V(K)\subseteq V(K^{\prime})\). Let \(K_{X}\in\mathcal{K}_{X}\) (resp. \(K_{X^{\prime}}\in\mathcal{K}_{X^{\prime}}\)) be a \(t\)-regular complete partite graph attaining \(V(K_{X})=X\) (resp. \(V(K_{X^{\prime}})=X^{\prime}\)). It follows from the maximality of \(X\) and \(X^{\prime}\) that \(V(K_{X})\not\subseteq V(K_{X^{\prime}})\) and \(V(K_{X})\not\supseteq V(K_{X^{\prime}})\). In the following, we prove that \(E(K_{X})\cap E(K_{X^{\prime}})\neq\emptyset\), which contradicts the assumption of \(\mathcal{K}\). Define a vertex set \(Y\) by \(Y:=V(K_{X})\cap V(K_{X^{\prime}})\). It is derived from \(V(K)\subseteq X=V(K_{X})\) and \(V(K)\subseteq V(K^{\prime})\subseteq X^{\prime}=V(K_{X^{\prime}})\) that \(V(K)\subseteq Y\). It then follows that \(|Y|\geq t+1\), which implies that both of the induced subgraphs \(K_{X}[Y]\) and \(K_{X^{\prime}}[Y]\) are complete partite graphs having at least two color classes. Since the complement of \(K_{X}[Y]\) is the disjoint union of (at least two) complete graphs, it is disconnected. On the other hand, \(K_{X^{\prime}}[Y]\) is connected. Hence we have \(E(K_{X}[Y])\cap E(K_{X^{\prime}}[Y])\neq\emptyset\), implying that \(E(K_{X})\cap E(K_{X^{\prime}})\neq\emptyset\). ## 4 Degree Bounded Graphs In this section, we consider the case where the maximum degree of \(G\) is at most \(2t-1\). **Theorem 4.1**.: _For a fixed positive integer \(t\), \(\mathcal{K}\)-Free \(b\)-Factor Problem can be solved in polynomial time if the maximum degree of \(G\) is at most \(2t-1\), \(b(v)\leq t\) for each \(v\in V\), and all the subgraphs in \(\mathcal{K}\) are \(t\)-regular complete partite._ Proof.: If \(t=1\), then the problem is trivial, because the maximum degree is one and a \(t\)-regular complete partite subgraph must be composed of a single edge. Therefore, it suffices to consider the case where \(t\geq 2\). Without loss of generality, we may assume that each subgraph \(K\in\mathcal{K}\) satisfies \[b(v)=t\text{ for each vertex }v\in V(K), \tag{3}\] since otherwise we can remove \(K\) from \(\mathcal{K}\). Define a vertex subset family \(\mathcal{X}\subseteq 2^{V}\) by \(\mathcal{X}=\{V(K)\colon K\in\mathcal{K}\}\). Construct a subfamily \(\mathcal{X}^{*}\subseteq\mathcal{X}\) of disjoint vertex subsets in \(\mathcal{X}\) in the following manner: start with \(\mathcal{X}^{*}=\emptyset\); and while there exists a set in \(\mathcal{X}\) disjoint from every set in \(\mathcal{X}^{*}\), add an inclusionwise maximal one to \(\mathcal{X}^{*}\). We denote \(\mathcal{X}^{*}=\{X_{1},X_{2},\ldots,X_{\ell}\}\). 
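The construction of \(\mathcal{X}^{*}\) amounts to the following simple greedy procedure (a direct transcription of the rule just described, with an illustrative function name of our own):

```python
def greedy_maximal_disjoint_family(family):
    """Greedily pick pairwise-disjoint vertex sets from `family`, always taking
    an inclusionwise maximal set among those disjoint from the current picks."""
    picked, used = [], set()
    while True:
        candidates = [S for S in family if not (S & used)]
        if not candidates:
            return picked
        # an inclusionwise maximal candidate: not a proper subset of another candidate
        S = next(S for S in candidates
                 if not any(S < T for T in candidates))
        picked.append(S)
        used |= S

print(greedy_maximal_disjoint_family([{1, 2, 3}, {2, 3}, {4, 5}, {5, 6}]))
# -> [{1, 2, 3}, {4, 5}]
```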
It follows from the construction that \(\mathcal{X}^{*}\subseteq\mathcal{X}\) satisfies the following property: \[\text{for each }X\in\mathcal{X}\setminus\mathcal{X}^{*}\text{, there exists }X_{i}\in\mathcal{X}^{*}\text{ such that }X\cap X_{i}\neq\emptyset\text{ and }X_{i}\not\subseteq X. \tag{4}\] For each \(X_{i}\in\mathcal{X}^{*}\), let \(\mathcal{K}_{i}=\{K\in\mathcal{K}\colon V(K)\subseteq X_{i}\}\) and let \(H_{i}\) be the union of all subgraphs in \(\mathcal{K}_{i}\), i.e., \[H_{i}=\left(X_{i},\bigcup_{K\in\mathcal{K}_{i}}E(K)\right).\] Let \(\mathcal{K}^{*}=\bigcup_{i=1}^{\ell}\mathcal{K}_{i}\). Note that \(\mathcal{K}_{1},\ldots,\mathcal{K}_{\ell}\) form a partition of \(\mathcal{K}^{*}\), and they satisfy the condition (RD). By using Theorem 3.5, in polynomial time, we can find a \(\mathcal{K}^{*}\)-free \(b\)-factor \(M\) in \(G\) or conclude that \(G\) has no \(\mathcal{K}^{*}\)-free \(b\)-factor. In the latter case, we can conclude that \(G\) has no \(\mathcal{K}\)-free \(b\)-factor, because \(\mathcal{K}^{*}\) is a subfamily of \(\mathcal{K}\). In the former case, we transform \(M\) into a \(\mathcal{K}\)-free \(b\)-factor as shown in the following claim. **Claim 4.2**.: _Given a \(\mathcal{K}^{*}\)-free \(b\)-factor \(M\) in \(G\), we can construct a \(\mathcal{K}\)-free \(b\)-factor in polynomial time._ Proof of Claim 4.2.: For a \(b\)-factor \(M\) in \(G\), define a subgraph family \(\mathcal{K}(M)\) by \[\mathcal{K}(M)=\{K\in\mathcal{K}\colon E(K)\subseteq M\},\] the set of forbidden subgraphs included in \(M\). Obviously, \(M\) is \(\mathcal{K}\)-free if and only if \(\mathcal{K}(M)=\emptyset\). In what follows, given a \(\mathcal{K}^{*}\)-free \(b\)-factor \(M\), we modify \(M\) so that \(\mathcal{K}(M)\) becomes smaller. Let \(M\) be a \(\mathcal{K}^{*}\)-free \(b\)-factor and suppose that \(\mathcal{K}(M)\neq\emptyset\). Then, there exists a subgraph \(K\in\mathcal{K}\setminus\mathcal{K}^{*}\) such that \(K\in\mathcal{K}(M)\), i.e., \(E(K)\subseteq M\). It follows from \(K\in\mathcal{K}\setminus\mathcal{K}^{*}\) that \(V(K)\in\mathcal{X}\setminus\mathcal{X}^{*}\). Then, (4) implies that there exists \(X_{i}\in\mathcal{X}^{*}\) such that \[V(K)\cap X_{i}\neq\emptyset\quad\text{and}\quad X_{i}\not\subseteq V(K).\] It holds that \(X_{i}=V(K^{*})\) for some \(K^{*}\in\mathcal{K}_{i}\), which follows from the construction of \(\mathcal{X}^{*}\) and the definition of \(\mathcal{K}_{i}\). We thus obtain \[V(K)\cap V(K^{*})\neq\emptyset\quad\text{and}\quad V(K^{*})\not\subseteq V(K).\] Take a vertex \(u\) in \(V(K)\cap V(K^{*})\). Since \(|\delta_{K}(u)|=|\delta_{K^{*}}(u)|=t\) and \(|\delta_{G}(u)|\leq 2t-1\), there exists an edge \(e\in\delta_{K}(u)\cap\delta_{K^{*}}(u)\), in particular \(e\in E(K)\cap E(K^{*})\). We denote \(e=uu^{\prime}\). Note that \(e\in M\) since \(E(K)\subseteq M\). Since \(V(K^{*})\not\subseteq V(K)\), there exists a vertex \(v\in V(K^{*})\setminus V(K)\). From (3) and \(K^{*}\in\mathcal{K}\), we obtain \(|\delta_{M}(v)|=b(v)=t\). It then follows from \(|\delta_{G}(v)|\leq 2t-1\) and \(|\delta_{K^{*}}(v)|=t\) that \(\delta_{M}(v)\cap\delta_{K^{*}}(v)\neq\emptyset\), that is, there exists an edge \(e^{*}\in\delta_{K^{*}}(v)\) contained in \(M\). We denote \(e^{*}=vv^{\prime}\). Since \(K\) is a connected component of the subgraph induced by \(M\) (see Remark 2.2), it holds that \(v^{\prime}\in V(K^{*})\setminus V(K)\); see Figure 2.
Since \(e,e^{*}\in E(K^{*})\) and \(K^{*}\) is a complete partite graph, \(u\) and \(u^{\prime}\) are contained in different color classes of \(K^{*}\), and so are \(v\) and \(v^{\prime}\). This shows that \(K^{*}\) contains two edges: \(uv\) and \(u^{\prime}v^{\prime}\); or \(uv^{\prime}\) and \(u^{\prime}v\). By symmetry, assume that \(f=uv\) and \(f^{\prime}=u^{\prime}v^{\prime}\) are contained in \(K^{*}\); see Figure 2 again. Note that \(f\) and \(f^{\prime}\) are not contained in \(M\), because \(\delta_{M}(u)=\delta_{K}(u)\) and \(\delta_{M}(u^{\prime})=\delta_{K}(u^{\prime})\) hold. Define \(M^{\prime}=(M\setminus\{e,e^{*}\})\cup\{f,f^{\prime}\}\), which is also a \(b\)-factor. In what follows, we prove that \(M^{\prime}\) is the desired \(\mathcal{K}^{*}\)-free \(b\)-factor, i.e., \(\mathcal{K}(M^{\prime})\subsetneq\mathcal{K}(M)\). Since \(K\not\in\mathcal{K}(M^{\prime})\), it suffices to show that \(\mathcal{K}(M^{\prime})\subseteq\mathcal{K}(M)\). Assume to the contrary that there exists a subgraph \(K^{\prime}\in\mathcal{K}(M^{\prime})\setminus\mathcal{K}(M)\). Then, \(K^{\prime}\) must contain at least one of \(f\) and \(f^{\prime}\), and without loss of generality assume that \(f\in E(K^{\prime})\). Since \(K-e\) is connected by \(t\geq 2\) and \(M^{\prime}\) contains \((E(K)\setminus\{e\})\cup\{f\}\) by \(e^{*}\notin E(K)\), it follows from Remark 2.2 that \(V(K)\cup\{v\}\) is contained in \(K^{\prime}\), in particular \(u,u^{\prime},v\in V(K^{\prime})\). Since all the edges in \(\delta_{M}(u^{\prime})\) are contained in \(K\) and \(v\not\in V(K)\), \(M\) has no edge connecting \(u^{\prime}\) and \(v\), and neither does \(M^{\prime}\). It then follows from \(K^{\prime}\in\mathcal{K}(M^{\prime})\), i.e., \(E(K^{\prime})\subseteq M^{\prime}\), that \(u^{\prime}v\not\in E(K^{\prime})\). Since \(e\) is the only edge in \(M\) connecting \(u\) and \(u^{\prime}\), we have \(uu^{\prime}\not\in M^{\prime}\), which implies that \(uu^{\prime}\not\in E(K^{\prime})\). It now follows from \(u^{\prime}v,uu^{\prime}\not\in E(K^{\prime})\) that \(u,u^{\prime}\) and \(v\) are contained in the same color class of \(K^{\prime}\), since \(K^{\prime}\) is complete partite. This contradicts the fact that \(K^{\prime}\) contains \(f=uv\), and thus we conclude that \(\mathcal{K}(M^{\prime})\subsetneq\mathcal{K}(M)\). By repeating the above procedure, we obtain a \(b\)-factor \(M\) with \(\mathcal{K}(M)=\emptyset\), i.e., \(M\) is \(\mathcal{K}\)-free. It is straightforward to see that this procedure can be executed in polynomial time, which completes the proof. Therefore, we conclude that \(\mathcal{K}\)-Free \(b\)-Factor Problem can be solved in polynomial time. From Theorem 4.1, we can derive the following theorem by applying the same argument as Theorem 3.4. **Theorem 4.3**.: _For a fixed positive integer \(t\), Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem can be solved in polynomial time if the maximum degree of \(G\) is at most \(2t-1\), \(b(v)\leq t\) for each \(v\in V\), and all the subgraphs in \(\mathcal{K}\) are \(t\)-regular complete partite._ From Theorem 4.3, we immediately obtain Theorem 1.3 by setting \(b(v)=t\) for every \(v\in V\). ## 5 Generalization: Forbidden Degree Sequences In this section, we extend Theorems 3.1, 3.4, and 3.5 so that the forbidden structure is not an edge-disjoint family of \(t\)-regular subgraphs but that of the subgraphs with specified degree sequences. 
A similar problem of _general factors_ is of classical and recent interest [7, 10, 24, 29, 40, 41]. Let \(G=(V,E)\) be a graph and \(\mathcal{H}\) be a family of subgraphs of \(G\). Each subgraph \(H\in\mathcal{H}\) is associated with a set of degree sequences \(\overline{D}_{H}\subseteq\mathbb{Z}_{+}^{V(H)}\). Define an edge subset family \(\mathcal{F}_{H}\subseteq 2^{E(H)}\) by \[\mathcal{F}_{H}=\{F\subseteq E(H)\colon d_{F}\in\overline{D}_{H}\},\] and subgraph families \[\mathcal{K}_{H}=\{(V(H),F)\colon F\in\mathcal{F}_{H}\},\quad\mathcal{K}_{ \mathcal{H}}=\bigcup_{H\in\mathcal{H}}\mathcal{K}_{H}.\] Figure 2: All of the edges are in \(E(K^{*})\) and, particularly, all of the solid edges are in \(M\). The solid bold edge is in \(E(K^{*})\cap E(K)\) and the other thin edges are in \(E(K^{*})\setminus E(K)\). We are interested in \(\mathcal{K}_{\mathcal{H}}\)-free \(b\)-matchings. Namely, for a subgraph \(H\in\mathcal{H}\), \(\overline{D}_{H}\) represents the set of forbidden degree sequences on \(V(H)\), and \(\mathcal{K}_{H}\) represents the family of the forbidden subgraphs of \(H\), i.e., those attaining the degree sequences in \(\overline{D}_{H}\). We now extend Theorems 3.1, 3.4, and 3.5 in the following way. **Theorem 5.1**.: \(\mathcal{K}\)-Free \(b\)-Factor Problem _and Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem can be solved in polynomial time if the following conditions are satisfied:_ 1. \(b(v)\) _is bounded by a fixed constant for each_ \(v\in V\)_; and_ 2. \(\mathcal{K}=\mathcal{K}_{\mathcal{H}}\) _for an edge-disjoint family_ \(\mathcal{H}\) _of subgraphs of_ \(G\) _such that, for each_ \(H\in\mathcal{H}\)_,_ 1. \(|V(H)|\) _is bounded by a fixed constant, and_ 2. \(D_{H}:=\{d_{F}\colon F\text{ is a b-matching in }H\}\setminus\overline{D}_{H}\) _is a constant-parity jump system._ Proof.: Suppose that the conditions 1 and 2 are satisfied. By following the proof of Theorem 3.1, we see that \(\mathcal{K}\)-Free \(b\)-Factor Problem can be solved in polynomial time. We now prove that Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem can be solved in polynomial time. Define \(J\subseteq\mathbb{Z}_{+}^{V}\) as the set of the degree sequences of all \(\mathcal{K}\)-free \(b\)-matchings in \(G\). In order to apply Lemma 2.5, we show that \(J\) is a constant-parity jump system. Define \(J_{0}\subseteq\mathbb{Z}_{+}^{V}\) by \[J_{0}=\left\{d_{F}\in\mathbb{Z}_{+}^{V}\colon F\subseteq E\setminus\bigcup_{H \in\mathcal{H}}E(H)\right\},\] which is a constant-parity jump system. For each subgraph \(H\in\mathcal{H}\), regard \(D_{H}\subseteq\mathbb{Z}_{+}^{V(H)}\) as a subset of \(\mathbb{Z}_{+}^{V}\) by setting \(x(v)=0\) for each \(x\in D_{H}\) and \(v\in V\setminus V(H)\). Then, \(J\) is obtained from \(J_{0}\) by taking the Minkowski sum with \(D_{H}\) for all \(H\in\mathcal{H}\), and then taking the intersection with a box \(\{x\in\mathbb{R}^{V}\colon\mathbf{0}\leq x\leq b\}\). This shows that \(J\) is a constant-parity jump system. Thus, by applying Lemma 2.5, we conclude that Maximum \(\mathcal{K}\)-Free \(b\)-Matching Problem can be solved in polynomial time. Observe that Theorems 3.1 and 3.4 are exactly special cases of Theorem 5.1, where \(H\) is a \(t\)-regular complete partite graph and \(\overline{D}_{H}=\{(t,t,\ldots,t)\}\) for each \(H\in\mathcal{H}\). Observe also that Theorem 3.5 is a special case of Theorem 5.1, where \(\mathcal{H}=\{H_{1},\ldots,H_{\ell}\}\). We conclude this paper with a few applications of Theorem 5.1. 
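As a brute-force illustration of condition 2(b) (our own sketch, not part of the paper), one can enumerate the degree sequences of all \(b\)-matchings of a small subgraph \(H\), delete the forbidden sequences, and test the constant-parity jump system axioms directly. For concreteness, the snippet below uses the graph of Example 5.2 below (\(K_5\) minus a matching of size two) with \(b\equiv 2\) and forbidden sequence \((2,\ldots,2)\).

```python
from itertools import chain, combinations

V = range(5)
E = [e for e in combinations(V, 2) if e not in [(0, 1), (2, 3)]]  # K5 minus {01, 23}
b = {v: 2 for v in V}
forbidden = {(2, 2, 2, 2, 2)}

def degrees(F):
    d = [0] * len(V)
    for u, w in F:
        d[u] += 1
        d[w] += 1
    return tuple(d)

# D_H: degree sequences of all b-matchings of H, minus the forbidden ones
D = set()
for F in chain.from_iterable(combinations(E, k) for k in range(len(E) + 1)):
    d = degrees(F)
    if all(d[v] <= b[v] for v in V):
        D.add(d)
D -= forbidden

def steps(x, y):
    """Points reachable from x by one unit step towards y."""
    out = []
    for i in range(len(x)):
        if x[i] != y[i]:
            s = list(x)
            s[i] += 1 if y[i] > x[i] else -1
            out.append(tuple(s))
    return out

# constant parity: every member has a degree sum of the same parity
assert len({sum(x) % 2 for x in D}) == 1
# exchange axiom: for x, y in D and a step x -> x1 towards y, either x1 is in D
# or some further step x1 -> x2 towards y lands in D
for x in D:
    for y in D:
        for x1 in steps(x, y):
            assert x1 in D or any(x2 in D for x2 in steps(x1, y))
print(len(D), "degree sequences; constant-parity jump system axioms verified")
```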
**Example 5.2**.: Suppose that each subgraph \(H\in\mathcal{H}\) is obtained from \(K_{5}\) by removing a matching of size two (which is unique up to isomorphism). Let \(\overline{D}_{H}=\{(2,\ldots,2)\}\) for each \(H\in\mathcal{H}\) and let \(b(v)=2\) for each \(v\in V\). It then follows that \(D_{H}=\{d_{F}\colon F\text{ is a 2-matching in }H\}\setminus\{(2,\ldots,2)\}\) is a constant-parity jump system. It also follows that the subgraph family \(\mathcal{K}_{H}\) consists of cycles of length five in \(H\). Now Theorem 5.1 shows that we can find a maximum 2-matching which does not contain the cycles of length five in \(H\) for each subgraph \(H\in\mathcal{H}\). This is an interesting contrast to the fact that finding a maximum \(C_{5}\)-free 2-matching in graphs is NP-hard (see [8]). To the best of our knowledge, this is the first polynomially solvable class of the restricted 2-matching problem excluding cycles of length five. **Example 5.3**.: Suppose that each subgraph \(H\in\mathcal{H}\) is obtained from \(K_{3,3}\) by removing an edge (which is unique up to isomorphism). Let \(\overline{D}_{H}=\{(2,\ldots,2)\}\) for each \(H\in\mathcal{H}\) and let \(b(v)=2\) for each \(v\in V\). It then follows that \(D_{H}=\{d_{F}\colon F\text{ is a 2-matching in }H\}\setminus\{(2,\ldots,2)\}\) is a constant-parity jump system. It also follows that the subgraph family \(\mathcal{K}_{H}\) consists of the cycles of length six in \(H\). Now Theorem 5.1 shows that we can find a maximum 2-matching which does not contain the cycles of length six in \(H\) for each subgraph \(H\in\mathcal{H}\). This is an interesting contrast to the fact that finding a maximum \(C_{6}\)-free 2-matching in bipartite graphs is NP-hard (Geelen, see [13, 20]). Note also that such a graph \(H\) (i.e., \(K_{3,3}-e\)) is discussed by Takazawa [46] as an example of so called _Hamilton-laceable graphs_[43]. ## Acknowledgments The first author was supported by JSPS KAKENHI Grant Numbers JP20K23323, JP20H05795, JP22K17854. The second author was supported by JSPS KAKENHI Grant Numbers JP20K11692, JP20H05795, JP22H05001. The third author was supported by JSPS KAKENHI Grant Number JP20K11699.
2309.12394
Bridging the gap in the mass-size relation of compact galaxies with MaNGA
We present the analysis of the full MaNGA DR17 sample to characterize its population of compact galaxies. We focus on galaxies that fill the stellar mass (M$_{\star}$) gap between compact elliptical galaxies (cEs; $8 \lesssim \log \left(M_{\star} / M_{\odot} \right) \lesssim 10$) and compact massive galaxies (CMGs; $10 \lesssim \log \left(M_{\star} / M_{\odot} \right)$). We study their stellar populations and kinematics to reveal how their properties depend on stellar mass. We select compact galaxies in the MaNGA DR17 sample according to their effective radius ($R_e$) and stellar mass. 37 galaxies fulfill our selection criteria in the bridging region between cEs and CMGs. We derive their kinematics and stellar population parameters from the stacked spectra at 1~$R_e$ using a full spectral fitting routine. We then classify the selected compact galaxies in three main groups based on their stellar population properties. One of the groups shows characteristics compatible with relic galaxies, i.e. galaxies that have remained mostly unchanged since their early formation epoch ($z \sim 2$). Another group shows more extended and continuous star formation histories (SFHs). The third group shows a low star-forming rate at initial times, which increases at around $\sim4$ Gyr. We compare the derived properties of the selected galaxies with those of previously studied compact galaxies at different mass ranges. The selected galaxies successfully fill the mass gap between cEs and CMGs. Their properties are compatible with the assumption that the scaling relations of compact galaxies at different mass ranges are related, although galaxies in the first group are clear outliers in the fundamental plane, suggesting different formation mechanisms for this relic population.
P. Grèbol-Tomàs, A. Ferré-Mateu, H. Domínguez-Sánchez
2023-09-21T18:00:03Z
http://arxiv.org/abs/2309.12394v1
# Bridging the gap in the mass-size relation of compact galaxies with MaNGA ###### Abstract We present the analysis of the full MaNGA DR17 sample to characterize its population of compact galaxies. We focus on galaxies that fill the stellar mass (M\({}_{*}\)) gap between compact elliptical galaxies (cEs; \(8\lesssim\log{(M_{*}/M_{\odot})}\lesssim 10\)) and compact massive galaxies (CMGs; \(10\lesssim\log{(M_{*}/M_{\odot})}\)). We study their stellar populations and kinematics to reveal how their properties depend on stellar mass. We select compact galaxies in the MaNGA DR17 sample according to their effective radius (\(R_{e}\)) and stellar mass. 37 galaxies fulfill our selection criteria in the bridging region between cEs and CMGs. We derive their kinematics and stellar population parameters from the stacked spectra at 1 \(R_{e}\) using a full spectral fitting routine. We then classify the selected compact galaxies in three main groups based on their stellar population properties. One of the groups shows characteristics compatible with relic galaxies, i.e. galaxies that have remained mostly unchanged since their early formation epoch (\(z\sim 2\)). Another group shows more extended and continuous star formation histories (SFHs). The third group shows a low star-forming rate at initial times, which increases at around \(\sim 4\) Gyr. We compare the derived properties of the selected galaxies with those of previously studied compact galaxies at different mass ranges. The selected galaxies successfully fill the mass gap between cEs and CMGs. Their properties are compatible with the assumption that the scaling relations of compact galaxies at different mass ranges are related, although galaxies in the first group are clear outliers in the fundamental plane, suggesting different formation mechanisms for this relic population. keywords: galaxies: evolution - galaxies: formation - galaxies: kinematics and dynamics - galaxies: stellar content - galaxies: compact galaxies ## 1 Introduction It is well established that massive galaxies were more compact in the early Universe (e.g. Daddi et al., 2005; Trujillo et al., 2007; Buitrago et al., 2008). Although compact galaxies are observed in the local Universe, their number density increases with redshift. The realm of compact galaxies, i.e. galaxies which have smaller radii than the majority of the galaxies at a given mass, covers approximately 5 orders of magnitude in the stellar mass range. At the lowest stellar masses, \(6<\log{(M_{*}/M_{\odot})}<8\), ultra compact dwarf galaxies (UCDs) present the smallest projected effective radii (up to \(R_{e}\sim 20\) pc), making them the most compact galaxies in the Universe (e.g. Drinkwater et al., 2000; Phillipps et al., 2001; Brodie et al., 2011). In the intermediate mass range (\(8<\log{(M_{*}/M_{\odot})}<10\)), compact elliptical galaxies (cEs) have sizes of \(100<R_{e}\) (pc) \(<900\)(e.g. Faber, 1973; Choi et al., 2002; Drinkwater et al., 2003). Finally, the massive end of the compact realm is populated by compact massive galaxies (CMGs). These galaxies present high stellar masses (\(\log{(M_{*}/M_{\odot})}>10\)) and small radii (\(R_{e}<1.5\) kpc ; Shen et al., 2003), and have been extensively shown to be outliers of the local mass-size relations. The current galaxy formation paradigm states that the ETGs that we observe today grow in a two-phase formation scenario (Bezanson et al., 2009; Naab et al., 2009; Oser et al., 2010, 2012; Hilz et al., 2013). 
In the first phase, which takes place at the earliest stages of the Universe, a gas-rich star-forming system is created (Dekel et al., 2009). The result is an extremely compact object, often referred to as a _blue nugget_. These galaxies show blue colors and high luminosity (Zolotov et al., 2015), fuelled by an intense star formation (SFR\(\geq 10^{3}\)M\({}_{\odot}\) yr\({}^{-1}\); Smith, 2020). At some stage this dissipative phase ends, and the compact object is quenched, becoming a massive, red and metal-rich object. These CMGs, also nicknamed _red nuggets_ (Damjanov et al., 2014; Schreiber et al., 2018; Martin-Navarro et al., 2019; Valentino et al., 2020), mark the end of the first phase of formation, at most by \(z\sim 2\). Recent observations with the James Webb Space Telescope have discovered red nuggets even at \(z\sim 7\) (Nanayakkara et al., 2022; Carnall et al., 2023, 2023). The second phase of ETG growth is driven by dry minor merger events, which induce the growth of the red nugget by adding accreted material to the outskirts of these galaxies. This process could explain the mild growth in stellar mass but the large increase in size, driving the strong size evolution seen over cosmic time and building up the massive ETG population observed at \(z=0\) (Daddi et al., 2005; Trujillo et al., 2007; van Dokkum et al., 2010). Since the second phase is driven by stochastic events, there is a low probability that a galaxy avoids such a phase, remaining unchanged since the early stages and presenting the properties of a red nugget (Trujillo et al., 2009; Quilis and Trujillo, 2013; Poggianti et al., 2013; Damjanov et al., 2014). These untouched galaxies, found at \(z\sim 0\), are often referred to as _massive relic galaxies_. These are rare and hard-to-find objects but extremely valuable for understanding galaxy formation, as they have properties similar to those of high-\(z\) ETGs but are observed with the spectral and spatial resolution of local galaxies. Their importance lies in their relation to ETGs at high redshift (Garguito et al., 2016; Belli et al., 2017; Tanaka et al., 2019). Theoretical models predicting red nugget survival are sensitive to galaxy merging processes (Wellons et al., 2016), and they estimate that 0.15% of the massive galaxies formed at \(z\simeq 2\) could end up being a massive relic galaxy (Quilis and Trujillo, 2013). The current number of confirmed massive relic galaxies up to \(z\sim 0.5\) is 13 (Trujillo et al., 2014; Ferre-Mateu et al., 2017; Spiniello et al., 2021). However, only three massive relic galaxies in the local Universe have been characterized in full detail: Mrk1216, PGC032873, NGC1277 (Trujillo et al., 2014; Ferre-Mateu et al., 2017). All these massive relic galaxies feature a high mass (\(\log{(M_{*}/M_{\odot})}>11\)) and a small radius (\(R_{e}<1\) kpc), with a fast star formation episode as early as the time of the Big Bang (\(t\sim t_{BB}\sim 14\) Gyr). They all present disk-like morphologies, similar to those observed in massive red nuggets (Buitrago et al., 2008; van der Wel et al., 2011; Trujillo et al., 2014). Massive relics are found in all environments, although they seem to thrive in clusters of galaxies (Poggianti et al., 2013; Cebrian and Trujillo, 2014; Damjanov et al., 2015; Stringer et al., 2015; Peralta de Arriba et al., 2016). This is a combination of serendipity (e.g. finding them in a cluster is easier) and the extreme conditions of the cluster itself. 
The high gravitational fields accelerate the galaxies, and given their high velocities, this prevents mergers from taking place, promoting the occurrence of massive relic galaxies. Similar conditions in the field are expected to occur later, and therefore galaxies in this environment will tend to be less extreme in their properties (Ferre-Mateu et al., 2017). In the latter, a 'degree of relicness' linked to the environment was proposed for the known massive relic galaxies, later supported by Spiniello et al. (2021). However, not all CMGs are massive relic galaxies. In fact, the majority of CMGs found in the local Universe show surprisingly large fractions of young stellar populations (e.g. Trujillo et al., 2009, Poggianti et al., 2013; Damjanov et al., 2014; Buitrago et al., 2018). How these galaxies are formed still poses a great challenge within the current cosmological models (Ferre-Mateu et al., 2012). While some of them could be the remnant of a more massive galaxy that has lost its stars due to external processes, such a formation scenario is less likely to happen and CMGs are mostly expected to be formed by in-situ processes (Cappellari, 2016). As we move towards lower stellar masses, the leading formation scenario changes from in-situ processes to external processes playing a more relevant role. This change seems to occur around the characteristic mass scale of \(3\times 10^{10}M_{\odot}\) (e.g. Cappellari, 2016; Ferre-Mateu et al., 2018, 2021; Dominguez Sanchez et al., 2020), where several relations of ETGs show relevant changes. As a result, cEs are thought to be a mixed bag of objects, although they are mostly thought to be the result of stripping a dwarf elliptical galaxy or a low-mass ETG or spiral (e.g. Faber, 1973; Bekki et al., 2001; Choi et al., 2002; Graham, 2002; Paudel et al., 2016). However, some of these galaxies are also expected to form in situ, making cEs the true low-mass end of ETGs (e.g. Kormendy et al., 2009; Kormendy and Bender, 2012; Du et al., 2019). In the first case, where the cE is the result of a stripping process, such galaxies are expected to be outliers in most of the scaling relations, such as the black hole-galaxy mass or the mass-metallicity relations (e.g. Norris et al., 2014; Janz et al., 2016; Ferre-Mateu et al., 2018, 2021; Kim et al., 2020). This is further supported by the large SMBHs typically found in their centers (e.g. Forbes et al., 2014; Paudel et al., 2016; Pechetti et al., 2017; Ferre-Mateu et al., 2021). But the strongest evidence for this evolutionary path has been seen observationally, with cEs currently being stripped by a larger galaxy (e.g. Huxor et al., 2011; Paudel and Ree, 2014; Ferre-Mateu et al., 2018). Nonetheless, evidence for some cEs being formed in-situ has also been seen, in particular outside the cluster environment, where stripping is not likely to happen (Huxor et al., 2013; Paudel et al., 2014; Ferre-Mateu et al., 2018, 2021; Kim et al., 2020). As they are thought to be the very low-mass end of ETGs, it is expected that they will follow the scaling relations at such low-mass end. Unfortunately, there is not yet a precise census of compact galaxies for each formation pathway (in-situ vs. ex-situ), due to the incomplete samples we have at hand. Interestingly, Ferre-Mateu et al. (2021) suggested that there may be a connection between the cE and CMG families. 
In their mass-size relation plot (Ferre-Mateu et al., 2021, Figure 11), they showed that the distributions of CMGs and cEs presented similar stellar populations, while also sharing similar kinematic features. However, there was a noticeable gap between these two groups of compact galaxies, which could be the clue to reveal whether such a connection exists in reality. To this end, the following study is aimed at looking for compact galaxies bridging this gap. We study their kinematic and stellar population properties, in order to relate the evolutionary paths of compact galaxies at different masses. To that purpose, we use local galaxies from the MaNGA survey (Bundy et al., 2015) due to its large statistics and wealth of data, including IFU observations. In this work we present the study of the MaNGA sample and the global properties of the selected compact galaxies, whereas the spatially-resolved analysis will be done in a future work. In Section 2 we present the MaNGA survey and our criteria to select compact galaxies. In Section 3 we obtain the main kinematic and stellar population properties of the selected sample. We then classify the compact galaxies in different groups based on these properties. In Section 4 we discuss the stellar population and kinematic properties of each group independently and we compare them with the properties shown by cEs and CMGs in the literature. Finally, we present in Section 5 a summary of our conclusions by sketching how the properties of each group can be linked with different galaxy formation pathways. ## 2 Sample In this work, we use the Mapping Nearby Galaxies at APO (MaNGA; Bundy et al., 2015) survey, a Sloan Digital Sky Survey (SDSS; York et al., 2000) survey. With its latest data release, DR17, spectroscopy for over 10 000 galaxies up to \(z<0.17\) has been obtained. This survey takes advantage of the _Integral Field Unit_ (IFU) technology to obtain spatially resolved spectroscopy for each single galaxy. Data are presented as datacubes, where two dimensions correspond to spatial coordinates (known as _spaxels_) and the third contains the spectrum of each spaxel. The spectra cover a wavelength range from 3600 Å to 10300 Å with a spectral resolution of \(R\sim 2000\), which roughly corresponds to 2.51 Å at 5000 Å. We select compact galaxies by imposing mass and size criteria. The structural photometric parameters are obtained from the MaNGA PyMorph DR17 catalog (Fischer et al., 2019). It provides parameters from fitted Sérsic and Sérsic+Exponential profiles to the 2D surface brightness profiles of MaNGA DR17 galaxies. From this catalog we use the effective radii (\(R_{\rm e}\)), axis ratios and galaxy luminosities. The latter are translated into stellar mass (\(M_{*}\)) using the mass-to-light ratio from Mendel et al. (2014). The MaNGA PyMorph catalog provides a flagging system (FLAG_FIT) which indicates whether a galaxy is better described by a Sérsic or a Sérsic+Exponential profile. We therefore use the parameters returned by the optimal model for each galaxy. When FLAG_FIT equals 0 (both models are acceptable), we use the parameters returned by the Sérsic+Exponential parametrization. There are a number of different criteria in the literature to define compact galaxies, particularly at the high-mass end (e.g. Buitrago et al., 2018; Valentinuzzi et al., 2010; Scognamiglio et al., 2020). 
Since we are aiming to fill the gap that connects the high-mass end of cEs with the low-mass end of CMGs, here we impose the following criteria: * \(M_{*}>10^{9}\)\(M_{\sun}\) * \(R_{\rm e}<2\) kpc * \(\log\left(\Sigma_{1.5}\right)>10.3\) dex where \(\Sigma_{1.5}=M_{*}/\left(R_{\rm e}\,[\mathrm{kpc}]\right)^{1.5}\) is a modified surface mass density, as in Barro et al. (2013). The first condition is set to select all galaxies with stellar masses that cover the high-mass end of cEs, which is sometimes missed with the low-mass end of regular elliptical galaxies (see e.g. Ferre-Mateu et al., 2021, Figure 11). The second corresponds to the largest size limit used to select CMGs (Buitrago et al., 2018; Charbonnier et al., 2017). The third criterion ensures the compactness of the candidate (based on Barro et al., 2013; Damjanov et al., 2015; Charbonnier et al., 2017). We show in Figure 1 the \(R_{\rm e}\) vs M\({}_{*}\) for the complete sample of 10293 MaNGA DR17 galaxies. Galaxies are marked according to their morphology. We take advantage of the morphological classification presented in the MaNGA Deep Learning Morphological Value Added catalog (MDLM-VAC; Dominguez Sanchez et al., 2022). It provides a series of binary classifications which separate ETGs from LTGs, pure ellipticals (Es) from lenticulars (S0), barred from non-barred galaxies and edge-on from non-edge-on galaxies. In addition, the catalog also reports a T-Type value, analogue to the Hubble (1926) sequence. For the figure, we select ETGs by requiring: T-Type<0 and PS0<0.5. The selection results in 3834 out of 10293 galaxies from the MaNGA DR17 parent sample. In Figure 1, compact galaxies selected by the above criteria are shown in green, all of them being ellipticals. This selection returns 38 galaxies. After visually inspecting each individually, we discard object 8092 - 12794 as it corresponds to two interacting galaxies. Our final sample thus consists of 37 compact galaxies. Their stellar masses, effective radii and redshifts are quoted in Table 1. The region corresponding to where compact galaxies would be located is colored in Figure 1. The yellowish region shows the high-mass end of cEs, while CMGs are expected to populate the bluish region. The galaxies selected in this work do, precisely, fall in the mass gap between cEs and CMGs (turquoise region). These mass limits are purely illustrative, as there is not a unique mass threshold in the literature to distinguish compact families. In fact, uncertainties in the \(M_{*}\) and \(R_{\rm e}\) estimations can vary the number of selected galaxies. To check the robustness of our selection, for each galaxy we have considered the combinations of mass and \(R_{e}\) most favourable and most unfavourable, within errors, for it to be considered as compact. We assume a standard error on \(M_{*}\) of 20%, while the error in \(R_{\rm e}\) is quoted from the PyMorph catalog. The most favorable conditions (largest \(M_{*}\), smallest \(R_{\rm e}\)) would provide 63 galaxies, an increase of \(\times 1.5\) with respect to the nominal value. The least favorable ones (smallest \(M_{*}\) and largest \(R_{e}\)) would instead only provide 23 galaxies, roughly 60% of the selection from the nominal values. These galaxies are therefore considered the most robust (highlighted in Table 1), but we will use hereafter the nominal values. 
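As a rough illustration of the selection above (a sketch of our own, with invented column names rather than the actual catalog fields), the three cuts and the ETG pre-selection can be applied to a tabulated catalog as follows, with \(\Sigma_{1.5}=M_{*}/(R_{\rm e}\,[\mathrm{kpc}])^{1.5}\) as in Barro et al. (2013).

```python
import numpy as np
import pandas as pd

# hypothetical catalogue: stellar mass in Msun, effective radius in kpc,
# T-Type and P_S0 as in the MDLM-VAC morphological catalogue
cat = pd.DataFrame({
    "mass":   [6.0e10, 5.0e9, 8.0e10],
    "re_kpc": [1.2, 0.9, 3.5],
    "ttype":  [-2.0, -1.5, 3.0],
    "p_s0":   [0.1, 0.3, 0.6],
})

sigma15 = cat["mass"] / cat["re_kpc"] ** 1.5           # modified surface mass density
is_etg = (cat["ttype"] < 0) & (cat["p_s0"] < 0.5)      # early-type pre-selection
compact = (
    is_etg
    & (cat["mass"] > 1e9)                              # M* > 10^9 Msun
    & (cat["re_kpc"] < 2.0)                            # Re < 2 kpc
    & (np.log10(sigma15) > 10.3)                       # log Sigma_1.5 > 10.3 dex
)
print(cat[compact])                                    # only the first toy row survives
```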
According to the two-phase formation paradigm, compact galaxies should be already formed by at least \(z\sim 2\), after the red nugget is formed. If compact galaxies in the mass gap are somehow the remnants of this early stage or directly connected to it, they should roughly match the mass-size relation at \(z\sim 2\). For example, all massive relic galaxies studied to date are consistent with the mass-size relation of \(z\sim 2\) galaxies (e.g. Ferre-Mateu et al., 2017; Yildirim et al., 2017; Spiniello et al., 2021). In the right panel of Figure 1 we compare the 37 selected compact galaxies to mass-size relations at different redshifts using CANDELS/3D-HST (from van der Wel et al., 2014). We find that, although all the galaxies in our sample are found in the nearby Universe, they are in reality more compatible with the mass-size relations at \(z\sim 1.25-1.75\). Only the most massive galaxy, 11020-1902, is compatible with a \(z\sim 2\) relation, making it the best candidate for a relic galaxy in this sample. We will discuss this particular galaxy in more detail in Section 4.1. \begin{table} \begin{tabular}{l l l l} \hline **Plate-IFU** & \(\log\left(M_{*}/M_{\sun}\right)\) & \(R_{\rm e}\) [kpc] & \(z\) \\ \hline [MISSING_PAGE_POST] & 10.53 & 1.10 \(\pm\) 0.01 & 0.0274 \\ \hline \end{tabular} * [https://www.sdss4.org/dr17/manga/manga-target-selection/nsa/](https://www.sdss4.org/dr17/manga/manga-target-selection/nsa/) \end{table} Table 1: The 37 selected MaNGA compact galaxies. Each galaxy is labelled according to its Plate-IFU given by the MaNGA DR17 survey. The redshift value is obtained from the NASA-Sloan Atlas catalog. Stellar mass and effective radius values (including their errors) are estimations from the PyMorph and Deep Learning VACs (Dominguez Sánchez et al., 2022; Fischer et al., 2019). Stellar mass errors are assumed to be uniform and equal to 20% of the nominal \(M_{*}\) value. Galaxies with an asterisk in their Plate-IFU are those selected even when considering the most unfavorable \(M_{*}\) and \(R_{\rm e}\) values according to their uncertainties. ### Methodology In this work we aim at characterizing the global properties of the galaxies bridging the gap between cEs and CMGs in the mass-size relation of the MaNGA DR17 sample. In order to increase the signal-to-noise ratio and to simplify the statistical analysis, we have stacked together all the spaxels within 1 \(R_{e}\) for each galaxy cube. We have used a dpuser script applied to the QFitsView FITS file viewer (Ott, 2012) to stack the spectra. For each galaxy, we have retrieved a single spectrum from stacking all pixels within 1 \(R_{e}\), using the effective radius value from the PyMorph VAC. We have centered the circular stacking region at the pixel with the highest photon count. ### Full spectral fitting Kinematics and stellar populations from the stacked galaxy spectra are obtained using the Penalized Pixel-Fitting method (pPXF, Cappellari & Emsellem, 2004), implemented in the pPXF Python package (Cappellari, 2012), and the GandALF routine (Sarzi et al., 2006). We have used the full MILES stellar population models (Vazdekis et al., 2015) to fit the spectra, in a wavelength range between 3800 Å and 5600 Å, with a nominal spectral resolution of FWHM = 2.5 Å (Falcon-Barroso et al., 2011). The stellar models considered stellar ages from 0.03 to 14 Gyr and metallicities between -2.27 and +0.40 dex. We have used the Base models, corresponding to BaSTI isochrones (Pietrinferni et al., 2014; Hidalgo et al., 2018). 
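The stacking step described in the Methodology above reduces each datacube to a single spectrum. A schematic, numpy-only sketch of that operation is shown below (our own illustration; real MaNGA cubes carry masks, error arrays and a WCS that are ignored here).

```python
import numpy as np

def stack_within_re(cube, re_pix):
    """cube: array of shape (n_wave, ny, nx); re_pix: effective radius in pixels."""
    white = np.nansum(cube, axis=0)                      # pseudo white-light image
    yc, xc = np.unravel_index(np.nanargmax(white), white.shape)
    yy, xx = np.indices(white.shape)
    inside = (yy - yc) ** 2 + (xx - xc) ** 2 <= re_pix ** 2
    return np.nansum(cube[:, inside], axis=1)            # one stacked spectrum

# toy cube: 100 wavelength channels on a 20x20 spaxel grid, bright central spaxel
rng = np.random.default_rng(0)
cube = rng.random((100, 20, 20))
cube[:, 10, 10] += 50.0
spectrum = stack_within_re(cube, re_pix=3.0)
print(spectrum.shape)   # (100,)
```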
Massive relic galaxies have been shown to have an overall steep IMF (Martin-Navarro et al., 2015; Ferre-Mateu et al., 2017; Martin-Navarro et al., 2023). Ferre-Mateu et al. (2013) characterized the impact of the IMF on the derived star formation histories (SFHs). They showed that a slight change in the IMF slope does not significantly change the results of the derived SFHs. Based on their conclusions, we worked with a Kroupa-like bimodal IMF with \(\Gamma=1.30\), so that we can compare our results with previous works. Figure 1: \(R_{e}\) vs. \(M_{*}\) for the full MaNGA DR17 dataset (10293 nearby galaxies). The effective radii are extracted from MaNGA’s PyMorph catalog (Dominguez Sanchez et al., 2022). Mass values are obtained by applying the mass-to-light ratios from Mendel et al. (2014) to the luminosities presented in PyMorph. We use the MaNGA Deep Learning Morphological Value Added catalog (Dominguez Sanchez et al., 2022) to classify galaxies according to their morphology. Crosses indicate spiral or \(S0\) galaxies, while circles represent elliptical galaxies. The dash-dotted black line represents the compactness criterion defined in Section 2, based on the conditions set in Barro et al. (2013); Charbonnier et al. (2017). The black dashed line marks the upper threshold of \(R_{e}=2\) kpc. The background colors indicate the compact galaxy families in terms of stellar mass. Yellowish colors represent the high-mass end of the cEs family, whereas the blue region represents the CMG region. In the inset zoom figure we overplot different mass-size relations for ETGs at various redshifts from van der Wel et al. (2014), along with the selected compact galaxies for this work. #### 2.2.1 Stellar kinematics The stellar kinematic measurements were obtained with the pPXF routine with an additive Legendre polynomial of degree 5 (used to correct the template continuum shape)2. From this first pPXF iteration, we derive the recessional velocity, \(v\), and the velocity dispersion, \(\sigma\). These two parameters are obtained by fitting the line-of-sight velocity distribution, \(\mathcal{L}\left(\mathcal{V}\right)\), as a Gauss-Hermite series (van der Marel & Franx, 1993; Gerhard, 1993): Footnote 2: We tested different values for the polynomial degree, following a similar methodology to D’Ago et al. (2023). We found that 5 is the one that minimizes the fitting errors \[\mathcal{L}\left(\mathcal{V}\right)=\frac{e^{-y^{2}/2}}{\sigma\sqrt{2\pi}} \left[1+\sum_{m=3}^{M}h_{m}H_{m}\left(y\right)\right], \tag{1}\] where \(y=\left(\mathcal{V}-v\right)/\sigma\) and \(H_{m}\left(y\right)\) are Hermite polynomials. As suggested in Cappellari & Emsellem (2004), a first fit restricted to the two lowest-order moments is conducted to recover \(v\) and \(\sigma\). A second fit with \(v\) and \(\sigma\) fixed is then applied to retrieve the higher-order kinematic Hermite coefficients. Figure 2 illustrates this procedure by showing the spectrum of a random galaxy in our sample fitted with the pPXF routine. Another relevant kinematic parameter is the specific angular momentum, \(\lambda_{R}\) (Emsellem et al., 2007). It provides information about the internal dynamics of the galaxy, and it is commonly used to classify galaxies as fast or slow rotators (e.g. Zoldan et al., 2018; Sweet et al., 2020; Romeo et al., 2023). This dichotomy has been found to be related to the galaxy morphology, with the most massive ETGs being more likely slow rotators (Falcon-Barroso et al., 2019). Here we calculate the \(\lambda_{R}\) values as in Fischer et al. 
(2019), using the IFU observations provided by the MaNGA survey. \(\lambda_{R}\) is calculated as a weighted mean over the values in each spaxel: \[\lambda_{R}=\frac{\sum_{i}^{N}R_{i}F_{i}\left|v_{i}\right|}{\sum_{i}^{N}R_{i}F _{i}\sqrt{v_{i}^{2}+\sigma_{i}^{2}}}, \tag{2}\] where \(R\), \(F\), \(v\) and \(\sigma\) denote the radial position, flux, rotational velocity and velocity dispersion at the \(i\)-th spaxel. The sum is done up to \(1\)\(R_{e}\) for spaxels with \(S/N>5\). The number of spaxels used in the \(\lambda_{R}\) estimation strongly depends on the projected angular size of the galaxy. When deriving \(\lambda_{R}\) for our 37 compact galaxies, 13 of them did not have \(S/N\) high enough to estimate this parameter. This corresponds to 32% of the total number of selected compact galaxies. For the other 24 galaxies for which we could calculate their \(\lambda_{R}\), the typical number of spaxels used was \(\sim 40\). In all cases, all the spaxels within \(1\)\(R_{e}\) fulfilled the requirements to be included in the \(\lambda_{R}\) calculation. Additionally, the \(\lambda_{R}\) values were corrected for seeing following Graham et al. (2018). The \(\lambda_{R}\) values are shown in Table 2. #### 2.2.2 Stellar populations, SFHs and characteristic timescales We run pPXF again, fixing the kinematics to the values obtained in the first iteration and using a multiplicative Legendre polynomial of degree 7 (to correct for low-frequency continuum variations). From this second run we obtain the mean stellar ages and total metallicities, but also the SFHs of each galaxy. As an illustration, we present in Figure 3 the SFH of one galaxy in our sample. In this case, the SFH is shown as the 'cumulative' stellar mass that the galaxy builds up over cosmic time. From this, several characteristic look-back times can be computed, such as the times when the galaxy formed 50% and 90% of its stellar mass (\(t_{50}\) and \(t_{90}\), respectively). We also define \(t_{0}\) as the look-back time at which the galaxy started forming stars, which does not need to correspond to the time of the Big Bang. These look-back times are shown in Figure 3 as red vertical dash-dotted lines. From these times, we define characteristic timescales that can provide information about how fast the star formation occurred. We define \(\Delta_{90}=t_{90}-t_{50}\) and \(\Delta_{50}=t_{50}-t_{0}\), also shown in Figure 3. For example, a high value of these parameters is representative of an extended SFH, whereas a low value would represent very early and fast formation timescales. Figure 3: Example of a derived SFH using pPXF, corresponding to the galaxy 11954-1902. The figure shows how the mass fraction of the galaxy increases with time. The steep increments are the result of not applying a regularization in the pPXF routine. Dashed gray lines show the 50% and 90% values of the total mass fraction. Dash-dotted vertical lines mark the position of the \(t_{0}\), \(t_{50}\) and \(t_{90}\) parameters, the times at which the total mass fraction surpasses 0%, 50% and 90%, respectively. Based on these parameters, we define \(\Delta_{90}=t_{90}-t_{50}\) and \(\Delta_{50}=t_{50}-t_{0}\) to characterize the SFH of a galaxy. Figure 2: Spectrum of the selected compact galaxy 11943-9102 (black) and its fitted spectrum using pPXF (green). Purple ranges correspond to masked regions in the spectrum and the gray line shows the residuals from the fit. The yellow line is the emission line result from the fitting. In this case, there is zero emission (horizontal line), which is further evidence of a passive galaxy. The fit has been derived using the full MILES stellar population models with nominal resolution FWHM = 2.5 Å with Base-Fe models. 
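As a minimal illustration of Equation (2) above (our own sketch, not the pipeline used in this work), \(\lambda_R\) can be evaluated from flattened per-spaxel maps, keeping only spaxels with \(R\leq 1\,R_e\) and \(S/N>5\) as described in the text; the seeing correction of Graham et al. (2018) is not included.

```python
import numpy as np

def lambda_r(radius, flux, vel, sigma, snr, re):
    """Equation (2): flux- and radius-weighted ratio over the selected spaxels."""
    sel = (radius <= re) & (snr > 5)
    r, f, v, s = radius[sel], flux[sel], vel[sel], sigma[sel]
    return np.sum(r * f * np.abs(v)) / np.sum(r * f * np.sqrt(v ** 2 + s ** 2))

# toy spaxel arrays (flattened maps)
rng = np.random.default_rng(1)
n = 200
radius = rng.uniform(0.0, 3.0, n)        # arcsec
flux = rng.uniform(1.0, 10.0, n)
vel = 80.0 * np.tanh(radius)             # toy rotating velocity field, km/s
sigma = np.full(n, 60.0)                 # km/s
snr = rng.uniform(3.0, 30.0, n)
print(lambda_r(radius, flux, vel, sigma, snr, re=1.5))
```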
Table 2 quotes the most relevant stellar population properties derived in this section. The stellar population parameters can be affected by the stellar population models employed (Dominguez Sanchez et al., 2019). In addition, the use of scaled-solar or \(\alpha\)-enhanced models can also impact the results, as shown in Spiniello et al. (2021a,b); D'Ago et al. (2023). The reader is referred to Appendix A, where we have investigated the possible impact from the use of different SSP models. We conclude that our results are robust against \(\alpha\)-enhancements and IMF slopes (except for very steep IMF values). It is out of the scope of this paper to study the effect of using different SSP libraries. For the determination of the \(\alpha\)-abundance we use the more classical line index technique. We compare the metallic absorption line strengths of Mg\({}_{b}\) and \(<\)Fe\(>\) (a combination of Fe5270 and Fe5335) (Gonzalez et al., 1993) for the scaled solar ([\(\alpha\)/Fe]=0.0 dex) and enhanced ([\(\alpha\)/Fe]=0.4 dex) SSP models. However, because this method is based on single features, it is also prone to have some of the lines affected by bad sky residuals or bad pixels in the spectra. To minimize this effect, rather than measuring this value for each galaxy individually, we will only measure the value for the three classes described in Section 3.1. ## 3 Analysis Combining the mean ages and the formation timescales helps us to understand the evolutionary paths of these compact galaxies. The left-side plot of Figure 4 presents the \(\Delta_{90}\) and \(\Delta_{50}\) values for each galaxy, color-coded by its mean age, as obtained in Section 2.1. As we have not applied any regularization in the pPXF analysis3, the SFHs are bursty, similar to the one presented in Figure 3. This makes it more likely for galaxies to have the same \(\Delta\) values. We have introduced a small Gaussian shift to the values in the figure for illustrative purposes. Footnote 3: Introducing a regularization in the pPXF analysis can produce slight differences in the derived values of the stellar populations. However, we have checked that the regularization does not significantly affect the results presented hereafter, as shown in Appendix B. The location of the galaxies in Figure 4 can provide information about the different formation channels they have undergone. For example, relic galaxies, which are expected to form very early and extremely fast, almost in a single star formation burst, are expected to be located in the lower left corner of Figure 4, i.e. with both small \(\Delta_{90}\) and \(\Delta_{50}\). On the contrary, younger galaxies with more extended SFHs will show larger \(\Delta_{90}\) and/or \(\Delta_{50}\) values. ### Galaxy clustering via \(k\)-means To gain further insight into the formation processes of the compact galaxies in our sample, we have grouped them according to their stellar populations and SFHs. For this, we have used a \(k\)-means algorithm to classify each of the 37 compact galaxies in three4 different clusters according to their observed properties. The properties considered by the algorithm are: \(\Delta_{90}\), \(\Delta_{50}\), Age, [M/H], \(\Sigma_{1.5}\), and \(M_{*}\). 
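As a schematic illustration of this clustering step (our own sketch), the six properties can be standardized and passed to a \(k\)-means implementation with \(k=3\); the feature values below are placeholders, not the measurements of Table 2, and the standardization is a common preprocessing choice rather than a detail quoted from the text.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows: galaxies; columns: [Delta90, Delta50, age (Gyr), [M/H], log Sigma_1.5, log M*]
X = np.array([
    [0.2, 0.3, 12.5,  0.25, 10.6, 10.4],
    [0.1, 0.2, 13.0,  0.30, 10.5, 10.2],
    [5.0, 4.0,  8.0,  0.00, 10.4, 10.1],
    [1.0, 8.0,  5.0,  0.10, 10.4, 10.0],
    [0.3, 0.4, 12.0,  0.20, 10.7, 10.5],
    [4.5, 3.5,  8.5, -0.05, 10.3, 10.0],
])

Xs = StandardScaler().fit_transform(X)                     # zero mean, unit variance
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xs)
print(km.labels_)            # group label assigned to each galaxy
print(km.cluster_centers_)   # centroids in standardized units
```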
In particular, we wish to focus on the SFH parametrization, therefore we have not introduced any kinematic or size measurements in the clustering algorithm. Footnote 4: According to the elbow method, the optimal number of clusters is 5. However, we have decided to use \(k=3\) given the small number of galaxies to be considered. \(k=3\) maximizes the differences between groups while keeping a sufficient number of galaxies in each cluster for a reasonable statistical analysis. We must emphasize that this galaxy allocation in groups only allows us to describe the variety of SFHs in our sample. Having only 37 galaxies in our subset prevents us from relating these groups to physical galaxy families with distinct physical properties. Instead, it only allows us to make statements about their different stellar population properties, which are the relevant properties for this work. The mean values of the centroids in each parameter space and the number of galaxies in each group can be found in Table 3. The algorithm gave more weight to \(\Delta_{90}\) and \(\Delta_{50}\) in the classification, where the division between groups is more evident. Other stellar population parameters, like metallicity, were used to allocate galaxies with intermediate \(\Delta_{90}\) and \(\Delta_{50}\) values. Table 2 shows the group each galaxy has been allocated into according to this clustering algorithm, which we will refer to as A, B and C. We also show the clustering results in the right plot of Figure 4. Similar to the left panel, galaxies are color-coded according to the group they belong to. We show in Figure 5 the stacked spectra of each group of galaxies, along with some relevant spectroscopic lines. It is clear that Groups B and C show similar features, while Group A is quite different. These behaviors are also seen from the centroid positions in Table 3. Figure 6 shows the mean SFH of each group from the \(k\)-means classification and their \(1\sigma\) errors. As in Figure 3, dashed lines show the 50% and 90% levels of the total mass. This figure confirms that the different classes show significant differences in the way they build their stellar mass. These clear differences reinforce the robustness of the \(k\)-means classification. According to the behaviors seen in the previous figures and the mean values of the groups (Table 3), the general properties of the three groups of compact galaxies studied in this work are: \(\bullet\) Group A: The majority of our compact galaxies, 76% of the sample, belong to this group. They are old galaxies (\(>12\) Gyr), metal-rich (\(\sim 0.3\) dex) and with extremely steep SFHs (\(\Delta_{90},\ \Delta_{50}\sim 0\)), further supported by their high \([\alpha/\mathrm{Fe}]\) values (\([\alpha/\mathrm{Fe}]=0.3\) dex). They formed in a single burst-like star formation event. Relic galaxies, if any, would belong to this group. However, this group also includes galaxies with slightly younger ages, of \(\sim 10\) Gyr, but still with very steep SFHs. These could be 'late bloomers', i.e. red nuggets that started their formation at later times. \(\bullet\) Group B: This group includes \(\sim 13\)% of our compact galaxies. They are intermediate-age galaxies (\(\sim 8\) Gyr). They have a wide range of metallicities around the solar-like value (\(\sim 0.0\pm 0.2\) dex), which are consistent with their low \([\alpha/\mathrm{Fe}]\) values (\([\alpha/\mathrm{Fe}]=0.1\) dex). Their SFHs are extended over time, forming stars until recently. These would be the best candidates for the true low-mass end of ETGs. 
\(\bullet\) Group C: It is the least populated group, with 11% of the compact galaxy sample. This group hosts the youngest galaxies (\(\sim 5\) Gyr), which show two main star-forming episodes: one at early times (\(t\sim 14\) Gyr), which formed roughly 40% of their stellar mass, and a later one around \(t\sim 4\) Gyr ago lasting until recent times. This is indicated by their high \(\Delta_{50}\) values but low \(\Delta_{90}\) ones. These galaxies have slightly super-solar metallicities (\(\sim 0.1\) dex) and \([\alpha/\mathrm{Fe}]=0.1\) dex. In this case, these could be galaxies that experienced a recent enhancement of their star formation, maybe due to interaction events. ## 4 Discussion We next compare the properties obtained for the 37 compact galaxies, grouped according to the classification presented in Section 3, to other compact galaxies in the literature. We want to investigate the gap region and its compact galaxies in order to unveil possible relations between compact galaxies at different masses. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **Plate-IFU** & \(\mathbf{v}\) & \(\mathbf{\sigma}\) & \(\mathbf{\lambda_{R}}\) & **Age** & **[M/H]** & \(\mathbf{\Delta_{50}}\) & \(\mathbf{\Delta_{90}}\) & **Cluster** \\ & **(km s\({}^{-1}\))** & **(km s\({}^{-1}\))** & & **(Gyr)** & **(dex)** & **(Gyr)** & **(Gyr)** & **group** \\ \hline [MISSING_PAGE_POST] \end{tabular} \end{table} We base the following discussion on the parameters derived from the stacked spectra within \(1R_{e}\). Using the values within \(1R_{e}\) also allows us to compare our galaxies with other known cEs and CMGs. However, there may be some galaxies for which the parameters derived within \(1R_{e}\) are not fully representative of their behavior. We expect to exploit the spatially-resolved information from MaNGA DR17 IFU observations in future works. ### Insights from the stellar populations In this work we have used the mass-size relation to select the compact galaxies in the MaNGA DR17 sample. However, the mass-metallicity relation (MZR) is one of the tightest relations, whereby the more massive galaxies tend to be more metal rich (e.g. Tremonti et al., 2004; Gallazzi et al., 2006; Panter et al., 2008; Saviane et al., 2014; Kirby et al., 2020; Henry et al., 2021; Langerodti et al., 2022). Figure 4: Characteristic formation timescales of the compact galaxies in our sample. _Left:_ Each galaxy is colored according to its derived mean age measured within \(1~{}R_{e}\). The values have been randomly shifted to avoid overlapping points. This figure summarizes when and how fast a galaxy has formed its entire stellar mass. The different locations within this plot will thus characterize different formation pathways. _Right:_ Same as in left panel, but with galaxies colored according to the results of the \(k\)-means clustering algorithm (see Section 3.1). We label each group as described in the text. Figure 5: Stacked spectrum of each galaxy group. Relevant spectroscopic lines are also shown, from left to right: H\(\alpha\), H\(\beta\), H\(\delta\), H\(\gamma\), Mg\({}_{b}\), Fe4303, Fe5159, Fe5270, Fe5335, NaD1 and NaD2. Figure 7 shows the relation between the stellar mass, stellar metallicity and effective radius, along with their projections. In addition to the 37 compact galaxies analyzed in this work, we also show the location of a sample of cEs from Janz et al. (2016); Ferre-Mateu et al. (2018, 2021a), and CMGs from Ferre-Mateu et al. (2012, 2015, 2017); Trujillo et al. (2014); Yildirim et al. 
(2017); Spiniello et al. (2021b). The top-right projection in Figure 7 shows the stellar mass-size relation. This projection was already studied in Section 2 (Figure 1). We can now confirm that our compact galaxy sample effectively fills the gap between cEs and CMGs, as suggested by Ferre-Mateu et al. (2021a). The bottom-left projection shows the metallicity-size relation, where no clear relation is seen (nor was one previously known). The bottom-right panel of Figure 7 presents the mass-metallicity relation of compact galaxies, together with the Gallazzi et al. (2021) scaling relation for ETGs at \(z\sim 0\). This is a crucial projection to better understand the nature of the galaxies. Overall, galaxies that were larger and more massive, but became compact due to stripping events, are expected to have higher metallicities than the average scaling relation. On the contrary, those formed in-situ are expected to follow the local scaling MZR. In this projection, cEs present the largest deviations from the MZR, with the majority of them lying above the MZR (Ferre-Mateu et al., 2021a). This is representative of the fact that the majority of cEs are known to be the result of stripping a dwarf or low-mass galaxy. However, there is a small fraction of cEs that are closer to the scaling relation (or within the scatter), which have been proposed to have an intrinsic origin (Ferre-Mateu et al., 2018, 2021a; Kim et al., 2020). CMGs in general follow the MZR of massive galaxies, with a scatter consistent with the intrinsic one of the relation. Most of the relic galaxies are indeed outliers of the relation. They show higher metallicities than the non-relic CMGs and normally-sized ETGs, probably due to the fact that they missed the second evolutionary phase, which decreases the galaxy total metallicity (Ferre-Mateu et al., 2017; Spiniello et al., 2021b). As described in Gallazzi et al. (2021), their MZR is calculated based on line indices and a library of parametrized SFHs, which is a different methodology from the one in the present work. We have overplotted in the stellar mass-metallicity plot in Figure 7 the density distribution of \(\sim 60\) ETGs with \(10.3<\log M_{*}/M_{\odot}<10.7\) and \(3.5<R_{\rm e}[{\rm kpc}]<4.5\) in the MaNGA DR17 sample. These ranges of mass and size are considered typical of ETGs. The metallicity values for these galaxies were estimated using the same methodology as described in Section 2.1. The contour plot reveals that the Gallazzi et al. (2021) relation describes well the behavior of non-compact ETGs analyzed with our pPXF-based methods. Hence, it can be used to analyze our compact galaxy sample as well. We find that the compact galaxies in this work show a variety of properties in this projection. Group A galaxies are clear outliers of the MZR, being much more metal-rich than the non-compact ETGs. Given their steep SFHs and old stellar populations, some of these galaxies could be good candidates for intermediate-mass relic galaxies. Group C galaxies appear in the limiting region between being outliers and lying within the intrinsic scatter of the MZR. Finally, the MZR is best followed by Group B galaxies. This agrees with their continuous and extended SFH, as the newborn stars should follow the current MZR, suggesting an intrinsic origin. Interestingly, there is one extreme outlier from Group B in the MZR. This galaxy is also the one with the highest mass in our sample, 11020-1902, which followed the mass-size relation of \(z\sim 2\) galaxies in Figure 1. 
In Figure 4 we see that this galaxy is the only one in Group B that has \(\Delta_{50}\sim 0\) but very large \(\Delta_{90}\), with intermediate stellar ages (\(\sim 8\) Gyr). The initial hypothesis was that this could be the best candidate for a relic galaxy, but the recovered SFH does not fully support this. Another possibility to explain the origin of this galaxy is that it is a 'late-bloomer' (Ferre-Mateu et al., 2012, 2021a). These are galaxies that followed the formation pathway of massive galaxies, but that started forming stars much later in cosmic time. 'Late-bloomers' will thus have intermediate ages (\(\sim\)8-10 Gyr) but very short \(\Delta_{50}\). If their \(\Delta_{90}\) is also small, then these could be replicas of the massive relic galaxies. In fact, we see that only one galaxy in Group A shows small \(\Delta_{50},~{}\Delta_{90}\) and intermediate ages, 8323-1901. However, if the 'late-bloomer' suffers wet interaction processes, these would trigger star-forming events, increasing the value of \(\Delta_{90}\), as we see for 11020-1902. All these speculations on the actual nature of individual galaxies will be revisited in future works exploiting MaNGA IFU data. Another interesting parameter that can help to gain insight into the formation channels of galaxies is the \([\alpha/{\rm Fe}]\) ratio. This value is deeply related to stellar formation processes. A high \([\alpha/{\rm Fe}]\) ratio is representative of a quick star-forming episode, almost single-burst like, while low \([\alpha/{\rm Fe}]\) values are related to more extended SFHs (Matteucci and Recchi, 2001; Thomas et al., 2005; de La Rosa et al., 2011; McDermid et al., 2015). We present in Figure 8 the \([\alpha/{\rm Fe}]\) distributions of our compact galaxies, compared with those from the cEs and CMGs in the literature. Group A galaxies show the highest \([\alpha/{\rm Fe}]\) values. Their high \([\alpha/{\rm Fe}]\) values are consistent with their early and steep SFHs (see Section 3). Such high \([\alpha/{\rm Fe}]\) values are particularly similar to those found in confirmed relics, being slightly higher than those of general CMGs in some cases. On the other hand, Group B and Group C show lower \([\alpha/{\rm Fe}]\) values, compatible with their more extended SFHs. cEs show the largest dispersion of \([\alpha/{\rm Fe}]\) values, indicative of the mixed origin they have. Figure 6: Averaged cumulative SFH for each class of galaxies. The shaded region corresponds to the \(1\sigma\) error. Dotted lines show the 50% and 90% levels of the total mass fraction. Distinctive SFHs are seen for each class. 
Moreover, an important result from the INSPIRE DR1 analysis (Spiniello et al., 2021), and confirmed in INSPIRE DR2 (D'Ago et al., 2023), is that extreme relics and non-relics behave differently in a \(\sigma-M_{\star}\) plot. At a given mass, massive relic galaxies seem to have overall higher stellar velocity dispersion than their non-relic counterparts. And relic galaxies with more extreme SFHs also show higher \(\sigma\) values than less extreme ones. Figure 9 shows the \(\sigma-M_{\star}\) relation for our selected compact galaxies and the cEs and CMGs from the literature, as in previous figures. We find that compact galaxies, regardless of the stellar mass, seem to deviate of the \(\sigma\)-stellar mass relation, in particular at the low-mass end. Only a handful of cEs seem to fit with the Zahid et al. (2016) trend. The vast majority of considered cEs show higher velocity dispersion than predicted. However, there is a significant fraction of CMGs that appear to follow the \(\sigma-M_{\star}\) relation, although many are still outliers. As found by Spiniello et al. (2021), extreme relics in the high-mass end are generally CMGs with the highest deviations. Regarding the compact galaxies selected in this work, Group A galaxies present the highest velocity dispersions in our sample, being clear outliers of the local scaling relation. They have velocity dispersion Figure 7: Stellar mass-metallicity-radius fundamental plane. Yellow dots show the values of low-mass cEs from Janz et al. (2016); Ferré-Mateu et al. (2018, 2021). Blue dots show the CMG from Ferré-Mateu et al. (2012, 2015, 2017); Trujillo et al. (2014); Yildimim et al. (2017); Spiniello et al. (2021), where darker blue dots show the those CMG that have been confirmed as relic galaxies. Our compact galaxies are separated according to their classification using the \(k\)-means algorithm (described in Section 3.1). The different projections of the fundamental plane are also shown. The dash-dotted line in the mass-size plot shows the compactness limit adopted in this work, as in Figure 1. The mass-metallicity relation at \(z\sim 0\) from Gallazzi et al. (2021) is shown as a dashed line in the corresponding plot, where the solid line shows the median value of the fitting and the dashed lines the 16% and 84% percentiles. The gray contour in this plot shows the position in the plane of non-compact MaNGA ETGs. Our compact galaxies successfully fill the mass gap between cEs and CMGs. Each group shows characteristic metallicities which may be the result of their origins. sions similar to higher mass CMGs, and in particular to the confirmed relic galaxies. On the other hand, both Group B and Group C galaxies follow the trend from Zahid et al. (2016). Given their more extended SFHs, this is a further confirmation that these compact galaxies could indeed be the low-mass end of ETGs. Aiming to investigate the consequences of such high \(\sigma\) values for Group A galaxies, we located them in the fundamental plane from Bernardi et al. (2020). In the fundamental plane, the enclosed surface brightness within \(1~{}R_{e}\) is related with the stellar velocity dispersion enclosed in the same surface, \(\sigma_{c}\), and \(R_{e}\). This relation is a direct result from the virial theorem, in which the stellar velocity dispersion is related with the mass and the size of the galaxy as \(\sigma^{2}\sim M_{\star}/R_{e}\)(Courteau and van den Bergh, 1999; Hartl and Strigari, 2022). 
One expects the behaviour of a virialized system to be well described by the fundamental plane. We show in Figure 10 the position of our selected compact galaxies in the MaNGA ETGs fundamental plane from Bernardi et al. (2020). Only a handful of our compact galaxies appear to be described by this fundamental plane. In particular, Group A galaxies seem to follow an overall different relation, clearly outside the fundamental plane scatter. This would suggest that these galaxies have undergone different formation channels than regular ETGs, such that they would not follow the virial theorem. Due to the small number of galaxies in Group B and in Group C, we are not able to state whether these groups fortuitously include particular outliers of the plane or whether these groups are also outliers of the fundamental plane. In any case, understanding what makes these galaxies outliers of that relation would require further investigation, maybe with larger samples of compact galaxies. Finally, another relevant kinematic parameter is the specific angular momentum, \(\lambda_{R}\) (introduced in Equation 2). There appears to be a dichotomy in the kinematics of ETGs: fast rotators (FR) and slow rotators (SR). A fast-rotating galaxy shows a uniform rotational pattern in the innermost regions of its kinematic map (e.g. Emsellem et al., 2011; Weijmans et al., 2014; Foster et al., 2017; Blek et al., 2022). On the other hand, the kinematic maps of slow-rotating galaxies can show either no rotation or complex features (e.g. Emsellem et al., 2011; Weijmans et al., 2014; Foster et al., 2017). The specific angular momentum is generally used to distinguish between FR and SR (Emsellem et al., 2004). Figure 11 shows the relation between \(\lambda_{R}\) and the ellipticity of the galaxy (\(\varepsilon\)) for our 37 compact galaxies. The background is colored to illustrate the density of LTGs (blue) and ETGs (red) of the whole MaNGA DR17 sample, according to the morphological classification described in Section 2. The \(\lambda_{R}\) values were calculated as described in Section 2.2.1 and \(\varepsilon\) was extracted from the PyMorph VAC. The Emsellem et al. (2004) line sets the limit between FR and SR galaxies. As in previous plots, we also show the sample of cEs and CMGs for which \(\lambda_{R}\) measurements are available. The \(\lambda_{R}\) parameter was originally conceived to analyse well-resolved galaxy maps. However, our selected compact galaxies have a mean effective radius of \(\sim 1.75\) arcsec. This value is comparable to the 1.5 arcsec MaNGA point-spread function. Therefore, the analysis concerning \(\lambda_{R}\) remains qualitative due to the lack of proper spatial resolution. Both CMGs and cEs are, overall, fast rotators, as predicted in simulations from Naab et al. (2014). Ferre-Mateu et al. (2021) observationally checked that cEs tend to be FR. We find that the compact galaxies in our sample are also fast-rotating galaxies5. They all show, however, a wide range of \(\lambda_{R}\) and \(\varepsilon\). Those with smaller \(\lambda_{R}\) and smaller \(\varepsilon\) show distributions similar to the cEs from the literature, while the kinematics of those with larger \(\lambda_{R}\) and \(\varepsilon\) may resemble those of confirmed relics within the scatter. Footnote 5: The FR/SR distinction is performed using \(\lambda_{R}\), which is measured within 1 \(R_{e}\). In any case, rotation would only increase at larger radii. This would not affect the FR classification. Figure 8: \([\alpha/\mathrm{Fe}]\) distributions of compact galaxies. Each group histogram is colored as in previous figures: the yellow and blue histograms show the distributions of a sample of cEs (Ferre-Mateu et al., 2018, 2021) and CMGs (Spiniello et al., 2021; Trujillo et al., 2014; Yildirim et al., 2017; Ferre-Mateu et al., 2012), respectively. The \([\alpha/\mathrm{Fe}]\) distribution of confirmed relics is shown with solid lines. Each bar is normalized by the maximum number of counts of that sample. For the three groups studied in this work, the mean values of \([\alpha/\mathrm{Fe}]\) for each group are shown as dashed lines, with the solid line on the left showing their typical uncertainties. Figure 9: \(\sigma\)-stellar mass plane for compact galaxies. Our 37 selected compact galaxies are colored in green according to the group they belong to. Yellow dots show the position in the plane of the low-mass compact elliptical galaxies from Ferre-Mateu et al. (2017); Kim et al. (2020); Ferre-Mateu et al. (2021). Blue dots show the compact massive galaxies from Ferre-Mateu et al. (2012); Trujillo et al. (2014); Ferre-Mateu et al. (2015, 2017); Yildirim et al. (2017); Spiniello et al. (2021). Darker blue dots represent the position in the plane of currently confirmed relics. The solid black line shows the \(\sigma\)-stellar mass relation from Zahid et al. (2016). ## 5 Summary Among the diverse family of ETGs, compact galaxies are interesting objects, typically outliers of the local mass-size relation. The compact realm spans over five orders of magnitude in stellar mass. Previous studies have noted that some of their properties, like the stellar populations and kinematics, appear to be related. From this, it is thought that compact galaxies may follow their own scaling relations. However, there was a gap in the mass range between cEs and CMGs, preventing a firmer conclusion from being reached. In this work, we have analyzed the full MaNGA DR17 sample to find and characterize compact galaxies in this mass region. We have combined the standard mass and size cut criteria with a modified surface mass density threshold to characterize the compactness of the galaxies. Our final sample consists of 37 compact galaxies out of the 10293 galaxies from the MaNGA DR17 dataset. These galaxies seem to follow the mass-size relation at \(z\sim 1.5\), despite being local galaxies. Using stacked spectra up to \(1R_{e}\), we have measured the recessional velocity and the stellar velocity dispersion of each galaxy, along with their stellar population properties, such as age, total metallicity and star formation histories. We find that all but one of our selected compact galaxies are fast-rotating galaxies. We define two parameters, \(\Delta_{50}\) and \(\Delta_{90}\), to characterize the formation timescales of a galaxy. We have then applied a \(k\)-means algorithm to classify the selected compact galaxies into three different groups, as some galaxies showed clear SFH similarities. The main caveat in this step is that the classification is constrained by the small number of galaxies in the sample. We have compared our sample to other compact galaxies such as cEs and CMGs, including confirmed relic galaxies. By comparing their main stellar population and kinematic properties, we can suggest different formation pathways for each class. Overall, we find that the main properties shown by each group are: * Group A: old galaxies with early and steep SFHs. 
They were born 14 Gyr ago and have formed all their stars in less than 4 Gyr. They show high mean metallicities and \([\alpha/{\rm Fe}]\) ratios (both with \(\sim 0.3\) dex). At a given mass, they show larger velocity dispersions than normal ETGs (\(\sigma~{}\sim~{}212\,{\rm km~{}s^{-1}}\)). Most of them are clear outliers of the current stellar mass-metallicity relation. Therefore, we expect that some of them could be intermediate-mass relics, analogues to those at the high mass end. Moreover, some of the galaxies in this sample could be the so-called 'late-bloomers' (i.e. younger relic analogues). * Group B: intermediate-age galaxies (\(\sim 8\) Gyr) with continuous SFH over time. Their overall metallicities are lower than those of Group A ([M/H] \(\sim 0\) dex). Given the extended SFHs and the fact that they mostly follow the scaling relations, these galaxies are consistent with the low-mass end of ETGs. Their properties make it unlikely that these galaxies have suffered any interaction with other galaxies, and they were probably assembled in-situ. * Group C: young galaxies with a mean age of \(\sim 5\) Gyr. Their SFHs reveal an early initial star formation burst, which was then halted in time and resumed \(\sim 4\) Gyr ago. A possible explanation is that these galaxies have experienced some recent interaction that drove a cold gas flow into the galaxy center. This could have triggered the late star-forming burst. They show intermediate metallicities ([M/H] \(\sim~{}0.1\) dex), and the lowest mean \([\alpha/\)Fe] ratio ([\(\alpha/\)Fe] \(\sim 0\) dex) among our sample. Figure 10: MaNGA ETGs fundamental plane from Bernardi et al. (2020). The plane relates the galaxy effective radius, \(R_{\rm e}\), with the stellar velocity dispersion \(\sigma\) and the enclosed surface brightness \(I_{e}\). Bernardi et al. (2020) also introduced the semi-axis ratio, \(b/a\), as a variable to fit, as well as the free parameter \(z_{\rm p}\). Formally, kinematics in the fundamental plane are represented by the enclosed stellar velocity dispersion, \(\sigma_{\rm e}\). Due to the small size of our compact galaxies, we have considered \(\sigma_{\rm e}\equiv\sigma\). The plane span is represented by the dashed gray lines (Table 1 from Bernardi et al. (2020), corresponding to the MaNGA ETGs luminosity fundamental plane), which are separated by a distance equal to the reported root mean square scatter (rms\({}_{\rm obs}\) = 0.077). Our selected compact galaxies are overplotted on the plane. Figure 11: \(\lambda_{R}-\varepsilon\) for the 37 selected compact galaxies. \(\lambda_{R}\) values are corrected for seeing following the methodology in Graham et al. (2018). Literature cEs and CMGs are also shown as in previous figures (when available). Background colors show the density of ETGs (red) or LTGs (blue), according to the values from Fischer et al. (2019). The solid black line represents the Emsellem et al. (2004) relation used to classify fast-rotating and slow-rotating galaxies. Galaxies below this line are considered SR and above it lie the FR. We find that the majority of compact galaxies, regardless of their mass, are FR. 
At the high mass end the number of CMG outliers in the \(\sigma-M_{\star}\) and \(M_{\star}-\)[M/H] relations is lower. It is thus expected that the majority of these have an in-situ origin. The sample of compact galaxies analyzed in this study completely fills the gap between these two families. In fact, they appear to have intermediate properties, which further supports the idea that compact galaxies at different masses are all related. Even though some of our compact galaxies appear to have properties similar to relic galaxies, we need one more step to reach firm conclusions. We still need to check whether these galaxies have undergone any actual changes during their lifetimes. To this aim, we expect to take advantage of the spatially-resolved IFU data from MaNGA in future works. With this technology, we would be able to study stellar population gradients, which can reveal more details on the assembly mechanism of these galaxies. We expect to eventually reveal whether a galaxy in the local Universe is still in its pristine stage as a red nugget. ## Acknowledgements The authors thank M. Bernardi for his help with the FP and the \(\lambda_{R}\) estimates. The authors also thank the referee for their constructive and insightful comments that improved the quality of the paper. Part of this work was performed during a JAE fellowship JAEC10-21-ICE-3. AFM acknowledges support from CEX2019-000920-S and from RXC2021-031099-I and PID2021-123313NA-100 of MICIN/AEI/10.13039/501100011033/ FEDER,UE,NextGenerationEU/PRT. HDS acknowledges support from the PID2020-115098RJ-I00 grant from MCIN/AEI/10.13039/501100011033 and from the Spanish Ministry of Science and Innovation and the European Union - NextGenerationEU through the Recovery and Resilience Facility project ICTS-MRR-201-03-CEFCA. ## Data availability The MaNGA DR17 data presented are available via the Sloan Digital Sky Survey (SDSS) Science Access Service (SAS): [https://data.sdss.org/sas/dr17/manga/](https://data.sdss.org/sas/dr17/manga/). The different Value Added Catalogs available are listed in [https://www.sdss4.org/dr17/data_access/value-added-catalogs/](https://www.sdss4.org/dr17/data_access/value-added-catalogs/).
2309.05830
Photodetachment dynamics using nonlocal discrete-state-in-continuum model
In this preprint I propose that the nonlocal discrete-state-in-continuum model, previously used successfully to describe inelastic electron-molecule collisions, can also be used to model electron photodetachment from molecular anions. The basic theory is sketched and the approach is tested on a model of electron photodetachment from a diatomic molecular anion.
Martin Čížek
2023-09-11T21:16:55Z
http://arxiv.org/abs/2309.05830v1
# Photodetachment dynamics using nonlocal dicrete-state-in-continuum model ###### Abstract In this preprint I propose that the non local discrete-state-in-continuum model previously successfully used to describe the inelastic electron molecule collisions can also be used to model the electron photo-detachment from the molecular anions. The basic theory is sketched and the approach is tested on the model of electron photodetachment from diatomic molecular anion. Resonances, molecular anions, photoelectron spectroscopy ## I Introduction The nonlocal discrete-state-in-continuum model is very successful approach in the description of the low-energy inelastic electron collisions [1; 2] leading to vibrational excitation (VE) \[e^{-}+M(v_{i})\to M^{-}\to e^{-}+M(v_{f}) \tag{1}\] and the dissociative attachment (DA) \[e^{-}+M(v_{i})\to M^{-}\to A^{-}+B. \tag{2}\] The key ingredient of the theory is that both processes proceed through formation of a metastable molecular anion state \(M^{-}\) out of equilibrium, that undergoes the vibronic dynamics and decays either back into electron-molecule scattering continuum \(e^{-}+M(v_{f})\) or dissociates into fragment \(A^{-}+B\). The convenient method to study the dynamics of such process is through electron energy loss spectroscopy (EELS) [3]. This technique is based on the energy conservation \[E=\epsilon_{i}+E_{v_{i}}=\epsilon_{f}+E_{v_{f}}, \tag{3}\] where \(\epsilon\) are electron energies and \(E_{v}\) energies of vibrational states of the molecule for the initial \(|\nu_{i}\rangle\) and the final \(|\nu_{f}\rangle\) vibrational states, before and after the collision. Furthermore the cross section of the processes is enhanced if the total energy \(E\) attains value close to energies of the metastable vibronic states of the temporary anion \(M^{-}\). The most complete experimental picture is provided by scanning through both initial \(\epsilon_{i}\) and final \(\epsilon_{f}\) electron energies (or equivalently energy loss \(\Delta\epsilon=\epsilon_{i}-\epsilon_{f}\)) creating thus 2D EELS picture. The dependence on the scattering angle for electron can also be monitored. Such spectra are still not well understood [4; 5; 6; 7]. The nonlocal discrete state in continuum theory has recently been successfully used to calculate the 2D EELS for CO\({}_{2}\) molecule [8; 9; 10]. In this paper we propose to use the same kind of theory to calculate the electron spectrum for the photodetachment of an electron \(e^{-}\) from a molecular anion \(M^{-}\) \[\gamma+M^{-}\rightarrow(M^{-})^{*}\to e^{-}+M(v_{f}), \tag{4}\] with initial energy \(E\) of the system determined by the energy of the photon \(\gamma\) shone on the anion to excite it to the state \((M^{-})^{*}\). The vibronic dynamics of this molecular metastable anion state is driven by the same principles as in the case of electron-molecule collisions. The energies of the released electron can be monitored as function of the photon energy giving thus the 2D spectrum similar to the 2D EELS [6; 11]. The modeling of the 2D photodetachment spectrum can thus proceed along the same lines as for 2D EELS and we can use the iterative methods recently developed to threat the dynamics for polyatomic molecules [12; 13; 14] also for the electron photodetachment. In this paper we develop the theory of the resonance inelastic photodetachment process and propose to treat the resulting equations numerically with the codes developed for the electron-molecule collisions. 
The theory also includes the photodissociation process in analogy with the dissociative attachment (2). We also propose a simple model inspired by LiH\({}^{-}\)[15; 16] to test the numerical methods and to discuss the resulting phenomena. ## II Theory In this section we explain the basic ideas used to theoretically treat the inelastic resonance photodetachment based on the projection formalism for description of the dynamics of the discrete electron state in electron scattering continuum. The photon absoption is treated in the dipole approximation, but we do not see obstacles to include also higher order corrections considering the photon absorption. Once the photon is absorbed and the metastable negative ion is formed, the dynamics is treated in complete analogy with electron-molecule collisions where the anion is formed by electron attachment (see Fig. 1 for the schematic scatch of the process). The following paragraph thus describes the dynamics in similar way as in electron-molecule collisions (we refer to [1; 2] for reviews on the approach). ### Photodetachment in dipole approximation The initial state \(|\Psi_{0}\rangle\rangle\) for the photodetachment process is the assumed to be the ground state of the molecular anion \(M^{-}\), which we consider in Born-Oppenheimer approximation. We used the double-ket notation to stress that the wavefunction is product of vibrational and electronic part \(|\Psi_{0}\rangle\rangle=|\chi_{0}\rangle|\phi_{0}\rangle\), where \(|\phi_{0}\rangle\) is the ground electronic state of the anion and the vibrational wavefunction \(\chi_{0}(R)\) solves the usual Schrodinger equation \[\left[T_{N}+V_{0}(R)\right]\chi_{0}(R)=\varepsilon_{0}\chi_{0}(R), \tag{5}\] with the potential energy surface \(V_{0}(R)\) depending on the nuclear geometry \(R\), \(T_{N}\) is the vibrational kinetic energy operator and \(\varepsilon_{0}\) is energy of the initial vibrational state \(|\chi_{0}\rangle\). The main goal of this paper is to evaluate the photodetachment amplitude \[A=\langle\langle\Psi^{(-)}|D|\Psi_{0}\rangle\rangle=\langle\langle\Psi^{(-)}| D|\chi_{0}\rangle|\phi_{0}\rangle, \tag{6}\] where \(D\) is the electrical dipole operator and \(|\Psi^{(-)}\rangle\rangle\) is the scattering wavefunction subjected to outgoing boundary condition fixing the final state vibrational state of the neutral molecule \(|\nu_{f}\rangle\) and the outgoing electron state \(|\epsilon_{f}\rangle\), with the energy subjected to conservation law \[E=\varepsilon_{\gamma}+\varepsilon_{0}=\epsilon_{f}+E_{v_{f}}, \tag{7}\] with photon energy \(\varepsilon_{\gamma}\) and vibrational energy of the final state of the molecule \(E_{v_{f}}\). We apply the discrete-state-in-continuum model and the projection-operator formalism to calculate the scattering wavefunction \(|\Psi^{(-)}\rangle\rangle\). The starting point is the assumption of the existence of the diabatic basis in the Hilbert space of electrons with the fixed nuclei of the molecule, consisting of 1) at least two discrete states: already described ground state of the anion \(|\phi_{0}\rangle\) and the excited metastable anion state \(|\phi_{1}\rangle\) and 2) electron scattering continuum states. More discrete states can in principle be included but here we will limit the discussion to one bound and one discrete metastable state for simplicity. We can define the projector to the discrete state part of the electronic Hilbert space \[\mathcal{Q}=\sum_{d}|\phi_{d}\rangle\langle\phi_{d}|. 
\tag{8}\] and the complementary operator \[\mathcal{P}=I-\mathcal{Q} \tag{9}\] projecting on the background electron continuum \(e^{-}+M\). The basis in this subspace can be chosen as solutions the background scattering problem \[\mathcal{P}\mathcal{H}_{el}\mathcal{P}|\Phi_{0},\epsilon\mu\rangle=(W+ \epsilon)|\Phi_{0},\epsilon\mu\rangle, \tag{10}\] \(W(R)\) is the potential energy surface of the neutral molecule, i. e. the energy of the ground electronic state \(|\Phi_{0}\rangle\) of the neutral molecule and energy \(\epsilon\) and quantum number \(\mu\) identify the state of outgoing electron. Here we consider only the case that the neutral molecule has only one energetically accessible electronic state and we will suppress the symbol \(\Phi_{0}\) in the notation. We also assume only one dominant partial wave and suppress also the symbol \(\mu\). We thus ended with the basis states \(|\epsilon\rangle\) which can be used to expand the projector on the continuum part of the Hilbert stat for each \(R\) \[\mathcal{P}=\int|\epsilon\rangle\langle\epsilon|d\epsilon. \tag{11}\] Now the fixed nuclei electronic hamiltonian \(\mathcal{H}_{el}\) can be expandent in the basis \[\langle\phi_{d}|\mathcal{H}_{el}|\phi_{d^{\prime}}\rangle = V_{d}(R)\delta_{dd^{\prime}}, \tag{12}\] \[\langle\phi_{d}|\mathcal{H}_{el}|\epsilon\rangle = V_{de}(R),\] (13) \[\langle\epsilon|\mathcal{H}_{el}|\epsilon^{\prime}\rangle = \left[W_{0}(R)+\epsilon\right]\delta(\epsilon-\epsilon^{\prime}). \tag{14}\] Note that we neglected the coupling \(\langle\phi_{0}|\mathcal{H}_{el}|\phi_{1}\rangle\simeq 0\) of the two discrete states, which assumes well isolated bound state \(|\phi_{0}\rangle\) with noncrossing potentials \(V_{0}(R)\) and \(V_{1}(R)\). This assumption can be released but we will avoid it in this work. We will also neglect the coupling of the bound state to the continuum by setting \(V_{de}(R)=0\) for \(d=0\), but we include the nonzero amplitude \(V_{1e}(R)\) which is responsible for the electron autodetachment from the state \(|\phi_{1}\rangle\). The wavefunction \(|\Psi^{(-)}\rangle\rangle\) can also be expanded in this basis \[|\Psi^{(-)}\rangle\rangle=|\psi_{d}\rangle|\phi_{d}\rangle+\int|\psi_{c} \rangle|\epsilon\rangle d\epsilon. \tag{15}\] Figure 1: Schematic picture of the photodetachment process. All potentials, energies and states related to anion are drawn in green. The neutral molecule and the continuum-electron related quantities are in red and and the absorbed photon energy in blue. Note that due to the decoupling of the ground state \(d=0\) we can consider only \(d=1\) in this expansion. This wavefunction is subjected to the same outgoing boundary condition like in the case of electron-molecule scattering and the \(R\)-dependent expansion coefficients \(\psi_{d}(R)\equiv|\psi_{d}\rangle\) and \(\psi_{\epsilon}(R)\equiv|\psi_{\epsilon}\rangle\) can be found in the same wave like in that case [1; 2]. In the case of the vibrational excitation process the relevant T-matrix element reads (1) \[T_{VE}=\langle\langle\Psi^{(-)}|V|\nu_{i}\rangle|\epsilon_{i}\rangle=\langle \psi_{d}|V_{d\epsilon_{i}}|\nu_{i}\rangle, \tag{16}\] where \(V={\cal P}{\cal H}_{el}{\cal Q}+{\cal Q}{\cal H}_{el}{\cal P}\). Notice that the last expression uses only the \(\psi_{d}\) component of the expansion (15). Now we will remind how this component is evaluated and we would also like to process the expression (6) for the photodetachment amplitude in analogy with the expression (16) for the vibrational excitation process. 
The components of the wavefunction (15) can be found by solving the Schrodinger equation with the hamiltonian \(H=T_{N}+{\cal H}_{el}\) with the appropriate boundary condition \[|\psi_{d}\rangle = 0+[E-H_{d}]^{-1}\int V_{d\epsilon}|\psi_{\epsilon}\rangle d\epsilon, \tag{17}\] \[|\psi_{\epsilon}\rangle = |\nu_{f}\rangle\delta(\epsilon-\epsilon_{f})+[E-h_{0}-\epsilon]^ {-1}V_{d\epsilon}|\psi_{d}\rangle. \tag{18}\] By substituting the second of the equations into the first and slight rearangement we get \[\left[E-H_{d}-F^{\dagger}(E-h_{0})\right]|\psi_{d}\rangle=V_{d\epsilon_{f}}| \nu_{f}\rangle, \tag{19}\] where \[H_{d} = T_{N}+V_{d}(R), \tag{20}\] \[h_{0} = T_{N}+W(R),\] (21) \[F(\varepsilon) = \int_{0}^{\infty}V_{d\epsilon}(R)[\varepsilon-\epsilon+i\eta]^{- 1}V_{d\epsilon}(R^{\prime})d\epsilon. \tag{22}\] Note that the final vibrational states of the neutral molecule after photodetachment are solution of the Schrodinger equation \[h_{0}|\nu_{f}\rangle=E_{\nu_{f}}|\nu_{f}\rangle. \tag{23}\] The equation (19) is used in theory of VE process to solve the dynamics numerically. The formal solution can be written as \[|\psi_{d}\rangle=\left[E-H_{d}-F^{\dagger}(E-h_{0})\right]^{-1}V_{d\epsilon_{f }}|\nu_{f}\rangle. \tag{24}\] This leads to the well known simple expression for the vibrational excitation \[T_{VE}=\langle\nu_{f}|V_{d\epsilon_{f}}\left[E-H_{d}-F(E-h_{0})\right]^{-1}V_{ d\epsilon_{i}}|\nu_{i}\rangle. \tag{25}\] For the photodetachment amplitude we also need the continuum component \[|\psi_{\epsilon}\rangle = |\nu_{f}\rangle\delta(\epsilon-\epsilon_{f})+\] \[+ \left[E-h_{0}-\epsilon\right]^{-1}V_{d\epsilon}\left[E-H_{d}-F^{ \dagger}(E-h_{0})\right]^{-1}V_{d\epsilon_{f}}|\nu_{f}\rangle.\] Before we substitute the solution (15) with the components (24) and (26) into photodetachment amplitude (6) we define the fixed-\(R\) transition dipole moments to discrete state \(|\phi_{1}\rangle\) and to background continuum \(|\epsilon\rangle\) \[\mu_{d}(R) = \langle\phi_{1}|D|\phi_{0}\rangle, \tag{27}\] \[\mu_{\epsilon}(R) = \langle\epsilon|D|\phi_{0}\rangle \tag{28}\] so that \[A=\langle\psi_{d}|\mu_{d}|\chi_{0}\rangle+\int\langle\psi_{\epsilon}|\mu_{ \epsilon}|\chi_{0}\rangle d\epsilon \tag{29}\] or after substituting for the wavefunction components \[A = \langle\nu_{f}|V_{d\epsilon_{f}}\left[E-H_{d}-F\right]^{-1}\mu_{d }|\chi_{0}\rangle \tag{30}\] \[+\langle\nu_{f}|\mu_{\epsilon_{f}}|\chi_{0}\rangle\] \[+\langle\nu_{f}|V_{d\epsilon_{f}}\left[E-H_{d}-F\right]^{-1}M(E-h _{0})|\chi_{0}\rangle,\] where in analogy with \(F(\varepsilon)\) we have defined \[M(\varepsilon)=\int_{0}^{\infty}V_{d\epsilon}(R)[\varepsilon-\epsilon+i\eta]^{ -1}\mu_{\epsilon}(R^{\prime})d\epsilon. \tag{31}\] This quantity can be interpreted as the transition amplitude through dipole transition to the continuum state \(|\epsilon\rangle\) from which the electron is captured to the metastable anion state \(|\phi_{1}\rangle\). The three terms in the photodetachment amplitude (30) have the following interpretation. The most simple is the second term \(\langle\nu_{f}|\mu_{\epsilon_{f}}|\chi_{0}\rangle\) which gives the amplitude for the direct dipole photodetachment from the ground state to the background continuum \[\gamma+M^{-}\to M(\nu_{f})+e^{-}. 
\tag{32}\] The first term \(\langle\nu_{f}|V_{d\epsilon_{f}}\left[E-H_{d}-F\right]^{-1}\mu_{d}|\chi_{0}\rangle\) is little bit more complicated as it describes the dipole transition to the discrete state \(|\phi_{1}\rangle\) followed by an autodetachment to the neutral molecule and electron \[\gamma+M^{-}\rightarrow\left(M^{-}\right)^{*}\to M(\nu_{f})+e^{-}. \tag{33}\] The last term describes a three step process of dipole transition to intermediate continuum state from which electron is captured to \(|\phi_{1}\rangle\) in second step followed by the third step of an autodetachment \[\gamma+M^{-}\to e^{-}+M(\nu)\rightarrow\left(M^{-}\right)^{*}\to M(\nu_{f})+e^{-}. \tag{34}\] It is difficult to estimate the relative importance of different mechanisms. We will discuss them on a simple model in the next section. Before that we notice that from the calculation point of view it is possible to merge the last to process into one expression and write the amplitude as the sum of direct and resonance processes \[A=\langle\nu_{f}|\mu_{\epsilon_{f}}|\chi_{0}\rangle+\langle\nu_{f}|V_{d\epsilon_ {f}}|\tilde{\psi}\rangle, \tag{35}\] where the auxilary wavefunction \(|\tilde{\psi}\rangle\) is obtained by solution of the equation \[\left[E-T_{N}-V_{1}(R)-F\right]|\tilde{\psi}\rangle=\left[\mu_{d}+M(E)\right]| \chi_{0}\rangle. \tag{36}\] ### Photodissociation of the anion If the energy of the photon is small enough the process of the dissociative electron detachment \[\gamma+M^{-}\to e^{-}+A+B, \tag{37}\] where \(A\) and \(B\) are fragments of the neutral molecule \(M\equiv AB\) is forbidden. The potentials sketched in Fig. 1 allow for the resonance dissociation of the anion \[\gamma+M^{-}\rightarrow\left(M^{-}\right)^{*}\to A^{-}+B. \tag{38}\] This process is contained in the amplitude of the wavefunction \(\tilde{\psi}(R)\) for \(R\rightarrow\infty\) or it can be alternatively formulated using the solution with the outgoing boudary condition in the potential \(V_{1}(R)\). ## III Test model calculation In this part we would like to test the proposed approach on a simple model of electron detachment from diatomic anion. The goal of this calculation is not quantitative description of the photodetachment cross sections for any specific molecule, but the parameters of the model are on the qualitative level inspired by lithium hydride anion [15; 16]. The model of the photodetachment as described above is determined by knowledge of the potential of the neutral molecule \(W(R)\), the potential curve of the ground anion state \(V_{0}(R)\), the excited anion state \(V_{1}(R)\) (the discrete state), the discrete-state-continuum coupling function \(V_{de}(R)\) and the dipole moment transition elements \(\mu_{d}(R)\) and \(\mu_{e}(R)\). The discrete-state-continuum coupling function \(V_{de}(R)\) depends on the continuum channel index (for example angular momentum), but in this simple model we assume that there is one dominant channel and neglect this dependence. Similarly the dipole transition moment is vector quantity, but we assume the fixed polarization with respect to molecular axis and treat is as a single number. These details would be important in comparison with specific experimental data, but they are not important in this qualitative discussion. ### Qualitative model for diatomic molecule The potential energy curves \(W(R)\) and \(V_{0}(R)\) for the ground states of the neutral molecule and the anion can be calculated by well established methods of quantum chemistry. 
For the LiH molecule a high-level calculation is available [17]. We also calculated the potential energy curves (see Fig. 1 in [15]). Here, for the model calculation we simply use Morse-type potentials, qualitatively similar to the LiH case. The exact form is as follows (numerical values are for the energies in eV and distances in Bohr) \[W(R) = de^{-2a(R-R_{0})}-2de^{-a(R-R_{0})}+b, \tag{39}\] \[V_{0}(R) = d_{0}e^{-2a_{0}(R-R_{g})}-2d_{0}e^{-a_{0}(R-R_{g})}+b_{0}, \tag{40}\] \[V_{1}(R) = d_{1}e^{-a_{1}(R-R_{0})}, \tag{41}\] where \(R_{0}=3.0\), \(a=0.6\), \(b=0.57\), \(d=2.5\), \(R_{g}=3.1\), \(a_{0}=0.45\), \(b_{0}=-0.13\), \(d_{0}=2.1\), \(a_{1}=0.6\) and \(d_{1}=1.9\). Figure 2: Potential energy curves in our model. The neutral molecule is in red and the anion states in green. The vibrational levels are also shown (every fifth with a solid line). The discrete state potential \(V_{1}(R)\) and its coupling to the continuum \(V_{d\epsilon}(R)\) are often extracted by fitting the fixed-nuclei scattering phaseshifts. This is based on the solution of the fixed-R scattering problem with the electronic hamiltonian parametrized as \[\mathcal{H}_{el} = |\phi_{d}\rangle V_{d}\langle\phi_{d}|+\int|\epsilon\rangle\left\{ W+\epsilon\right\}\langle\epsilon|d\epsilon \tag{42}\] \[+\int\left\{|\phi_{d}\rangle V_{d\epsilon}\langle\epsilon|+|\epsilon \rangle V_{d\epsilon}\langle\phi_{d}|\right\}d\epsilon.\] The solution of the scattering problem \(|\Psi(E)\rangle\) at fixed molecular geometry \(R\) can be expanded in the basis \(|\phi_{d}\rangle\), \(|\epsilon\rangle\) in a similar way as in (15) \[|\Psi(E)\rangle=\psi_{d}|\phi_{d}\rangle+\int\psi_{\epsilon}|\epsilon\rangle d\epsilon. \tag{43}\] Now \(\psi_{d}\) and \(\psi_{\epsilon}\) are numbers (dependent on \(R\)), not wavefunctions in vibrational space. It is easy to solve the scattering problem with the hamiltonian (42) and to find the components \[\psi_{d} = \left[E-V_{d}-F(E-W)\right]^{-1}V_{d\epsilon_{i}}, \tag{44}\] \[\psi_{\epsilon} = \delta(\epsilon-\epsilon_{i})+(E-\epsilon-W)^{-1}V_{d\epsilon}\psi_{d}. \tag{45}\] The scattering phaseshift for the fixed-nuclei problem then reads [1] \[\delta=-\arctan\left(\frac{\Gamma(\epsilon,R)/2}{\epsilon-V_{d}(R)-W(R)- \Delta(\epsilon,R)}\right), \tag{46}\] with \(\Delta\) and \(\Gamma\) derived from the real and imaginary part of the fixed-nuclei version \(F[E-W(R)]\) \[F(\varepsilon)=\Delta-\tfrac{i}{2}\Gamma=\int V_{d\epsilon}(R)[\varepsilon- \epsilon+i\eta]^{-1}V_{d\epsilon}(R)d\epsilon. \tag{47}\] of the nonlocal level-shift operator \(F(E-h_{0})\) in (22). Notice that the operator-valued argument \(E-h_{0}\) changed into the electron energy \(\epsilon=E-W(R)\) relative to the scattering threshold. Fitting the formula (46) to ab initio scattering eigenphases is usually used to obtain \(V_{d}(R)\) and \(V_{d\epsilon}\). Here we just choose the model functions by hand so that the resulting phaseshift (46) is in visual qualitative accordance with the data for the LiH molecule [15], as demonstrated in Fig. 3. The model function \(V_{d}(R)=V_{1}(R)\) of Eq. (41) was chosen to obtain the data in the figure, and the separable form of the discrete-state-continuum coupling \[V_{d\epsilon}(R) = g(R)f(\epsilon), \tag{48}\] \[g(R) = \left[1+e^{0.75(R-6)}\right]^{-1}, \tag{49}\] \[\gamma(\epsilon) \equiv 2\pi f(\epsilon)^{2}=A_{\gamma}[\epsilon/B_{\gamma}]^{\alpha}e^ {-\epsilon/B_{\gamma}}, \tag{50}\] with constants \(A_{\gamma}=1\) eV, \(B_{\gamma}=2\) eV and \(\alpha=0.2\), was used. This form is inspired by earlier studies of electron-molecule collisions [1; 2]. 
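To make the model input easy to inspect, the following Python sketch evaluates the potential curves (39)-(41) and the coupling functions (48)-(50) with the parameters listed above, and prints the local estimate \(V_{1}(R)-W(R)\) of the detachment-resonance position at the anion equilibrium geometry. It is only a convenience check of the model functions, not part of the dynamical calculation; the identification \(\Gamma(\epsilon,R)=g(R)^{2}\gamma(\epsilon)\) follows from the definitions above.

```python
import numpy as np

# Parameters of Eqs. (39)-(41), energies in eV, distances in Bohr.
R0, a, b, d = 3.0, 0.6, 0.57, 2.5
Rg, a0, b0, d0 = 3.1, 0.45, -0.13, 2.1
a1, d1 = 0.6, 1.9

def W(R):   # neutral molecule (Morse)
    return d * np.exp(-2 * a * (R - R0)) - 2 * d * np.exp(-a * (R - R0)) + b

def V0(R):  # anion ground state (Morse)
    return d0 * np.exp(-2 * a0 * (R - Rg)) - 2 * d0 * np.exp(-a0 * (R - Rg)) + b0

def V1(R):  # discrete (metastable) anion state, Eq. (41)
    return d1 * np.exp(-a1 * (R - R0))

# Separable discrete-state--continuum coupling, Eqs. (48)-(50).
A_gam, B_gam, alpha = 1.0, 2.0, 0.2

def g(R):
    return 1.0 / (1.0 + np.exp(0.75 * (R - 6.0)))

def gamma(eps):           # gamma(eps) = 2*pi*f(eps)**2
    eps = np.asarray(eps, dtype=float)
    return A_gam * (eps / B_gam) ** alpha * np.exp(-eps / B_gam)

def width(eps, R):        # local resonance width Gamma(eps, R) = g(R)^2 * gamma(eps)
    return g(R) ** 2 * gamma(eps)

print(f"W minimum        : {W(R0):+.2f} eV")
print(f"V0 minimum       : {V0(Rg):+.2f} eV")
print(f"V1 - W at R = Rg : {V1(Rg) - W(Rg):+.2f} eV  (rough local resonance position)")
print(f"Gamma(2 eV, Rg)  : {width(2.0, Rg):.3f} eV")
```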
It is convenient that the integral transform (47) can be calculated analytically for this form. The last ingredient of the model is the knowledge of the transition dipole moment \(\mu_{d}\) to the discrete state and transition dipole moment function \(\mu_{\epsilon}\) for each internuclear separation \(R\). To do so we were again guided by the fixed-nuclei scattering calculation [15; 16] for LiH. The calculated moment function \[|\mu|^{2}=|\langle\chi_{0}|D|\Psi(E)\rangle\rangle|^{2}\] is shown in Fig. (4) in the left panel1, but we have to keep in mind that the function \(|\Psi(E)\rangle\) has both discrete-state and continuum components (44), (45). Substituting these in (43) we get relation to \(\mu_{d}\) and \(\mu_{\epsilon}\) Footnote 1: Note that the dipole operator is vector quantity. We show only component along the molecular axis. We also include only the lowest partial wave in continuum to obtain simple picture for this qualitative model. \[\mu=\mu_{\epsilon}+\frac{\mu_{d}V_{d\epsilon}}{\epsilon-V_{d}-F(\epsilon)}+ \frac{M(\epsilon)V_{d\epsilon}}{\epsilon-V_{d}-F(\epsilon)}. \tag{51}\] Notice that this expression is fixed-nuclei version of (6) and the terms thus have similar interpretation: _direct_ transition dipole to background continuum \(\mu_{\epsilon}\) and the next two terms are the _resonant_ contribution due to transition to the discrete state and subsequent autodetachment and term due to _attachment_ to the discrete state from the background continuum. We find that this function within our model (see Fig. 4 right panel) is in reasonable qualitative agreement with the calculation for LiH molecule. The three individual contributions are also shown in the figure but the full result is not direct sum of the individual contributions, because the complex phase has to be taken into account. The model functions producing the figure are \[\mu_{d} = 0.1+0.1i, \tag{52}\] \[2\pi\mu_{\epsilon}^{2} = A_{\mu}[\epsilon/B_{\mu}]^{\alpha}e^{-\epsilon/B_{\mu}}, \tag{53}\] with \(A_{\mu}=150\)a.u. and \(B_{\mu}=0.8\)eV. This form of the functions allows for calculation of the integral transform in the definition (31) of function \(M(\epsilon)\) analytically in the same way as for function \(F(\epsilon)\). ### Notes on numerical treatment To calculate the full photodetachment amplitude (30) we need to be able to apply the operator \[|\Psi\rangle=[E-H_{d}-F]^{-1}\,|\Phi\rangle.\] Figure 3: Fixed-nuclei scattering phaseshifts for few internuclear separations \(R\) (labeled in the graphs). Figure in the left shows the results of R-matrix scattering calculations for LiH+e and the right panel is the result from our model. Figure 4: Transition dipole matrix element from the ground state anion to continuum for fixed-nuclei electronic problem. The internuclear separations \(R\) are marked in the graphs. The left panel show the results of R-matrix calculation for LiH molecule, the right panel is calculated from our model, with contributions of three different terms marked separately. This is equivalent to solving the equation \[\left[E-H_{d}-F\right]^{-1}|\Psi\rangle=|\Phi\rangle.\] There are well developed techniques to perform this task in the treatment of the inelastic electron-molecule collision [1; 2] and we applied our numerical codes to perform this task. 
The operator \(F(E-h_{0})\) needed there is evaluated by expansion into neutral molecule vibrational basis \(h_{0}|\nu\rangle=E_{\nu}|\nu\rangle\) which converts it to evaluation of the fixed-nuclei quantity \(F(\epsilon)\) calculated by the integral transform \[F(E-h_{0})=\sum_{\nu}|\nu\rangle F(E-E_{\nu})\langle\nu|. \tag{54}\] The same method can be used to calculate also the operator \(M(E-h_{0})\). ### Discussion of the results The calculated results are shown in Fig. (5) and (6). In the first of the two figures the full amplitude \(|A|^{2}\) for the final ground vibrational state of the neutral molecule formed after the detachment is plotted together with the three individual contributions. We see that resonance and attachment contributions have similar shape and are important only close to the resonance energy 4eV. We also show the fixed nuclei amplitude \(|\mu|^{2}\) for three internuclear separations \(R\). Observe that the full amplitude is given by smearing of the fixed nuclei amplitude over the initial vibrational wavefunction \(\chi_{0}(R)\). In Fig. (6) we show the amplitude for two energies \(E=3\)eV and \(E=4\)eV and for first 20 vibrational states of the final neutral molecule. For the energy 3eV, which is below the resonance, the amplitude decreases very fast with the vibrational quantum number. On the other hand the resonance energy \(E=4\)eV allows for creation of highly vibrationally excited energies which much larger probability. We also see that the background contribution continues to decay fast and the resonance and attachment contributions dominate for high vibrational quantum numbers. ## IV Conclusions We derived theory for calculation of the electron photodetachment from molecular anions in resonance condition using the discrete-state-in-continuum model in very similar way like in description of inelastic electron-molecule collisions. The techniques developed for the numerical treatment of the electron-molecule collisions can therefore be directly applied also for resonance photodetachment. We also expect that similar phenomena as the ones studied here (threshold peaks, Wigner cusps, boomerang oscillations) can have their analogs in photodetachment physics. ###### Acknowledgements. I thank members of our group Karel Houfek, Jakub Benda, Premysl Kolorenc for discussion on the subject Figure 5: Dependence of the photodetachment amplitude on the photon energy (full green line). The results of full calculation is shown with contribution of different mechanisms also marked separately. Black lines show fixed nuclei amplitudes for three different values of \(R\) for comparison. Figure 6: Amplitude for photodetachment to different final vibrational states of the neutral molecule. Two different photon energies are included \(E=3eV\) (off-resonance, left) and \(E=4eV\) (in resonance, right). Different symbols show full calculation (full circles) and direct photodetachment to background continuum (empty circles). Vibrational excitation cross sections of the neutral molecule by electron scattering are shown for comparison (squares). during our seminars and especially Zdenek Masin for encouraging me to finish this work. I also acknowledge the work of my student Jiri Trnka on the fitting the LiH model.
2309.03299
Pointer states and quantum Darwinism with 2-body interactions
Quantum Darwinism explains the emergence of classical objectivity within a quantum universe. However, to date most research in quantum Darwinism has focused on specific models and their stationary properties. To further our understanding of the quantum-to-classical transition it appears desirable to identify the general criteria a Hamiltonian has to fulfill to support classical reality. To this end, we categorize all models with 2-body interactions, and we show that only those with separable interaction of system and environment can support a pointer basis. We further show that "perfect" quantum Darwinism can only emerge if there are no intra-environmental interactions. Our analysis is complemented by the solution of the ensuing dynamics. We find that in systems that exhibit information scrambling, the dynamical emergence of classical objectivity is in direct competition with the non-local spread of quantum correlations. Our rigorous findings are illustrated with the numerical analysis of four representative models.
Paul Duruisseau, Akram Touil, Sebastian Deffner
2023-09-06T18:22:49Z
http://arxiv.org/abs/2309.03299v1
# Pointer states and quantum Darwinism with 2-body interactions ###### Abstract Quantum Darwinism explains the emergence of classical objectivity within a quantum universe. However, to date most research in quantum Darwinism has focused on specific models and their stationary properties. To further our understanding of the quantum-to-classical transition it appears desirable to identify the general criteria a Hamiltonian has to fulfill to support classical reality. To this end, we categorize all models with 2-body interactions, and we show that only those with separable interaction of system and environment can support a pointer basis. We further show that "perfect" quantum Darwinism can only emerge if there are no intra-environmental interactions. Our analysis is complemented by the solution of the ensuing dynamics. We find that in systems that exhibit information scrambling, the dynamical emergence of classical objectivity is in direct competition with the non-local spread of quantum correlations. Our rigorous findings are illustrated with the numerical analysis of four representative models. ## I Introduction We live in a quantum universe, yet our everyday reality is well-described by classical physics. Hence, the obvious question to ask is where all the quantum information and correlations hide. The quantum nature of our universe is captured by its ability to be in a superposition of classically allowed states. The transition from quantum to classical is a two-step process. The first, necessary but not sufficient, step is the destruction of quantum superpositions, i.e., destruction of all interference phenomena. The theory of decoherence teaches us that it is the interaction between the quantum system and its environment that is the cause of this phenomenon [1]. The destruction of quantum superpositions presupposes a privileged and unique quantum basis. The elements of this basis are called pointer states [2; 3; 4]. Any quantum superposition written in this basis decomposes into a classical mixture under the effect of environmental interaction. Thus, pointer states are precisely the only quantum states that are stable under this interaction. Quantum Darwinism [5; 6; 7; 8; 9; 10; 11; 12; 13] builds on decoherence theory and goes a step further, approaching the problem from the point of view of quantum information theory. An outside observer has no direct access to a system of interest \(\mathfrak{S}\), but rather the environment \(\mathfrak{E}\) acts as a communication channel. Since any real environment is tremendously large, "observing" \(\mathfrak{S}\) actually means that an observer intercepts only a small, possibly even tiny fragment \(\mathfrak{F}\) of \(\mathfrak{E}\), and then reconstructs the state of \(\mathfrak{S}\) from the information carried by \(\mathfrak{F}\). If the constituents of the environment do not interact, such as, for instance, photons [14; 15], then the information about \(\mathfrak{S}\) is accessible by _local_ measurements on \(\mathfrak{E}\). However, reality is a little more complicated, and in general, the constituents of \(\mathfrak{E}\) do interact. Such intra-environmental interactions lead to the build-up of non-local correlations, which is the root-cause of information scrambling [16; 17; 18; 19; 20; 21]. Thus, an observer has to access a macroscopic fraction of \(\mathcal{E}\) to reconstruct unambiguous information about \(\mathfrak{S}\). 
Despite the significant attention scrambling dynamics has received in the literature [16; 17; 18; 19; 20; 21; 22; 23], curiously little is known about the quantum-to-classical transition in the presence of scrambling. Only rather recently, several studies have started to unveil the interplay of decoherence and scrambling [23; 24; 25; 26; 27; 28; 29; 30; 31]. More directly relevant to our present work is Ref. [32], which analyzed a specific model where intra-environmental interactions scramble the information encoded in different fragments \(\mathfrak{F}\). This scenario, scrambling only in \(\mathfrak{E}\), but not in \(\mathfrak{S}\), makes it easier to highlight the competition between the local transfer of information from \(\mathfrak{S}\) to each degree of freedom of \(\mathfrak{E}\), and the scrambling of information between the different \(\mathfrak{F}\) of \(\mathfrak{E}\) due to their interactions. Current research in quantum Darwinism [13] is driven by the analysis of increasingly complex model systems. However, the focus has remained on particular qubit-models [32; 33; 34; 35; 36; 37], since their dynamics is tractable. Despite, or rather because of continued progress in our understanding it appears desirable to elucidate the general properties of Hamiltonians that support the emergence of quantum Darwinism. More precisely, it is instrumental to sort all possible interacting many-body Hamiltonians into classes that support a pointer basis for \(\mathfrak{S}\), and which sub-classes of these will further exhibit the emergence of classical objectivity. Such a classification will also unveil if and under what circumstances, quantum Darwinism can emerge in the presence of scrambling dynamics. In the present work, we consider a qubit of interest \(\mathfrak{S}\) that interacts with an environment \(\mathfrak{E}\) also comprised of qubits. Hence, scrambling of information may only occur in \(\mathfrak{E}\), but not in \(\mathfrak{S}\). Further, for the sake of simplicity we restrict ourselves to arbitrary two-body interactions. In a first part of our analysis, we show that the existence of a pointer basis for \(\mathfrak{S}\) imposes a specific structure for the total Hamiltonian describing the evolution of the universe \(\mathfrak{S}\otimes\mathfrak{E}\). In fact, we will see that a pointer basis for \(\mathfrak{S}\) exists for any interactions within \(\mathfrak{E}\), yet \(\mathfrak{S}\) may only interact with all fragments of \(\mathfrak{E}\) identically, cf. Fig. 1. The second part of the analysis is then focused on the dynamics induced by such Hamiltonians that support a pointer basis. We find that the efficiency of the information transfer between \(\mathfrak{S}\) and \(\mathfrak{E}\) is governed by the statistics of the interaction terms. The average information transfer is irreversible if and only if the support of the coupling coefficients is continuous, and the "speed of communication" is determined by the shape of the distribution of the interaction coefficients. Our general findings are illustrated with four models that correspond to a variety of situations, including scrambling or no scrambling, pointer basis or no pointer basis, quantum Darwinism or no quantum Darwinism. ## II Structure of the Hamiltonian We start by defining the problem in mathematically rigorous terms. Consider a set of \((N+1)\) qubits, where the \(0\)th qubit is the system \(\mathfrak{S}\). Hence, the environment \(\mathfrak{E}\) is comprised of \(N\) qubits. 
For the sake of simplicity, we further restrict ourselves to \(2\)-body interaction models. The most general Hamiltonian corresponding to this scenario then reads \[H=\sum_{i,j}\sum_{\alpha,\beta}J^{\alpha\beta}_{ij}\sigma^{\alpha}_{i}\otimes \sigma^{\beta}_{j}+\sum_{i}\vec{B}_{i}\cdot\vec{\sigma}_{i}, \tag{1}\] where \(\alpha\) and \(\beta\) take the values \(x\), \(y\) and \(z\), corresponding to the Pauli matrices \(\sigma^{x}\), \(\sigma^{y}\) and \(\sigma^{z}\). Indices \(i\) and \(j\) count the qubits, with \(i,j=0\) for \(\mathfrak{S}\) and \(i,j\geq 1\) for \(\mathfrak{E}\). Further, \(J_{ij}\) and \(\vec{B}_{i}\) are real coefficients, which in the following we will choose to be random variables. ### Existence of a pointer basis The natural question now is what conditions \(J_{ij}\) and \(\vec{B}_{i}\) have to fulfill such that \(H\) in Eq. (1) supports a pointer basis for \(\mathfrak{S}\). Pointer states are _the_ particular states of \(\mathfrak{S}\) that are stable under the interaction with \(\mathfrak{E}\)[2; 3; 4]. Formally, these states can be identified in the following way: \(|\psi_{\mathfrak{S}}\rangle\in\mathfrak{S}\) is a pointer state of \(\mathfrak{S}\) if and only if for any \(|\psi_{\mathfrak{E}}\rangle\in\mathfrak{E}\), an initial product state \(|\psi_{\mathfrak{S}}\rangle\otimes|\psi_{\mathfrak{E}}\rangle\) evolves under \(H\) (1) to remain within an epsilon ball around the product state \(|\psi_{\mathfrak{S}}\rangle\otimes|\psi_{\mathfrak{E}}(t)\rangle\). In other words, the reduced state \(|\psi_{\mathfrak{S}}\rangle\) remains pure under the evolution of the total Hamiltonian. It will prove convenient to separate the total Hamiltonian into terms corresponding to \(\mathfrak{S}\), \(\mathfrak{E}\), and their interaction. Hence, we write \[H=H_{\mathfrak{S}}\otimes\mathbb{I}_{\mathfrak{E}}+\mathbb{I}_{\mathfrak{S}} \otimes H_{\mathfrak{E}}+H_{\mathfrak{S}\mathfrak{E}}\,. \tag{2}\] Comparing with Eq. (1) we identify the system Hamiltonian as \[H_{\mathfrak{S}}=\vec{B_{0}}\cdot\vec{\sigma_{0}}\,, \tag{3}\] whereas we have for the environment \[H_{\mathfrak{E}}=\sum_{1\leq i<j\leq N}\sum_{\alpha,\beta}J^{\alpha\beta}_{ij }\sigma^{\alpha}_{i}\otimes\sigma^{\beta}_{j}+\sum_{i=1}^{N}\vec{B}_{i}\cdot \vec{\sigma}_{i}\,. \tag{4}\] Notice that the first term in Eq. (4) describes the intra-environmental interactions. The interaction between \(\mathfrak{S}\) and \(\mathfrak{E}\) is given by \[H_{\mathfrak{S}\mathfrak{E}}=\sum_{\alpha}\sigma^{\alpha}_{0}\otimes\sum_{j=1 }^{N}\sum_{\beta}J^{\alpha\beta}_{0j}\sigma^{\beta}_{j}\,. \tag{5}\] Figure 1: Two-body interactions between \(\mathfrak{S}\) (red) and \(\mathfrak{E}\) (blue). Lines depict interaction terms. From this separation of terms it becomes immediately obvious that a pointer basis for \(\mathfrak{S}\) can only exist if certain necessary and sufficient conditions on the interaction term \(H_{\mathfrak{S}\mathfrak{E}}\) are fulfilled. These conditions become particularly intuitive by considering the original motivation for _pointer_ states. These states are not only immune to the dynamics induced by the interaction with the environment, but can also be thought of as states that correspond to the _pointer_ of a measurement apparatus. Mathematically, such an apparatus is described by the pointer observable \[A\equiv A_{\mathfrak{S}}\otimes\mathbb{I}_{\mathfrak{E}}\,. \tag{6}\] By construction, the pointer observable \(A\) commutes with the total Hamiltonian (1), and hence \(A\) and \(H\) share an eigenbasis. 
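A quick numerical check of this commutation requirement is sketched below: for a Hamiltonian with the separable structure discussed in the following (the system operator coupled to single-qubit environment operators via \(H_{\mathfrak{S}}\) itself, plus arbitrary intra-environmental terms), the commutator with \(A=H_{\mathfrak{S}}\otimes\mathbb{I}_{\mathfrak{E}}\) vanishes, while it does not for a generic two-body Hamiltonian of the form (1). The three environmental qubits and the random couplings are arbitrary choices made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, site, n_qubits):
    """Place a single-qubit operator `op` on qubit `site` of an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for q in range(n_qubits):
        out = np.kron(out, op if q == site else I2)
    return out

def rand_traceless_hermitian():
    c = rng.normal(size=3)
    return c[0] * sx + c[1] * sy + c[2] * sz

N = 3                      # environment qubits; qubit 0 is the system S
n_tot = N + 1
H_S = rand_traceless_hermitian()                  # system Hamiltonian B_0 . sigma_0
A = embed(H_S, 0, n_tot)                          # pointer observable A = H_S x 1_E

# Separable S-E coupling H_S (x) sum_i h_i, plus arbitrary intra-environment terms.
H_sep = embed(H_S, 0, n_tot)
for i in range(1, n_tot):
    H_sep += embed(H_S, 0, n_tot) @ embed(rand_traceless_hermitian(), i, n_tot)
for i in range(1, n_tot):
    for j in range(i + 1, n_tot):
        H_sep += rng.normal() * embed(sx, i, n_tot) @ embed(sz, j, n_tot)

# Generic 2-body coupling that is NOT proportional to H_S on the system qubit.
H_generic = H_sep + 0.7 * embed(sx, 0, n_tot) @ embed(sz, 1, n_tot)

def comm_norm(X, Y):
    return np.linalg.norm(X @ Y - Y @ X)

print("||[A, H_separable]|| =", round(comm_norm(A, H_sep), 12))
print("||[A, H_generic  ]|| =", round(comm_norm(A, H_generic), 6))
```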
Due to form of \(A\) the corresponding eigenstates can be written in tensor-product form \(\ket{S_{i}}\otimes\ket{E_{j}}\) with \(\ket{S_{i}}\in\mathfrak{S}\) and \(\ket{E_{j}}\in\mathfrak{E}\). Correspondingly, we can factorize the time-evolution operator as \[U=\sum_{i}\ket{S_{i}}\!\!\bra{S_{i}}\otimes\exp\left(-i/\hbar\,H_{i}\,t\right) \tag{7}\] where the \(H_{i}\) act only on \(\mathfrak{E}\). Now, choosing any (reduced) eigenstate of \(A\) as initial state of \(\mathfrak{S}\), \(\ket{\psi_{\mathfrak{S}}}=\ket{S_{i}}\), the product state \(\ket{\psi_{\mathfrak{S}}}\otimes\ket{\psi_{\mathfrak{E}}}\) evolves by remaining in product form \(\ket{\psi_{\mathfrak{S}}}\otimes\ket{\psi_{\mathfrak{E}}(t)}\). In Appendix A we show that by enforcing the commutation relation \([A,\,H]=0\) we have that any Hamiltonian (4) supporting a pointer basis for \(\mathfrak{S}\) has to be of the form \[H=H_{\mathfrak{S}}\otimes\mathbb{I}_{\mathfrak{E}}+\mathbb{I}_{\mathfrak{S}} \otimes H_{\mathfrak{E}}+H_{\mathfrak{S}}\otimes\sum_{i=1}^{N}h_{i}, \tag{8}\] where \(H_{\mathfrak{S}}\) is the system Hamiltonian (3), and the \(h_{i}\) are arbitrary traceless Hermitian operators acting on the \(i\)th qubit of \(\mathfrak{E}\). In conclusion, we have shown that any model of interacting qubits that supports a pointer basis for a system qubit \(\mathfrak{S}\) may have at most a separable interaction term \(H_{\mathfrak{S}\mathfrak{E}}\). Moreover, this interaction terms has to be factorizable into the system Hamiltonian \(H_{\mathfrak{S}}\) and traceless terms acting on the environmental qubits \(\mathfrak{E}\). It is important to emphasize that no additional conditions are required pertaining to, for instance, the intra-environmental interactions. Schematically, our findings are illustrated in Fig. 1. ### Further conditions for quantum Darwinism It was shown in Ref. [11] that only a special structure of states is compatible with the emergence of quantum Darwinism. These states are of the _singly-branching form_[38; 12], which are the only states to support epsilon quantum correlations as measured by quantum discord [39]. Singly branching states are pointer states of \(\mathfrak{S}\) correlated with the environment states in the special form, \[\ket{\psi(t)}=\alpha_{0}\ket{0}\bigotimes_{i\in\mathfrak{E}}\ket{\mathfrak{o} _{i}(t)}+\beta_{0}\ket{1}\bigotimes_{i\in\mathfrak{E}}\ket{\mathfrak{l}_{i}(t )}\,. \tag{9}\] It is easy to see that such a singly branching form can emerge if and only if there are no intra-environmental interactions. Thus, we conclude that quantum Darwinism can only be supported by Hamiltonians with separable interaction between \(\mathfrak{S}\) and \(\mathfrak{E}\), cf. Eq. (8), and no intra-environmental interactions, i.e., \(J_{ij}^{\alpha\beta}=0\) in Eq. (4). The remaining question now is whether all such Hamiltonians provide so-called good decoherence, which makes their corresponding \(\mathfrak{E}\) good channels for information transfer. ## III Coefficients of the Hamiltonian To analyze the _dynamical_ emergence of quantum Darwinism, we now solve for the average dynamics under an arbitrary, random Hamiltonian for which the system \(\mathfrak{S}\) has a pointer basis of the single branching form (9). We will find that the efficiency of information transfer within \(\mathfrak{E}\) is governed by the randomness of the interaction coefficients. 
### Solving the dynamics To this end, consider an arbitrary Hamiltonian of the form (8), where we further enforce vanishing intra-environmental interactions \(J_{ij}^{\alpha\beta}=0\). Note that in Eq. (8) the \(h_{i}\) are Hermitian and traceless. Hence, we can write equivalently (and without loss of generality) \[H=\sigma_{0}^{z}\otimes\sum_{i=1}^{N}B_{i}\,\sigma_{i}^{z}, \tag{10}\] where the \(B_{i}\) are real random variables. We are now interested in the dynamics induced by Eq. (10), and we choose an arbitrary separable initial condition. Therefore, we write \[\ket{\psi_{0}}=\left(\alpha_{0}\ket{0}+\beta_{0}\ket{1}\right)\bigotimes_{i= 1}^{N}(\alpha_{i}\ket{0_{i}}+\beta_{i}\ket{1_{i}}), \tag{11}\] where the \(\alpha_{i}\) and \(\beta_{i}\) are arbitrary complex coefficients. Evolving this \(\ket{\psi_{0}}\) under the corresponding Schrodinger equation, \(i\partial_{t}\ket{\psi}=H\ket{\psi}\), we obtain the time-dependent solution, \[\ket{\psi(t)}=\alpha_{0}\ket{0}\bigotimes_{i\in\mathfrak{E}}\ket{\mathfrak{o} _{i}(t)}+\beta_{0}\ket{1}\bigotimes_{i\in\mathfrak{E}}\ket{\mathfrak{l}_{i}(t )}, \tag{12}\] where we introduced \[\begin{split}\ket{\mathfrak{o}_{i}(t)}&=\alpha_{i} \,\exp\left(iB_{i}t\right)\ket{0}+\beta_{i}\,\exp\left(-iB_{i}t\right)\ket{1} \\ \ket{\mathfrak{l}_{i}(t)}&=\alpha_{i}\,\exp\left(-iB_{i }t\right)\ket{0}+\beta_{i}\,\exp\left(iB_{i}t\right)\ket{1}\,.\end{split} \tag{13}\] As usual, the reduced density matrix of \(\mathfrak{S}\) is given by tracing out \(\mathfrak{E}\), \(\rho_{\mathfrak{S}}(t)=\operatorname{tr}_{\mathfrak{E}}\left\{\left|\psi(t) \right\rangle\left\langle\psi(t)\right|\right\}\). The corresponding decoherence factor [32] is given by the amplitude of the off-diagonal coefficients of the reduced density matrix \(\rho_{\mathfrak{S}}\) in the basis \(\left\{\left|0\right\rangle,\left|1\right\rangle\right\}\). We have \[\Gamma(t)=\prod_{i\in\mathfrak{E}}\Gamma_{i}(t)=\prod_{i\in\mathfrak{E}}\langle\mathfrak{o}_{i}(t)|\mathfrak{l}_{i}(t)\rangle \tag{14}\] and, since the \(B_{i}\) are stochastically independent, we can write \[\left\langle\left|\prod\Gamma_{i}(t)\right|^{2}\right\rangle=\prod\left\langle\left|\Gamma_{i}(t)\right|^{2}\right\rangle. \tag{15}\] It is easy to see that we have \[\Gamma_{i}(t)=\left|\alpha_{i}\right|^{2}\,\exp\left(-2iB_{i}t\right)+\left| \beta_{i}\right|^{2}\,\exp\left(2iB_{i}t\right). \tag{16}\] For random \(B_{i}\) it is, however, more instructive to compute the decoherence factors averaged over all possible values for \(B_{i}\). Further denoting the probability density function of \(B_{i}\) as \(P(B_{i}=x)=f_{i}(x)\), we show in Appendix B that we obtain \[\begin{split}\left\langle\left|\Gamma_{i}(t)\right|^{2}\right\rangle &=\left|\alpha_{i}\right|^{4}+\left|\beta_{i}\right|^{4}\\ &+2\left|\alpha_{i}\right|^{2}\left|\beta_{i}\right|^{2}\left| \widetilde{f}_{i}(4t)\right|\cos(\arg(\widetilde{f}_{i}(4t))),\end{split} \tag{17}\] where \(\widetilde{f}_{i}\) is the characteristic function of \(B_{i}\) \[\widetilde{f}_{i}(k)=\int_{-\infty}^{+\infty}dx\,f_{i}(x)\,\exp\left(ikx \right). \tag{18}\] In conclusion, we have derived an analytic expression for the average decoherence function, which governs the rate with which information about \(\mathfrak{S}\) is communicated through \(\mathfrak{E}\). ### Rate and irreversibility of information transfer Equation (17) demonstrates the relationship between the emergence of classicality and the randomness of system-environment interactions. Indeed, the probability distributions \(f_{i}\) of the couplings \(B_{i}\) between \(\mathfrak{S}\) and \(\mathfrak{E}\) play a central role in the rate of information transfer. 
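Equation (17) can be verified numerically in a few lines. The sketch below draws the coupling \(B_{i}\) from a zero-mean Gaussian, an assumed example distribution for which the characteristic function is \(\widetilde{f}_{i}(k)=e^{-\sigma^{2}k^{2}/2}\), and compares the Monte Carlo average of \(|\Gamma_{i}(t)|^{2}\) computed from Eq. (16) with the closed-form expression; the coefficients \(\alpha_{i}\) and \(\beta_{i}\) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary qubit amplitudes and an assumed Gaussian coupling distribution.
alpha, beta = 0.8, 0.6           # |alpha|^2 + |beta|^2 = 1
sigma_B = 1.0                    # standard deviation of B_i
B_samples = rng.normal(0.0, sigma_B, size=200_000)

def gamma_i(t, B):
    """Single-qubit decoherence factor, Eq. (16)."""
    return abs(alpha) ** 2 * np.exp(-2j * B * t) + abs(beta) ** 2 * np.exp(2j * B * t)

def avg_abs_gamma_sq_exact(t):
    """Closed form of Eq. (17) for a zero-mean Gaussian: f~(k) = exp(-sigma^2 k^2 / 2)."""
    char = np.exp(-0.5 * (sigma_B * 4.0 * t) ** 2)   # real and positive, so arg = 0
    return abs(alpha) ** 4 + abs(beta) ** 4 + 2 * abs(alpha) ** 2 * abs(beta) ** 2 * char

for t in (0.0, 0.2, 0.5, 2.0):
    mc = np.mean(np.abs(gamma_i(t, B_samples)) ** 2)
    print(f"t = {t:3.1f}:  Monte Carlo = {mc:.4f}   Eq. (17) = {avg_abs_gamma_sq_exact(t):.4f}")
```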
Observe that the decoherence factors decrease rapidly if \(\widetilde{f}_{i}\) decrease rapidly. Since \(\widetilde{f}_{i}\) is the Fourier transform of the probability distribution \(f_{i}\), the order of differentiability of \(f_{i}\) gives us the order of decay of \(\widetilde{f}_{i}\), while the smallest characteristic length in the distribution \(f_{i}\) gives us the inverse of the characteristic time of decay of \(\widetilde{f}_{i}\). Furthermore, if the support of \(B_{i}\) is discrete and finite, then the characteristic function \(\widetilde{f}_{i}(k)\) is a periodic (or quasi-periodic) function and therefore does not converge to \(0\). Hence, having continuous support for the \(f_{i}\) is essential for the emergence of truly classical behavior. In fact, if the \(f_{i}\) distribution is continuous, then the information is transferred _irreversibly_. In this case, the \(f_{i}\) are integrable \[\int_{-\infty}^{+\infty}dx\,\left|f_{i}(x)\right|=\int_{-\infty}^{+\infty}dx\, f_{i}(x)=1<\infty, \tag{19}\] and thus, by virtue of the Riemann-Lebesgue Lemma, \[\left|\widetilde{f}_{i}(k)\right|\xrightarrow[k\to\infty]{}0\quad\text{and thus}\quad\left\langle\left|\Gamma_{i}(t)\right|^{2}\right\rangle\xrightarrow[t\to\infty]{}\epsilon_{i}<1\,, \tag{20}\] where \(\epsilon_{i}\) depends on the initial state. Finally, we note that a perfect record of the information about \(\mathfrak{S}\) in the \(i\)th qubit corresponds to \(\Gamma_{i}=0\). This is typically not the case. However, as the \(\left\langle\left|\Gamma_{i}(t)\right|^{2}\right\rangle\) become strictly less than one, Eq. (15) shows that \(\left|0\right\rangle\bigotimes_{i\in\mathfrak{F}}\left|\mathfrak{o}_{i}(t)\right\rangle\) and \(\left|1\right\rangle\bigotimes_{i\in\mathfrak{F}}\left|\mathfrak{l}_{i}(t)\right\rangle\) become orthogonal on average for a sufficiently large fragment \(\mathfrak{F}\). Thus, a small set of qubits of the environment is enough to obtain an almost complete record of the state of \(\mathfrak{S}\).

### Quantum Darwinism - the classical plateau

The hallmark result of quantum Darwinism is the emergence of the "classical plateau" [5; 6; 7; 8; 9; 10; 11; 12; 13], cf. Fig. 2.

Figure 2: Asymptotic mutual information (27) and Holevo quantity (28) as functions of the fragment size. The classical plateau corresponds to the value \(\mathcal{S}_{max}=1\).

This classical plateau is a consequence of redundant encoding of the same information in \(\mathfrak{E}\). To this end, consider again a fragment \(\mathfrak{F}\) of \(\mathfrak{E}\). If any \(\mathfrak{F}\) carries the same information about \(\mathfrak{S}\), then any two observers accessing different \(\mathfrak{F}\) learn exactly the same information about \(\mathfrak{S}\). The amount of information that a fragment \(\mathfrak{F}\) of \(\mathfrak{E}\) contains about the system \(\mathfrak{S}\) can be quantified with the mutual information \(I(\mathfrak{S}:\mathfrak{F})\) defined as \[I(\mathfrak{S}:\mathfrak{F})=\mathcal{S}_{\mathfrak{S}}+\mathcal{S}_{\mathfrak{ F}}-\mathcal{S}_{\mathfrak{S}\mathfrak{F}}\,, \tag{21}\] where \(\mathcal{S}(\rho)=-\mathrm{tr}\left\{\rho\log(\rho)\right\}\).
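As a concrete illustration of the role of the coupling distributions discussed above (and anticipating the two choices used for the representative examples of Sec. IV), one may compare \(B_{i}\) drawn uniformly from the continuous interval \([-1,1]\) with \(B_{i}\) drawn uniformly from the discrete set \(\{\pm 1,\pm 1/2\}\); this is a worked example, not part of the derivation: \[\widetilde{f}_{i}(k)=\frac{1}{2}\int_{-1}^{+1}dx\,e^{ikx}=\frac{\sin k}{k}\xrightarrow[k\to\infty]{}0\,,\qquad\widetilde{f}_{i}(k)=\frac{1}{4}\sum_{x\in\{\pm 1,\pm\frac{1}{2}\}}e^{ikx}=\frac{1}{2}\left[\cos k+\cos\frac{k}{2}\right].\] In the continuous case \(\left\langle\left|\Gamma_{i}(t)\right|^{2}\right\rangle\) relaxes (with the \(1/t\) tail of \(\sin(4t)/4t\)) towards \(\epsilon_{i}=\left|\alpha_{i}\right|^{4}+\left|\beta_{i}\right|^{4}\), while in the discrete case it is periodic in \(t\) and the information transfer is not irreversible.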
The maximal classical information that can be accessed by any observer is upper-bounded by the Holevo quantity [40; 41] \[\chi(\mathfrak{S}:\tilde{\mathfrak{F}})=\mathcal{S}_{\mathfrak{S}}-\mathcal{S}_{ \mathfrak{S}|\tilde{\mathfrak{F}}} \tag{22}\] where \(\mathcal{S}_{\mathfrak{S}|\tilde{\mathfrak{F}}}\) is the conditional von Neumann entropy defined as the minimal von Neumann entropy of \(\mathfrak{S}\) obtained after a measurement on \(\mathfrak{F}\). The difference of the mutual information, \(I(\mathfrak{S}:\mathfrak{F})\), and the Holevo quantity, \(\chi(\mathfrak{S}:\tilde{\mathfrak{F}})\), has been called quantum discord [39], \[D(\mathfrak{S}:\tilde{\mathfrak{F}})\equiv I(\mathfrak{S}:\mathfrak{F})-\chi( \mathfrak{S}:\tilde{\mathfrak{F}})\geq 0\,. \tag{23}\] Quantum discord measures the genuinely quantum information encoded in \(\mathfrak{F}\). For each \(\mathfrak{F}\) and its complement, \(\overline{\mathfrak{F}}=\mathfrak{E}\setminus\mathfrak{F}\) we define the corresponding decoherence factors \[\Gamma_{\mathfrak{F}}=\prod_{i\in\mathfrak{F}}\Gamma_{i}\quad \text{and}\quad\Gamma_{\overline{\mathfrak{F}}}=\prod_{i\notin\mathfrak{F}}\Gamma_{i}. \tag{24}\] With these definitions, one can then show that for small enough decoherence factors [32] we have \[I(\mathfrak{S}:\tilde{\mathfrak{F}})\simeq\mathcal{S}_{max}-\frac{\xi(\left| \alpha_{0}\right|^{2})}{2}\left[\left|\Gamma\right|^{2}+\left|\Gamma_{\mathfrak{ F}}\right|^{2}-\left|\Gamma_{\overline{\mathfrak{F}}}\right|^{2}\right], \tag{25}\] where \(\mathcal{S}_{max}=-\left|\alpha_{0}\right|^{2}\log(\left|\alpha_{0}\right|^{2} )-(1-\left|\alpha_{0}\right|^{2})\log(1-\left|\alpha_{0}\right|^{2})\), which is the maximal value that the von Neumann entropy of \(\mathfrak{S}\) can attain. Correspondingly, we obtain for the Holevo quantity (see Appendix C) \[\chi(\mathfrak{S}:\tilde{\mathfrak{F}})\simeq\mathcal{S}_{max}-\frac{\xi( \left|\alpha_{0}\right|^{2})}{2}\left|\Gamma_{\mathfrak{F}}\right|^{2}\,, \tag{26}\] which is fully consistent with earlier findings [42]. Finally, in the limit of long times, \(t\gg 1\), and for smooth enough \(f_{i}\) (20) we obtain the following asymptotic expression for the mutual information \[I(\mathfrak{S}:\tilde{\mathfrak{F}})\simeq\mathcal{S}_{max}-\frac{\xi(\left| \alpha_{0}\right|^{2})}{2}\left[\prod_{i\in\mathfrak{E}}\epsilon_{i}+\prod_{i \in\mathfrak{F}}\epsilon_{i}-\prod_{i\notin\mathfrak{F} }\epsilon_{i}\right]\,, \tag{27}\] and the Holevo quantity \[\chi(\mathfrak{S}:\tilde{\mathfrak{F}})\simeq\mathcal{S}_{max}-\frac{\xi( \left|\alpha_{0}\right|^{2})}{2}\prod_{i\in\mathfrak{F}}\epsilon_{i}\,. \tag{28}\] Further, averaging over all possible separable initial states, we have \[I(\mathfrak{S}:n)_{\infty}\simeq\mathcal{S}_{max}-\frac{\xi(\left|\alpha_{0} \right|^{2})}{2}\left[\overline{\epsilon}^{N}+\overline{\epsilon}^{n}- \overline{\epsilon}^{N-n}\right] \tag{29}\] and \[\chi(\mathfrak{S}:n)_{\infty}\simeq\mathcal{S}_{max}-\frac{\xi(\left|\alpha_{ 0}\right|^{2})}{2}\,\overline{\epsilon}^{n}\,, \tag{30}\] where \(n\) is the size of \(\mathfrak{F}\) and \(\overline{\epsilon}=2/3\) (see Appendix C). These results are depicted in Fig. 2 for \(N=50\) environmental qubits. Both the mutual information and the Holevo quantity exhibit a steep initial rise with increasing fragment size \(n\), as larger fragments provide more data about \(\mathfrak{S}\). This initial rise is followed by the classical plateau.
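The asymptotic curves (29)-(30) are simple to evaluate directly. The sketch below uses \(N=50\) and \(\overline{\epsilon}=2/3\) as quoted above, together with an arbitrarily chosen illustrative value of \(|\alpha_{0}|^{2}\); recall that the leading-order expressions are least accurate near the edges \(n\approx 0\) and \(n\approx N\).

```python
import numpy as np

# Asymptotic mutual information (29) and Holevo quantity (30) versus fragment size.

def xi(a2):
    # Eq. (37) of Appendix C
    return 4 * a2 * (1 - a2) * np.arctanh(1 - 2 * a2) / (1 - 2 * a2)

N, eps, a2 = 50, 2.0 / 3.0, 0.3          # a2 = |alpha_0|^2 (assumed, illustrative)
S_max = -a2 * np.log(a2) - (1 - a2) * np.log(1 - a2)
n = np.arange(0, N + 1)

I_inf   = S_max - 0.5 * xi(a2) * (eps**N + eps**n - eps**(N - n))
chi_inf = S_max - 0.5 * xi(a2) * eps**n

for k in (1, 2, 5, 25, 45, 48, 49):
    print(f"n={k:2d}  I/S_max={I_inf[k]/S_max:6.3f}  chi/S_max={chi_inf[k]/S_max:6.3f}")
# The flat region I ~ S_max for 1 << n << N is the classical plateau of Fig. 2.
```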
## IV Representative examples

We conclude the analysis with the numerical solution of four representative examples. To support quantum Darwinism a Hamiltonian must obey the following conditions: existence of a pointer basis, continuous support, and no intra-environment interactions. Our first example exhibits these three conditions, and for each following example we successively remove one of these conditions, cf. Tab. 1.

\begin{table} \begin{tabular}{l c c c} \hline \hline & Pointer Basis & Continuous Support & No Scrambling \\ \hline CPDI (31) & ✓ & ✓ & ✓ \\ DPDI (32) & ✓ & \(\times\) & ✓ \\ CODI (33) & \(\times\) & ✓ & ✓ \\ CPDI-S (34) & ✓ & ✓ & \(\times\) \\ \hline \hline \end{tabular} \end{table} Table 1: Models classification

### Continuous Parallel Decoherent Interaction

The first model has a pointer basis, random coupling coefficients with a continuous spectrum, and does not exhibit scrambling in \(\mathfrak{E}\). The corresponding Hamiltonian reads \[H_{\mathrm{CPDI}}=\sigma_{0}^{z}\otimes\sum_{i=1}^{N}B_{i}\sigma_{i}^{z} \tag{31}\] where \(B_{i}\) are independent random variables, drawn uniformly from \(B_{i}\in[-1,1]\). For specificity, we call this model _Continuous Parallel Decoherent Interaction (CPDI)_. In Fig. 3(a) we plot the resulting mutual information (21), as a function of the fragment size, which rapidly converges towards the asymptotic expression (27). Note the distinct classical plateau indicative of quantum Darwinism. Moreover, we observe relaxation of \(\mathfrak{S}\) into its stationary pointer states over a typical time \(\tau\simeq 1/4\), at which point the information transfer becomes irreversible.

### Discrete Parallel Decoherent Interaction

Our second example is called _Discrete Parallel Decoherent Interaction (DPDI)_. The corresponding Hamiltonian is \[H_{\text{DPDI}}=\sigma_{0}^{z}\otimes\sum_{i=1}^{N}B_{i}\sigma_{i}^{z} \tag{32}\] where \(B_{i}\) are again independent random variables. However, in contrast to the continuous case in Eq. (31), the \(B_{i}\) are now drawn uniformly from the _discrete_ set \(B_{i}\in\{-1,-0.5,0.5,1\}\). In Fig. 3(b) we depict the resulting mutual information. As expected, we observe that the classical plateau appears and disappears periodically, and hence the information transfer is no longer irreversible. At instants at which the plateau completely vanishes, \(I(\mathfrak{S}:\mathfrak{F})\) is linear in the fragment size. This indicates that the information about \(\mathfrak{S}\) encoded in \(\mathfrak{E}\) is distributed throughout the entire environment (no redundancy).

### Continuous Orthogonal Decoherent Interaction

As a next example, we consider _Continuous Orthogonal Decoherent Interaction (CODI)_, which refers to our first model (31) but with an added external field. This additional term is chosen to be not parallel to the interaction between \(\mathfrak{S}\) and \(\mathfrak{E}\), and hence CODI does not support a pointer basis. The Hamiltonian reads \[H_{\text{CODI}}=\sigma_{0}^{y}\otimes\mathbb{I}_{E}+\sigma_{0}^{z}\otimes \sum_{i=1}^{N}B_{i}\sigma_{i}^{z}, \tag{33}\] where, as before, \(B_{i}\) are independent random variables, drawn uniformly from \(B_{i}\in[-1,1]\). In this model, information about the observable \(\sigma_{0}^{z}\) can be registered in \(\mathfrak{E}\), but the eigenstates of this observable are not stable and hence not classically objective. This observation is further supported by Fig. 3(c), which does not exhibit any form of classical plateau.
Moreover, at all instants \(I(\mathfrak{S}:\mathfrak{F})\) is a linear function of the size of \(\mathfrak{F}\), which is a consequence of the complete absence of any redundancy.

### Continuous Parallel Decoherent Interaction with Scrambling

As a final example, we again consider Eq. (31) but now design \(\mathfrak{E}\) to exhibit scrambling dynamics. Accordingly, this model is called Continuous Parallel Decoherent Interaction with Scrambling (CPDI-S), and the Hamiltonian becomes \[H_{\text{CPDI-S}}=\sigma_{0}^{z}\otimes\sum_{i=1}^{N}B_{i}\sigma_{i}^{z}+\sum _{1\leq i<j\leq N}J_{ij}\sigma_{i}^{z}\otimes\sigma_{j}^{z} \tag{34}\] where again \(B_{i}\) are independent random variables, drawn uniformly from \(B_{i}\in[-1,1]\), and \(J_{ij}\) are independent random variables, drawn uniformly from \(J_{ij}\in[-0.03,0.03]\). As Fig. 3(d) shows, a classical plateau rapidly emerges over a time scale of \(\tau_{\mathfrak{S}\mathfrak{E}}\simeq 1/4\). However, this plateau quickly "disperses" as the quantum information becomes non-local due to scrambling in \(\mathfrak{E}\). We refer to Ref. [32] for a more detailed analysis of this particular model. Furthermore, it is interesting to note that Eq. (34) is another example that demonstrates the competition of decoherence and scrambling as a "sink for quantum information" as analyzed by (some of) us in Ref. [25].

Figure 3: Mutual information \(I(\mathfrak{S}:\mathfrak{F})\) as a function of time and fragment size for an arbitrary separable initial state of 9 qubits (\(N=8\)), divided by the von Neumann entropy of \(\mathfrak{S}\) (averaged over \(10^{2}\) realizations). **(a)** CPDI (31): irreversible information transfer and classical objectivity. **(b)** DPDI (32): non-irreversible (periodic) information transfer. **(c)** CODI (33): no local information transfer or redundancy. **(d)** CPDI-S (34): competition of emergence of classical objectivity and scrambling in \(\mathfrak{E}\).

## V Concluding Remarks

In the present work we determined the set of qubit models which support the emergence of classicality. In particular, we established a classification of 2-body interaction models based on the structure of the Hamiltonian and on the nature of its coefficients. The existence of an "exact" pointer basis for the qubit \(\mathfrak{S}\) requires the interaction Hamiltonian to be separable between \(\mathfrak{S}\) and its environment \(\mathfrak{E}\) such that the part acting on \(\mathfrak{S}\) is proportional to the self Hamiltonian of \(\mathfrak{S}\). We call that type of structure a _parallel decoherent interaction_. Furthermore, without any intra-environment interactions, this Hamiltonian structure leads to a branching state structure, the only one compatible with quantum Darwinism [11]. Furthermore, intra-environment interactions can lead to information scrambling in \(\mathfrak{E}\), which deteriorates the branching structure. In such situations, the state of \(\mathfrak{S}\) is still a classical mixture, but this classicality is hidden from an outside observer who must take a measurement on a non-local part of \(\mathfrak{E}\) in order to recover almost all the information about \(\mathfrak{S}\). This indicates a clear competition of the emergence of classical objectivity and scrambling dynamics. The conceptual notions and the insight gained in this work may open the door to further inquiry, such as the study of \(k\)-body interactions.
In particular, there is every reason to believe that this type of interaction leads to a non-local information transfer. Indeed, for such interactions, the information about \(\mathfrak{S}\) is directly encoded in \(\mathfrak{E}\) by the entanglement of \(\mathfrak{S}\) with fragments \(\mathfrak{F}\) of size \(k\). This results in a lower redundancy of information. However, the analytical analysis of the dynamics is much more involved than the present 2-body interaction case, which is why we leave \(k\)-body interactions for future work.

###### Acknowledgements.

This work was carried out during a 15-week internship at the University of Maryland, Baltimore County (P.D.). We gratefully acknowledge several discussions with Joshua Chiel, who has provided us with perceptive comments. A.T. acknowledges support from the Center for Nonlinear Studies and the U.S. DOE under the LDRD program at Los Alamos. S.D. acknowledges support from the John Templeton Foundation under Grant No. 62422.

## Appendix A Hamiltonians with a pointer basis

In this appendix we provide further details that lead to the Hamiltonian structure (8). We start by decomposing the interaction term (5) into \[H_{\mathfrak{S}\mathfrak{E}}=\sigma_{0}^{x}\otimes\sum_{i}\vec{J}_{i}^{x} \cdot\vec{\sigma_{i}}+\sigma_{0}^{y}\otimes\sum_{i}\vec{J}_{i}^{y}\cdot\vec{ \sigma_{i}}+\sigma_{0}^{z}\otimes\sum_{i}\vec{J}_{i}^{z}\cdot\vec{\sigma_{i}}. \tag{10}\] Further, we can expand for any pointer observable \(A=A_{\mathfrak{S}}\otimes\mathbb{I}_{\mathfrak{E}}\) the commutator \[\begin{split}[H,A]&=[H_{\mathfrak{S}},A_{\mathfrak{ S}}]\otimes\mathbb{I}_{\mathfrak{E}}+[\sigma_{0}^{x},A_{\mathfrak{S}}] \otimes H_{\mathfrak{S}\mathfrak{E}}^{x}\\ &\quad+[\sigma_{0}^{y},A_{\mathfrak{S}}]\otimes H_{\mathfrak{S} \mathfrak{E}}^{y}+[\sigma_{0}^{z},A_{\mathfrak{S}}]\otimes H_{\mathfrak{S} \mathfrak{E}}^{z}\,,\end{split} \tag{11}\] where \(H_{\mathfrak{S}\mathfrak{E}}^{x}\), \(H_{\mathfrak{S}\mathfrak{E}}^{y}\), and \(H_{\mathfrak{S}\mathfrak{E}}^{z}\) are linear combinations of \(\sigma_{i}^{x}\), \(\sigma_{i}^{y}\) and \(\sigma_{i}^{z}\), which are orthonormal to \(\mathbb{I}_{\mathfrak{E}}\). Thus, a vanishing commutator, \([H,A]=0\), immediately yields \[[\sigma_{0}^{x},A_{\mathfrak{S}}]\otimes H_{\mathfrak{S}\mathfrak{E}}^{x}+[ \sigma_{0}^{y},A_{\mathfrak{S}}]\otimes H_{\mathfrak{S}\mathfrak{E}}^{y}+[ \sigma_{0}^{z},A_{\mathfrak{S}}]\otimes H_{\mathfrak{S}\mathfrak{E}}^{z}=0 \tag{12}\] as well as \([H_{\mathfrak{S}},A_{\mathfrak{S}}]\otimes\mathbb{I}_{\mathfrak{E}}=0\,\). Therefore, for any non-trivial \(A_{\mathfrak{S}}\) we must have \[H_{\mathfrak{S}\mathfrak{E}}^{x}\propto H_{\mathfrak{S}\mathfrak{E}}^{y} \propto H_{\mathfrak{S}\mathfrak{E}}^{z}\,, \tag{13}\] and the Hamiltonian (10) becomes \[H_{\mathfrak{S}\mathfrak{E}}=\vec{J}_{0}\cdot\vec{\sigma_{0}}\otimes\sum_{i =1}^{N}\vec{J}_{i}\cdot\vec{\sigma_{i}}\,. \tag{14}\] Thus, we have \(\left[\vec{B}_{0}\cdot\vec{\sigma_{0}},A_{\mathfrak{S}}\right]=0\) and \(\left[\vec{J}_{0}\cdot\vec{\sigma_{0}},A_{\mathfrak{S}}\right]=0\), which is equivalent to \[\vec{B_{0}}\wedge\vec{J}_{0}=\vec{0}. \tag{15}\] This means that \(\vec{J}_{0}\) and \(\vec{B_{0}}\) are parallel. We finally obtain the desired result Eq. (8).

## Appendix B Average decoherence factors

Equation (17) can be obtained by direct derivation.
Consider \[\left\langle\left|\Gamma_{i}(t)\right|^{2}\right\rangle=\left|\alpha_{i} \right|^{4}+\left|\beta_{i}\right|^{4}+2\left|\alpha_{i}\right|^{2}\left|\beta _{i}\right|^{2}\left\langle\cos(4B_{i}t)\right\rangle \tag{16}\] where the average is given by \[\left\langle\cos(4B_{i}t)\right\rangle=\int dx\,f_{i}(x)\cos(4xt)\,. \tag{17}\] As above, \(f_{i}(x)\) denotes the probability density of the magnetic field \(B_{i}\). Employing its Fourier transform \(\widetilde{f}_{i}(k)\), we write \[\left\langle\cos(4B_{i}t)\right\rangle=\int dx\int dk\,\frac{\widetilde{f}_{ i}(k)}{2\pi}\exp\left(-ikx\right)\cos(4xt). \tag{18}\] Now using [43] \[\int dx\,\exp\left(-ikx\right)\cos(ax)=\pi\,\left(\delta(k+a)+\delta(k-a) \right), \tag{19}\] we have \[\left\langle\cos(4B_{i}t)\right\rangle=\frac{1}{2}\,\left(\widetilde{f}_{i}( 4t)+\widetilde{f}_{i}(-4t)\right) \tag{20}\] and hence \[\left\langle\cos(4B_{i}t)\right\rangle=\left|\widetilde{f}_{i}(4t)\right|\, \cos(\arg(\widetilde{f}_{i}(4t)))\,. \tag{21}\]

## Appendix C Mutual information and asymptotics

In this final appendix, we summarize the derivation leading to the asymptotic expressions of the mutual information (27) and the Holevo quantity (28). We start by considering the reduced density operator of fragment \(\mathfrak{F}\), which is given by \(\rho_{\mathfrak{F}}=\mathrm{tr}_{\overline{\mathfrak{F}}}\left\{\rho\right\}\), and we have \[\rho_{\mathfrak{F}}(t)=\left|\alpha_{0}\right|^{2}\left|F_{0}(t)\right\rangle \left\langle F_{0}(t)\right|+\left|\beta_{0}\right|^{2}\left|F_{1}(t)\right\rangle \left\langle F_{1}(t)\right|\,. \tag{30}\] Explicitly, the states \(\left|F_{0}(t)\right\rangle\) and \(\left|F_{1}(t)\right\rangle\) read \[\left|F_{0}(t)\right\rangle =\bigotimes_{j\in\mathfrak{F}}(\alpha_{j}\,\exp\left(iB_{j}t \right)\left|0\right\rangle+\beta_{j}\,\exp\left(-iB_{j}t\right)\left|1 \right\rangle)\] \[\left|F_{1}(t)\right\rangle =\bigotimes_{j\in\mathfrak{F}}(\alpha_{j}\,\exp\left(-iB_{j}t \right)\left|0\right\rangle+\beta_{j}\,\exp\left(iB_{j}t\right)\left|1\right\rangle). \tag{31}\] The corresponding decoherence factor is simply given by \(\left\langle F_{1}(t)\middle|F_{0}(t)\right\rangle=\Gamma_{\mathfrak{F}}(t)\). Since we are working with qubits, it is then a simple exercise to show that \[\mathcal{S}_{\mathfrak{S}}=h\left[\frac{1}{2}(1+\sqrt{1-4\left|\alpha_{0} \right|^{2}\left|\beta_{0}\right|^{2}(1-\left|\Gamma\right|^{2 })})\right] \tag{32}\] and \[\mathcal{S}_{\mathfrak{F}}=h\left[\frac{1}{2}(1+\sqrt{1-4\left|\alpha_{0} \right|^{2}\left|\beta_{0}\right|^{2}(1-\left|\Gamma_{\mathfrak{F}}\right|^{2 })})\right] \tag{33}\] where \[h\left[x\right]=-x\log(x)-(1-x)\log(1-x)\,. \tag{34}\] These expressions can be further simplified by expanding for small decoherence factors. We have in leading order \[\mathcal{S}_{\mathfrak{S}}\simeq\mathcal{S}_{max}-\frac{\xi(\left|\alpha_{0} \right|^{2})}{2}\left|\Gamma\right|^{2} \tag{35}\] and \[\mathcal{S}_{\mathfrak{F}}\simeq\mathcal{S}_{max}-\frac{\xi(\left|\alpha_{0} \right|^{2})}{2}\left|\Gamma_{\mathfrak{F}}\right|^{2}\,, \tag{36}\] where we introduced the notation \[\xi(\left|\alpha_{0}\right|^{2})=\frac{4\left|\alpha_{0}\right|^{2}(1-\left| \alpha_{0}\right|^{2})\operatorname{arctanh}(1-2\left|\alpha_{0}\right|^{2}) }{1-2\left|\alpha_{0}\right|^{2}}\,. \tag{37}\] Note that we obtain an equivalent expression for the complement \(\overline{\mathfrak{F}}\). Thus, the mutual information (21) becomes \[I(\mathfrak{S}:\mathfrak{F})\simeq\mathcal{S}_{max}-\frac{\xi(\left|\alpha_{0} \right|^{2})}{2}\left[\left|\Gamma\right|^{2}+\left|\Gamma_{\mathfrak{F}} \right|^{2}-\left|\Gamma_{\overline{\mathfrak{F}}}\right|^{2}\right]\,. \tag{38}\] Following similar steps as detailed in Ref.
[42] the corresponding Holevo quantity can be written as \[\chi(\mathfrak{S}:\tilde{\mathfrak{F}}) =h\left[\frac{1}{2}(1+\sqrt{1-4\left|\alpha_{0}\right|^{2}\left| \beta_{0}\right|^{2}(1-\left|\Gamma\right|^{2})})\right] \tag{39}\] \[-h\left[\frac{1}{2}(1+\sqrt{1-4\left|\alpha_{0}\right|^{2}\left| \beta_{0}\right|^{2}(\left|\Gamma_{\mathfrak{F}}\right|^{2}-\left|\Gamma \right|^{2})})\right]\] which for weak decoherence in leading order simply is \[\chi(\mathfrak{S}:\tilde{\mathfrak{F}})\simeq\mathcal{S}_{max}-\frac{\xi( \left|\alpha_{0}\right|^{2})}{2}\left|\Gamma_{\mathfrak{F}}\right|^{2}\,. \tag{40}\] Finally, we note that for continuous distributions and \(t\gg 1\) we have (with Eq. (20)) \[I(\mathfrak{S}:\mathfrak{F})\simeq\mathcal{S}_{max}-\frac{\xi(\left|\alpha_{0 }\right|^{2})}{2}\left[\prod_{i\in\mathfrak{E}}\epsilon_{i}+\prod_{i\in \mathfrak{F}}\epsilon_{i}-\prod_{i\notin\mathfrak{F}}\epsilon_{i}\right]\,, \tag{41}\] where the \(\epsilon_{i}\) are given by \[\epsilon_{i}=\left|\alpha_{i}\right|^{4}+\left|\beta_{i}\right|^{4}\,. \tag{42}\] Averaging \(\epsilon_{i}\) over all possible values of \(\alpha_{i}\) and \(\beta_{i}\) we obtain \[\overline{\epsilon}=\int_{0}^{1}dx\,(x^{2}+(1-x)^{2})=\frac{2}{3}. \tag{43}\]
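As a complement to the representative examples of Sec. IV, the following minimal exact-dynamics sketch (assumed small environment, uniformly drawn couplings and a single arbitrary time; all numbers are illustrative) reproduces the qualitative CPDI behavior of Fig. 3(a): a steep rise of \(I(\mathfrak{S}:\mathfrak{F})\) with fragment size followed by a plateau. Because the Hamiltonian is diagonal in the computational basis, the evolution reduces to phases.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def random_product_state(n_qubits):
    amps = np.array([1.0])
    for _ in range(n_qubits):
        a = rng.normal(size=2) + 1j * rng.normal(size=2)
        amps = np.kron(amps, a / np.linalg.norm(a))
    return amps

def entropy(psi, keep, n):
    """von Neumann entropy (base 2) of the qubits in `keep` for pure state psi."""
    keep = sorted(keep)
    rest = [q for q in range(n) if q not in keep]
    m = psi.reshape([2] * n).transpose(keep + rest).reshape(2 ** len(keep), -1)
    s = np.linalg.svd(m, compute_uv=False) ** 2
    s = s[s > 1e-12]
    return float(-(s * np.log2(s)).sum())

def mutual_info(psi, frag, n):
    # I(S:F) = S_S + S_F - S_SF, with qubit 0 playing the role of the system S
    return entropy(psi, [0], n) + entropy(psi, frag, n) - entropy(psi, [0] + frag, n)

N = 6                                    # environmental qubits (assumed, small)
B = rng.uniform(-1.0, 1.0, size=N)       # CPDI couplings; for DPDI draw from {-1,-0.5,0.5,1}
psi0 = random_product_state(N + 1)

# diagonal energies E(z) = z_0 * sum_i B_i z_i, with z = +1 (|0>) or -1 (|1>)
z = 1 - 2 * ((np.arange(2 ** (N + 1))[:, None] >> np.arange(N, -1, -1)) & 1)
E = z[:, 0] * (z[:, 1:] * B).sum(axis=1)

t = 3.0                                  # a time well past the decoherence time ~ 1/4
psi_t = np.exp(-1j * E * t) * psi0

for m in range(0, N + 1):
    frags = list(combinations(range(1, N + 1), m))
    I = np.mean([mutual_info(psi_t, list(f), N + 1) for f in frags])
    print(f"fragment size {m}:  I(S:F) = {I:.3f}")
# Expect a rapid rise followed by a plateau near S(rho_S), cf. Fig. 3(a).
```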
2306.00115
A road to an elementary particle physics model with no Higgs -- I
This is the first of two companion papers where we prove that the recently discovered non perturbative mechanism capable of giving mass to elementary fermions, in the presence of weak interactions can also generate a mass for the $W$, and can thus be used as a viable alternative to the Higgs scenario. The non perturbative fermion and $W$ masses have the form $m_f\sim C_f(\alpha)\Lambda_{RGI}$ and $M_W\sim g_wc_w(\alpha)\Lambda_{RGI}$ with $C_f(\alpha)$ and $c_w(\alpha)$ functions of the gauge couplings, $g_w$ the weak coupling and $\Lambda_{RGI}$ the RGI scale of the theory. These parametric structures imply that a realistic model must include a new sector of massive fermions (Tera-fermions) subjected, besides Standard Model interactions, to some kind of super-strong gauge interactions (Tera-interactions) so that the RGI scale of the full theory, $\Lambda_T$, will be in the few TeV region. The extension of the model by introducing hypercharge and particles singlets under strong interactions (leptons and Tera-leptons) is the focus of the companion paper, where we also discuss some phenomenological implications of this approach. One can show that, upon integrating out the (heavy) Tera-degrees of freedom, the resulting low energy effective Lagrangian closely resembles the Standard Model Lagrangian. The argument rests on the conjecture that the 125 GeV resonance detected at LHC is a $W^+W^-/ZZ$ composite state, bound by Tera-particle exchanges, and not an elementary object. Although we restrict to the one family case, neglecting weak isospin splitting, this scheme has a certain number of merits with respect to the Standard Model. It offers a radical solution of the Higgs mass tuning problem as there is no Higgs. It allows identifying $\Lambda_T$ as the electroweak scale. It helps reducing the number of Standard Model parameters as elementary particle masses are determined by the dynamics.
Giancarlo Rossi
2023-05-31T18:42:48Z
http://arxiv.org/abs/2306.00115v2
# A road to an elementary particle physics model with no Higgs -- I

###### Abstract

This is the first of two companion papers in which we prove that the recently discovered non-perturbative mechanism capable of giving mass to elementary fermions can also generate a mass for the electro-weak bosons, when weak interactions are switched on, and can thus be used as a viable alternative to the Higgs scenario. We can show that the non-perturbatively generated fermion and \(W\) masses have the expression \(m_{f}\sim C_{f}(\alpha)\Lambda_{RGI}\) and \(M_{W}\sim g_{w}c_{w}(\alpha)\Lambda_{RGI}\) with \(C_{f}(\alpha)\) and \(c_{w}(\alpha)\) functions of the gauge couplings, \(g_{w}\) the weak coupling and \(\Lambda_{RGI}\) the RGI scale of the theory. In view of this parametric dependence we are led to argue that to get the top quark and electro-weak boson masses of the correct order of magnitude a realistic model must include an as yet unobserved sector of massive fermions (Tera-fermions) subjected, besides Standard Model interactions, to some kind of super-strong gauge interactions (Tera-interactions) so that the full theory (including Standard Model and Tera-particles) will have an RGI scale \(\Lambda_{RGI}\sim\Lambda_{T}\gg\Lambda_{QCD}\) in the few TeV range. The extension of the model with the introduction of hypercharge interactions and particles, singlets under strong interactions (leptons and Tera-leptons), is the object of the companion paper, where we also discuss some interesting phenomenological implications of this approach. Though limited in its scope (for the moment we restrict to the case of only one family, neglecting weak isospin splitting), the present approach offers a radical solution of the Higgs mass naturalness problem (as there is no fundamental Higgs), an understanding of the fermion mass hierarchy (as related to the ranking of the magnitude of the gauge couplings) and a physical interpretation of the electro-weak scale (as a fraction of the scale, \(\Lambda_{T}\), of a new interaction).

* 1 Introduction
* 1.1 NP mass generation mechanism
* 1.2 Comparing with the Standard Model
* 1.3 A few remarks
* 1.4 Outline of the paper
* 2 A toy-model
* 2.1 Symmetries
* 2.2 Chiral invariance
* 2.3 The QEL of the critical theory in the Wigner phase
* 2.4 NP mass generation in the NG phase
* 2.5 The QEL of the critical theory in the NG phase
* 3 Introducing weak and Tera-interactions
* 3.1 The critical theory
* 3.2 The critical conditions in the Wigner phase
* 3.3 The critical conditions in the Nambu-Goldstone phase
* 3.4 The NP emergence of elementary particles masses
* 3.5 The \(\zeta_{0}\) propagator
* 3.6 The NP masses of elementary particles
* 3.7 The critical QEL in the NG phase
* 3.8 Transversality of the \(W\) polarization amplitude
* 4 Integrating out Tera-degrees of freedom and the SM
* 5 Universality
* 5.1 Rescuing universality?
* 6 Conclusions and Outlook
* A Symmetries, currents and tuning
* A.1 Compatibility between _R_ight and _L_eft tuning conditions
* B The QEL of the critical model in the NG phase
* C The \(\zeta_{0}\) critical propagator
* D Transversality of \(W\) polarization amplitude

## 1 Introduction

The authors of ref. [1] have introduced an interesting field theoretical renormalizable model where an SU(2) fermion doublet, subjected to non-abelian gauge interactions of the QCD type, is coupled to a complex scalar field via a \(d=4\) Yukawa term and an "irrelevant" \(d>4\) Wilson-like operator.
Notwithstanding the fact that both terms break chiral invariance, it was shown in [1], and numerically checked in [2] in a set of dedicated lattice simulations, that there exists a critical value of the Yukawa coupling where chiral symmetry is recovered (up to effects formally vanishing when the UV cut-off is removed). The remarkable fact about this model is that in the Nambu-Goldstone (NG) phase of the critical theory non-perturbative (NP) O(\(\Lambda_{\rm RGI}\)) masses for the elementary fermions get dynamically generated. ### NP mass generation mechanism NP masses emerge as a consequence of a sort of "interference" between residual UV chiral breaking effects left behind at the critical value of the Yukawa coupling in the NG phase of the theory, and IR features triggered by the phenomenon of spontaneous breaking of (the recovered) chiral symmetry which standardly takes place in a strongly interacting theory. A detailed analysis of this subtle field theoretical interplay shows that the non perturbatively (NP-ly) generated elementary fermion masses have the parametric expression [1; 3] \[m_{f}\sim C_{f}(\alpha)\Lambda_{\rm RGI}\,, \tag{1}\] where the coefficient \(C_{f}(\alpha)\) is a function of the gauge coupling constant and \(\Lambda_{\rm RGI}\) is the RGI scale of the theory. If we take the "irrelevant" chiral breaking Wilson-like term to be a \(d=6\) operator (as we do for illustrative purposes in eqs. (1) and (1) below), one finds at the lowest loop order \(C_{f}(\alpha)={\rm O}(\alpha^{2})\). A far reaching phenomenological implication of the formula (1), when referred to the top quark, is that the relation \(m_{\rm top}\sim{\rm O}(\Lambda_{\rm RGI})\) requires \(\Lambda_{\rm RGI}\gg\Lambda_{QCD}\) and of the order of a few TeVs if eq. (1) has to reproduce the correct (order of magnitude of the) top mass experimental value. We are thus led to conjecture that a new sector of super-strongly interacting particles 1, gauge-invariantly coupled to standard matter, needs to exist so that the complete theory, encompassing the new and the Standard Model (SM) particles, will have an RGI scale (we call it \(\Lambda_{T}\)) much larger than \(\Lambda_{QCD}\) and in the TeV region [4; 5; 6; 7; 8]. Footnote 1: To avoid any misunderstanding we explicitly remark that here "super" has nothing to do with supersymmetry. We want to immediately remark that the need for assuming the existence of a super-strongly interacting sector here is very different from the reason why Technicolor was introduced in [9; 10]. Technicolor was invoked to provide mass to the electroweak (EW) bosons in the first place, while in the present approach super-strong interactions are introduced just to give the right order of magnitude to the dynamically generated top quark mass and, as we will show in sect. 3.4, also to the \(W\) mass. There is another important difference with respect to Technicolor that we want to point out. As we shall discuss in the next section, a key feature of our model is that (irrespective of the value of the Yukawa coupling) the Lagrangian enjoys an exact symmetry protecting elementary particle masses against power divergent quantum corrections (unlike what happens in Wilson lattice QCD (WLQCD) where the fermion mass is affected by a linearly divergent power correction [11]). The same exact symmetry makes all fermion-anti-fermion vev's (condensates) to vanish, thus preventing the possibility of using them to generate masses. 
In any case, as suggested by Glashow [12], in the following to avoid confusion, we will refer to these new super-strong interactions as Tera-interactions and to the new set of particles as Tera-particles. In the present paper we show that the model proposed in [1] can be naturally extended to incorporate weak interactions and Tera-particles. The \(W\) bosons as well as the Tera-fermions will acquire a NP mass \(\mathrm{O}(\Lambda_{\mathrm{RGI}})\) via the same mechanism that leads to eq. (1). For the \(W\) mass one gets (see sect. 3, and refs. [4; 5] for an early formulation of the extended model) \[M_{W}\sim g_{w}c_{w}(\alpha)\Lambda_{\mathrm{RGI}}\,, \tag{2}\] where \(g_{w}\) is the weak coupling and \(\alpha\) is a short-hand for the set of gauge couplings \([\alpha_{w},\alpha_{s},\alpha_{T}]\) with \(\alpha_{w}\), \(\alpha_{s}\) and \(\alpha_{T}\) referring to weak, strong and Tera-strong gauge interactions, respectively. In the following we will denote by \(\Lambda_{T}\) the RGI scale of the whole theory where the subscript \(T\) is to remind us that we are including Tera-particles. The parametric expression of \(c_{w}(\alpha)\) and that of the NP Tera-fermion mass can be found in sect. 3.6. In this first paper we limit ourselves to discuss how the model of ref. [1] can be extended to incorporate weak interactions and Tera-particles. We point out that the mass estimates we present in this work are intended to refer to the particles of the heaviest family. The reason is that in this paper we will ignore corrections related to the running of the quark masses. Since renormalization effects are less important for heavier masses we expect the gauge coupling dependence provided by eq. (1) to be phenomenologically more accurate than for lighter families. In the companion paper [13] (hereafter referred to as (II)), besides taking up a number of technical issues left aside here, we address the fundamental question of adding hypercharge interactions and particles singlets under strong interactions (leptons and Tera-leptons), and we present some interesting phenomenological implications of the model. ### Comparing with the Standard Model Eqs. (1)-(2) are very similar to the expression of the SM Higgs-like mass of fermions and \(W\)'s, respectively, with however two fundamental differences. The first is that the scale of the masses is not the vev of the Higgs field (whose value in the SM is determined by fitting the \(W\) mass), but a dynamical quantity associated to a new interaction. The second, related difference is that the values of the Yukawa couplings, which in the SM are tuned by hand to match the phenomenological values of quark masses (and, if present, also of lepton masses, see (II)) here are not free parameters. They are in principle calculable and are controlled by the magnitude of the gauge coupling of the strongest among the interactions the particle feels. The peculiar dependence of eqs. (1)-(2) upon the gauge couplings offers a hint to understand the generic fermion mass hierarchy \(m_{lept}\ll m_{quark}\ll m_{Tera}\), as being a consequence of the natural ranking among hypercharge, strong and Tera-strong gauge couplings, \(\alpha_{Y}\!\ll\!\alpha_{s}\!\ll\!\alpha_{T}\). 
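To make the hierarchy argument explicit, recall that at the lowest loop order \(C_{f}(\alpha)=\mathrm{O}(\alpha^{2})\) for a \(d=6\) Wilson-like term, with \(\alpha\) the coupling of the strongest interaction the fermion feels. Equation (1) then yields the purely parametric ranking \[m_{lept}:m_{quark}:m_{Tera}\;\sim\;\alpha_{Y}^{2}:\alpha_{s}^{2}:\alpha_{T}^{2}\qquad(\text{in units of }\Lambda_{T})\,,\] so that the ordering \(\alpha_{Y}\ll\alpha_{s}\ll\alpha_{T}\) translates directly into \(m_{lept}\ll m_{quark}\ll m_{Tera}\). This is only an order-of-magnitude illustration: all O(1) coefficients and running effects are omitted here.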
From the above discussion we conclude that the general NP scenario for elementary particle mass generation we are describing in this investigation can be considered as a valid alternative to the Higgs mechanism, with the extra advantage that lacking a fundamental Higgs, we will not have to worry about the fine tuning problem of the Higgs mass. We might, however, need to reassess the question of the metastability of the Universe. A second conceptual bonus is that we have a natural interpretation of the magnitude of the electroweak scale (EW) scale, as (a fraction of the) the dynamical physical parameter, \(\Lambda_{T}\). ### A few remarks There is a number of further interesting features and implications of the approach we are describing in this work that are worth mentioning. First of all, lacking the need for the existence of a fundamental Higgs boson for elementary particle mass generation, we are in the obligation of finding a convincing interpretation for the 125 GeV resonance identified at LHC [14; 15]. We conjecture that this particle is not a fundamental object. We propose to interpret it as a \(W^{+}W^{-}/ZZ\) composite state, bound by exchanges of Tera-particles which, being charged under EW interactions, can couple to EW bosons. If the above interpretation is correct, this scalar particle must be incorporated in the low energy effective Lagrangian (LEEL) of the theory that one gets by integrating out the heavy Tera-degrees of freedom (Tera-dof's) since its experimental mass is much smaller than the \(\Lambda_{T}\) scale. Not unexpectedly, at \((\text{momenta})^{2}\ll\Lambda_{T}^{2}\) the resulting \(d=4\) piece of the LEEL of the model is seen to resemble very much the SM Lagrangian. We shall discuss this issue in more detail in sect. 4. Secondly, as it was shown in ref. [16], with a reasonable choice of the elementary particle content, a realistic theory extending the SM with the inclusion of the new Tera-sector leads to gauge coupling unification at a \(\sim 10^{18}\) GeV scale. This somewhat large unification scale is likely to yield a proton life-time comfortably longer than the present bound \(\tau_{\text{prot}}>1.7\times 10^{34}\) years [17]. We end by observing that, as briefly discussed in sect. 5, the problem of understanding the multiplicity of families (and possibly cope with weak isospin splitting) is related to the questions of the universality of the NP mass generation mechanism we are advocating in this work, and the predicting power of the present approach. Both issues are under active investigation. ### Outline of the paper The outline of this first paper is as follows. In sect. 2 for completeness and to set our notations we review the features of the model introduced in ref. [1]. In sect. 3 we discuss how to extend it with the introduction of weak and Tera-interactions. In sect. 4 we compare the LEEL one obtains by integrating out the Tera-dof's with the SM Lagrangian. In sect. 5 we touch the issue of universality and predictive power. A few conclusions and an outlook of our future lines of investigation can be found in sect. 6. In Appendix A we recall the procedure that leads to the equations determining the values of the Lagrangian parameters which specify the critical theory. In Appendix B we justify the form that the quantum effective Lagrangian (QEL) functional of the theory 2 takes in the NG phase at the critical point. In Appendix C we discuss the form of the effective scalar propagator in the critical limit. 
The way in which the transversality property of the 1PI \(W\) polarization amplitude is implemented in the present NP approach to mass generation is explained in Appendix D. Footnote 2: By QEL we mean the generating functional of the 1PI vertices of the theory from which one can directly extract the full quantum information of the model. To be precise in this paper we make a distinction between the QEL functional that represents the Legendre transform of the partition function and the low energy effective Lagrangian (LEEL) of the theory, valid at small momenta, that is obtained by integrating out degrees of freedom heavier than the chosen momentum scale. ## 2 A toy-model For the reader's convenience we summarize in this section the results of refs. [1; 3]. The simplest situation in which the NP mass generation mechanism outlined in the Introduction takes place is realized in a field theory where an SU(2) fermion doublet, subjected to non-abelian gauge interactions of the QCD type, is coupled to a complex scalar field via a \(d=4\) Yukawa term and an "irrelevant" \(d=6\) Wilson-like operator [11], which both break chiral invariance. The Lagrangian of this "toy-model" reads \[\mathcal{L}_{\text{toy}}(q,A;\Phi)=\mathcal{L}_{K}(q,A;\Phi)+ \mathcal{V}(\Phi)+\mathcal{L}_{Y}(q;\Phi)+\mathcal{L}_{Wil}(q,A;\Phi) \tag{1}\] \[\bullet\,\mathcal{L}_{K}(q,A;\Phi)=\frac{1}{4}F^{A}\!\cdot\!F^{A }+\left(\bar{q}_{L}\mathcal{D}^{A}q_{L}+\bar{q}_{R}\mathcal{D}^{A}q_{R}\right) +\frac{1}{2}\text{Tr}\left[\partial_{\mu}\Phi^{\dagger}\partial_{\mu}\Phi\right]\] (2) \[\bullet\,\mathcal{V}(\Phi)=\frac{\mu_{0}^{2}}{2}\text{Tr}\left[ \Phi^{\dagger}\Phi\right]+\frac{\lambda_{0}}{4}\big{(}\text{Tr}\left[\Phi^{ \dagger}\Phi\right]\big{)}^{2}\] (3) \[\bullet\,\mathcal{L}_{Y}(q;\Phi)=\eta\left(\bar{q}_{L}\Phi q_{R}+ \bar{q}_{R}\Phi^{\dagger}q_{L}\right)\] (4) \[\bullet\,\mathcal{L}_{Wil}(q,A;\Phi)=\frac{b^{2}}{2}\rho\left( \bar{q}_{L}\overleftarrow{\mathcal{D}}^{A}_{\mu}\Phi\mathcal{D}^{A}_{\mu}q_{ R}+\bar{q}_{R}\overleftarrow{\mathcal{D}}^{A}_{\mu}\Phi^{\dagger}\mathcal{D}^{A}_{ \mu}q_{L}\right), \tag{5}\] where \(q_{L}=(u_{L},d_{L})^{T}\) and \(q_{R}=(u_{R},d_{R})^{T}\) are fermion isodoublets and \(\Phi=\varphi_{0}\mathds{1}+i\varphi_{j}\tau^{j}=[-i\tau_{2}\varphi^{\star}|\,\varphi]\) is a \(2\times 2\) matrix with \(\varphi=(\varphi_{2}-i\varphi_{1},\varphi_{0}-i\varphi_{3})^{T}\) a complex scalar doublet, singlet under color SU(\(N_{c}\)) gauge transformations. The length scale \(b\) is the inverse of the UV cutoff, \(b^{-1}\sim\Lambda_{UV}\), \(\eta\) is the Yukawa coupling, \(\rho\) is introduced to keep track of \(\mathcal{L}_{Wil}\) and \(\mathcal{D}^{A}_{\mu}\) is the gauge-covariant derivative. A few remarks are in order here. First of all, we want to stress that the distinctive feature of the Lagrangian (1) is the presence of the "irrelevant" \(d>4\) chiral breaking Wilson-like term. Secondly, because of its presence we have immediately introduced a Yukawa term in the fundamental Lagrangian as it will be anyway generated by loop corrections. Finally, we observe that \(\mathcal{L}_{\text{toy}}\) is power counting renormalizable because the \(d=6\) Wilson-like operator appears in the Lagrangian multiplied by two inverse powers of the UV cutoff. It is worth recalling that a similar situation occurs in WLQCD where the \(d=5\) Wilson term in the lattice action is multiplied by one power of the lattice spacing, thus making the lattice action renormalizable and amenable to Monte Carlo simulations [11]. 
### Symmetries Among other obvious symmetries, \(\mathcal{L}_{\rm toy}\) is invariant under the (global) transformations \(\chi_{L}\times\chi_{R}\) involving fermions and scalars given by (\(\Omega_{L/R}\in\mathrm{SU}(2)\)) \[\chi_{L}\times\chi_{R}=[\tilde{\chi}_{L}\times(\Phi\to\Omega_{L} \Phi)]\times[\tilde{\chi}_{R}\times(\Phi\to\Phi\Omega_{R}^{\dagger})] \tag{6}\] \[\tilde{\chi}_{L}:q_{L}\to\Omega_{L}q_{L}\,,\qquad\qquad\bar{q}_{ L}\to\bar{q}_{L}\Omega_{L}^{\dagger}\] (7) \[\tilde{\chi}_{R}:q_{R}\to\Omega_{R}q_{R}\,,\qquad\qquad\bar{q}_{ R}\to\bar{q}_{R}\Omega_{R}^{\dagger} \tag{8}\] The exact \(\chi_{L}\times\chi_{R}\) symmetry can be realized either _a la_ Wigner or _a la_ NG depending on the shape of the scalar potential, \(\mathcal{V}(\Phi)\). In any case, as we mentioned in the Introduction, no linearly divergent fermion mass can be generated by quantum corrections because the mass operator \(\bar{q}_{L}q_{R}+\bar{q}_{R}q_{L}\) is not invariant under \(\chi_{L}\times\chi_{R}\). Obviously, this also means that no fermion-anti-fermion vev's of the kind \(\langle\bar{q}_{L}q_{R}+\bar{q}_{R}q_{L}\rangle\) (condensates) can ever be non-vanishing. ### Chiral invariance For generic values of \(\eta\) (at fixed \(\rho\)), the Lagrangian \(\mathcal{L}_{\rm toy}\) is not invariant under the fermionic chiral transformations \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) of eqs. (7)-(8), because of the presence of the (chiral breaking) operators \(\mathcal{L}_{Wil}\) and \(\mathcal{L}_{Y}\). Nevertheless, invariance under \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) can be recovered (up to \(\mathrm{O}(b^{2})\) terms) by enforcing the conservation of the corresponding currents. This can be seen to occur [1; 3] at a "critical" value of the Yukawa coupling, \(\eta_{cr}\), where the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) rotations of the Yukawa and Wilson-like operators, that separately break current conservation, "compensate" each other. The situation here is very similar to what happens in WLQCD where fermionic chiral symmetry is recovered (up to negligibly small \(\mathrm{O}(a)\) cutoff effects) by tuning the bare quark mass to a critical value, \(m_{cr}\), at which the chiral rotations of the Wilson and the mass operators "compensate" each other yielding the conservation of the chiral currents [11; 18]. As remarked above, a key difference between the model (1) and WLQCD is, however, that in WLQCD no symmetry exists that can prevent the appearance of a linearly divergent quantum correction to the fermion mass, unlike what happens in the case of the model (1). Naturally, as was done in ref. [2], the computation of the critical value of the Yukawa coupling need to be performed in a non-perturbative way, similarly to what is currently done in WLQCD in the calculation of the critical mass. Nevertheless, in order to get a feeling of what this criticality condition implies, it is interesting to see what happens at the lowest loop order. \(\bullet\) In the Wigner phase (where \(\langle|\Phi|^{2}\rangle=0\)) the condition enforcing the conservation of the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) currents (derived in [1] along the lines of the strategy worked out in [18] and summarized in Appendix A) gives at 1-loop \[\eta_{cr}^{(1-\mathrm{loop})}(\alpha_{s},\rho)=\rho\,\alpha_{s}N_{c}\eta_{1}\,, \tag{9}\] where \(\eta_{1}\) is a computable coefficient. Eq. (9) implies the vanishing of the effective Yukawa coupling in the QEL of the theory. 
The lowest loop order mechanism underlying this cancellation is sketched in fig. 1. \(\bullet\) In the NG phase (where \(\langle|\Phi|^{2}\rangle\neq 0\) and at tree-level equal to \(v^{2}=|\mu_{0}^{2}|/\lambda_{0}\)), the \(\eta\) criticality condition leads at the lowest loop order to the vanishing of the sum of diagrams shown in fig. 2. The figure is obtained from fig. 1 by setting the scalar field equal to its vev. We thus see that, remarkably, enforcing \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) invariance implies that in the QEL of the critical theory precisely the Higgs-like mass of the fermion gets cancelled out. The possibility of recovering \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) invariance rests on the crucial fact that the quadratic divergency of the loop integral in fig. 1, as well as in fig. 2, is exactly compensated by the \(b^{2}\) factor coming from the insertion of the Wilson-like box, leaving behind a finite result. This balance of UV and IR effects (which has its ultimate basis on dimensional grounds) should not comes as a surprise when irrelevant operators are present in the Lagrangian. Indeed, compensations of this kind also occur in WLQCD and are, for instance, at the origin of the fact that the (partially) conserved axial and vector flavour currents suffer a finite renormalization 3. In the following we will encounter other instances of this sort of "UV vs. IR compensation". Footnote 3: Actually in WLQCD in the vector case one can construct a current that, being exactly conserved at any value of the lattice spacing, does not need any renormalization. We end this section by explicitly noting that (owing to the cancellation pictorially illustrated at the lowest loop order in fig. 2) in the present approach \(\Phi\) is not going to play the role the Higgs field does in the SM, rather its presence should be viewed as an effective, simple way to model a \(\chi_{L}\times\chi_{R}\) invariant UV completion of \({\cal L}_{\rm toy}\). ### The QEL of the critical theory in the Wigner phase In this section we provide the form that the QEL takes in the Wigner phase at the critical value of the Yukawa coupling. As we said, enforcing invariance under \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) transformations means that \(\tilde{\chi}\)-violating operators will be absent from the QEL of the critical theory. Figure 1: The lowest loop order mechanism behind the cancellation of the Yukawa vertex in the Wigner phase occurring at \(\eta=\eta_{cr}\). Grey disc and box stand for the insertion of the Yukawa and Wilson-like vertices, respectively. The solid line represents the propagation of a fermion, the curly line of a gluon and the dotted line of a scalar. \(R\) and \(L\) under the fermion line denote chirality. Figure 2: The lowest loop order mechanism behind the cancellation of the Higgs-like mass of the quark in the NG phase, occurring at \(\eta=\eta_{cr}\). The \(d\leq 4\) piece of the QEL of the critical model in the Wigner phase will then have the simple form \[\Gamma^{Wig}_{4\,cr}=\frac{1}{4}F^{A}\!\cdot\!F^{A}+\left(\bar{q}_{L}{\cal D}^{A} q_{L}+\bar{q}_{R}{\cal D}^{A}q_{R}\right)+\frac{1}{2}{\rm Tr}\left[\partial_{\mu} \Phi^{\dagger}\partial_{\mu}\Phi\right]+{\cal V}(\Phi)\,, \tag{10}\] from which we see that scalars are completely decoupled from fermions. We should observe that in the Wigner phase the critical theory is kind of "unstable". 
In fact, strictly speaking, at the critical point no seed exists that can trigger the phenomenon of the spontaneous breaking of the (restored) \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) chiral symmetry. However, any chiral breaking disturbance, no matter how small, would make spontaneous breaking to take place [19] with the vacuum choosing its orientation. ### NP mass generation in the NG phase The situation one encounters in the NG phase of the critical theory is totally different. We shall see that, despite the fact that in the NG phase the Higgs-like mass of the fermion gets cancelled (see fig. 2), a non-vanishing NP fermion mass term emerges. Because of the subtle UV vs. IR interplay that we have seen is at work in the model, in order to get some understanding of how a non-vanishing NP fermion mass can emerge, it is crucial to start the discussion by determining the structure of the possible operators (vertices) of NP origin, formally of O(\(b^{2}\)), arising in the regularized theory. The analysis can be properly done by making use of the Symanzik expansion technique [20; 21]. A study of this kind was carried out in ref. [1] where it was shown that, as a consequence of the occurrence of the spontaneous breaking of the (restored) \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) symmetry (which is in turn triggered by residual O(\(b^{2}v\)) chiral breaking terms surviving at \(\eta_{cr}\) in the NG phase), NP-ly generated Symanzik operators of O(\(b^{2}\Lambda_{s}\)) emerge. These NP effects can be described and included in the theory if, following refs. [20; 21], we allow ourself to work with the "augmented" Lagrangian \[{\cal L}_{\rm toy}\to{\cal L}_{\rm toy}+\Delta{\cal L}_{NP}\,, \qquad\Delta{\cal L}_{NP}=\gamma_{\bar{q}q}(\alpha_{s})O_{6,\bar{q}q}+\gamma_ {AA}(\alpha_{s})O_{6,AA}\,, \tag{11}\] \[O_{6,\bar{q}q}=b^{2}\Lambda_{s}|\Phi|\left(\bar{q}_{L}\,{\cal D}^ {A}q_{L}+\bar{q}_{R}\,{\cal D}^{A}q_{R}\right),\qquad O_{6,AA}=b^{2}\Lambda_{s }|\Phi|\,F^{A}\!\cdot\!F^{A}\,, \tag{12}\] where the coefficients \(\gamma_{\bar{q}q}(\alpha_{s})\) and \(\gamma_{FF}(\alpha_{s})\) are functions of the gauge coupling. It was shown in ref. [1] that at lowest order both coefficients are O(\(\rho\,\alpha_{s}\)), if a \(d=6\) Wilson-like term, as in eq. (5), appears in the fundamental Lagrangian. Complementary arguments supporting the emergence of such NP operators can be found in Appendix B of (II). The structure of the operators \(O_{6,\bar{q}q}\) and \(O_{6,AA}\) is completely dictated by symmetries (in particular \(\chi_{L}\times\chi_{R}\)) and dimensional considerations. The presence of the \(\Lambda_{s}\) factor signals their NP origin. According to the rules of the Symanzik analysis [20; 21], introducing \({\cal L}_{\rm toy}+\Delta{\cal L}_{NP}\) should be seen as a way of bookkeeping the NP operator insertions occurring in the correlators of the fundamental Lagrangian. Extending in this way the allowed set of diagrams is necessary to fully describe all the NP features of the theory in the NG phase. In fact, though formally of O(\(b^{2}\)), the insertions of \(\Delta{\cal L}_{NP}\) cannot be ignored, because, as we have repeatedly said, explicit \(b^{2}\) factors can be compensated by UV power divergencies in loop integrals, eventually leading to finite NP contributions to correlators. The remarkable fact about the "augmented" Lagrangian (11) is that it generates among others, new diagrams of NP origin (surviving in the \(b\to 0\) limit) that contribute to the quark self-energy. 
Two instances of such diagrams are drawn in fig. 3. An explicit calculation of the amputated zero momentum diagram shown in the right panel of fig. 3 gives to the effective fermion mass the finite contribution 4 Footnote 4: The vertical dotted line means amputation of the external leg. We have nevertheless left explicit the outgoing lines to help the reader recognizing the particle one is referring to. \[m_{q}^{eff}\!\!\propto \,\alpha_{s}^{2}\,\text{Tr}\!\int^{1/b}\!\frac{d^{4}k}{k^{2}}\frac {\gamma_{\mu}k_{\mu}}{k^{2}}\!\int^{1/b}\!\frac{d^{4}\ell}{\ell^{2}+m_{\zeta_{0 }}^{2}}\frac{\gamma_{\nu}(k+\ell)_{\nu}}{(k+\ell)^{2}}\,b^{2}\gamma_{\rho}(k+ \ell)_{\rho}\,b^{2}\Lambda_{s}\gamma_{\lambda}(2k+\ell)_{\lambda}\!\sim \tag{13}\] \[\sim \,\alpha_{s}^{2}\Lambda_{s}\,,\] where one factor of \(\alpha_{s}\) comes from the gluon connecting a standard QCD-like vertex with the Wilson-like box and a second factor from the insertion of the \(O_{6,\bar{q}q}\) operator in eq. (12) with the coefficient \(\gamma_{\bar{q}q}\) taken at its lowest order. In the integrand of eq. (13) \(m_{\zeta_{0}}^{2}=2|\mu_{0}^{2}|\) is the (square) mass of the singlet \(\zeta_{0}\) which is standardly defined by the polar decomposition \[\Phi=RU\,,\quad R=v+\zeta_{0}\,, \tag{14}\] \[U=\exp[i\vec{\tau}\,\vec{\zeta}/c\Lambda_{s}]\,, \tag{15}\] with the (arbitrary) scale in the exponential conveniently chosen with an eye to the QEL in eq. (16) below, i.e. so as to have the effective NG fields \(\zeta^{i},i=1,2,3\) canonically normalized. Obviously \(U\) can only be defined in the NG phase of the theory where \(\langle|\Phi|^{2}\rangle\!\neq\!0\). We conclude this section with an important, technical remark concerning the argument we have developed to arrive at the discovery of the existence of a NP fermionic effective mass. To simplify the line of reasoning above we have provisionally ignored the fact that in the operators (12) it is \(|\Phi|\) (i.e. the modulus of \(\Phi\)) that appears and not \(\Phi\) itself. This implies that in the diagrams of fig. 3 after the \(\zeta_{0}\) contraction a "dandling" \(U\) factor is left out exiting the Wilson-like box (recall eq. (14)). We did not show this field dependence in fig. 3, because for the calculation of the mass one can set the external field \(U\) equal to the identity. However, the complete \(U\) dependence of these diagrams needs to be taken into account to be able to generate a fully \(\chi_{L}\times\chi_{R}\) invariant fermion mass operator (like the ones in the second line of eq. (16) below). Figure 3: Two lowest loop order NP quark self-energy diagrams. The blobs represent insertions of the operators \(O_{6,FF}\) and \(O_{6,\bar{q}q}\), respectively, and the dotted line the propagation of the singlet scalar, \(\zeta_{0}\), defined in eq. (14) below. The vertical lines mean amputation. The rest of the notations is as in fig. 1. ### The QEL of the critical theory in the NG phase The existence of a NP fermion mass term, the occurrence of which in the NG phase of the model (1) was successfully checked in ref. [2] by explicit lattice simulations, can be incorporated in the QEL, \(\Gamma^{NG}_{cr}\), that describes the physics of the theory, with the help of the non-analytic field \(U\) introduced in eq. (14). Owing to the fact that \(U\) has the same transformation properties under \(\chi_{L}\!\times\!\chi_{R}\) as \(\Phi\) does, new operators (with respect to those appearing in the critical QEL of the Wigner phase, eq. 
(10)) invariant under \(\chi_{L}\!\times\!\chi_{R}\) can be constructed, leading for the \(d\leq 4\) piece of \(\Gamma^{NG}_{cr}\) to the expression \[\Gamma^{NG}_{4\,cr} =\!\frac{1}{4}F^{A}\!\cdot\!F^{A}\!+\!\left(\bar{q}_{L}\mathcal{D} ^{A}q_{L}\!+\!\bar{q}_{R}\mathcal{D}^{A}q_{R}\right)\!+\] \[+c_{q}\Lambda_{s}\big{(}\bar{q}_{L}Uq_{R}\!+\!\bar{q}_{R}U^{ \dagger}q_{L}\big{)}\!+\!\frac{c^{2}\Lambda_{s}^{2}}{2}\mathrm{Tr}\left[\partial _{\mu}U^{\dagger}\partial_{\mu}U\right]\!+\ldots \tag{16}\] with \(c_{q}\) and \(c\) functions of the gauge coupling. The form of \(\Gamma^{NG}_{4\,cr}\) is purely geometrical. It is constrained by dimensional and symmetry arguments and the observation that at \(\eta=\eta_{cr}\) the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) breaking Yukawa term should not be present. The expression (16) is obtained by including all the operators of dimension \(d\leq 4\), invariant under \(\chi_{L}\times\chi_{R}\) that can be constructed in terms of \(q\), \(\bar{q}\), \(A_{\mu}\) and \(U\). Dots in (16) are there to recall us that actually there exist other operators invariant under \(\chi_{L}\!\times\!\chi_{R}\) that we have not reported. At \(d=4\) we have the scalar kinetic term \(\frac{1}{2}\mathrm{Tr}\left[\partial_{\mu}\Phi^{\dagger}\partial_{\mu}\Phi\right]\) as well as the operator \(\Lambda_{s}R\,\mathrm{Tr}\left[\partial_{\mu}U^{\dagger}\partial_{\mu}U\right]\) and the scalar potential \(\mathcal{V}(\Phi)\). However, as we shall see in the next section, in the presence of weak interactions, the restoration of \(\tilde{\chi}_{L}\!\times\!\tilde{\chi}_{R}\) makes the first two operators to disappear from the QEL. At the same time, as we show in Appendix B, at the critical point the effective singlet field \(R=v+\zeta_{0}\) (eq. (14)) becomes infinitely massive and decouples. Naturally, the third term in the r.h.s. of eq. (16) (which is not invariant under \(\tilde{\chi}_{L}\!\times\!\tilde{\chi}_{R}\) transformations) is of special interest because, upon expanding \(U=\!\mathrm{1}\!\mathrm{l}+i\vec{\tau}\,\vec{\zeta}/c\Lambda_{s}+\ldots\) (see eq. (15)), it gives rise to a mass for the fermion plus a wealth of NG boson interactions. We observe that, despite the fact that the tuning of the Yukawa coupling enforces the conservation of the chiral \(\tilde{\chi}_{L}\!\times\!\tilde{\chi}_{R}\) currents, we have discovered that peculiar mass terms are dynamically generated which represent NP obstructions to the exact realization of chiral symmetry 5. A suggestive way to describe the nature of this dynamically generated mass term is to say that the latter appears as a "NP anomaly" preventing the full recovery of the chiral \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) symmetry. Footnote 5: Actually also in QCD mass terms break chiral symmetry. The difference is that, while in QCD they can be put to zero by hand, here masses are NP effects that cannot be eliminated, as they arise as soon as \(\rho\neq 0\) no matter how small \(|\rho|\) might be. ## 3 Introducing weak and Tera-interactions To proceed to the construction of a realistic model for elementary particles the next step is to introduce weak interactions. At the same time, as mentioned in the Introduction, it is also necessary to extend the model by incorporating a super-strongly interacting sector, so that the whole theory will have an RGI scale, \(\Lambda_{T}\), much larger than \(\Lambda_{QCD}\) and, to match phenomenology, of the order of a few TeVs. Only in this way eqs. 
(1) and (2) can possibly yield the correct order of magnitude of the top quark and \(W\) boson mass, respectively. The desired extension of the model is obtained by doubling the structure of fermions to encompass Tera-particles (\(Q\!=\!\) Tera-quarks and \(G\!=\!\) Tera-gluons) and introducing weak interactions by gauging the exact \(\chi_{L}\) symmetry. The resulting Lagrangian will have the expression \[\mathcal{L}(q,Q;A,G,W;\Phi)\!=\] \[\qquad=\mathcal{L}_{K}(q,Q;A,G,W;\Phi)+\mathcal{V}(\Phi)+ \mathcal{L}_{Y}(q,Q;\Phi)+\mathcal{L}_{Wil}(q,Q;A,G,W;\Phi) \tag{3.1}\] \[\bullet\ \mathcal{L}_{K}(q,Q;A,G,W;\Phi)=\frac{1}{4}\Big{(}F^{A} \cdot F^{A}+F^{G}\cdot F^{G}+F^{W}\cdot F^{W}\Big{)}+\] \[\qquad+\big{(}\bar{q}_{L}\mathcal{D}^{AW}q_{L}\!+\!\bar{q}_{R} \mathcal{D}^{A}q_{R}\big{)}\!+\!\big{(}\bar{Q}_{L}\mathcal{D}^{AW}Q_{L}\!+\! \bar{Q}_{R}\mathcal{D}^{AG}Q_{R}\big{)}\!+\!\frac{k_{b}}{2}\mathrm{Tr}\left[( \mathcal{D}_{\;\mu}^{\;W}\Phi)^{\dagger}\mathcal{D}_{\mu}^{W}\Phi\right]\] (3.2) \[\bullet\ \mathcal{V}(\Phi)=\frac{\mu_{0}^{2}}{2}k_{b}\mathrm{Tr}\left[ \Phi^{\dagger}\Phi\right]+\frac{\lambda_{0}}{4}\big{(}k_{b}\mathrm{Tr}\left[ \Phi^{\dagger}\Phi\right]\big{)}^{2}\] (3.3) \[\bullet\ \mathcal{L}_{Y}(q,Q;\Phi)=\eta_{q}\left(\bar{q}_{L}\Phi\,q_{R}+ \bar{q}_{R}\Phi^{\dagger}q_{L}\right)+\eta_{Q}\left(\bar{Q}_{L}\Phi\,Q_{R}+ \bar{Q}_{R}\Phi^{\dagger}Q_{L}\right)\] (3.4) \[\bullet\ \mathcal{L}_{Wil}(q,Q;A,G,W;\Phi)=\frac{b^{2}}{2}\rho_{q} \left(\bar{q}_{L}\overleftarrow{\mathcal{D}}_{\;\mu}^{\;AW}\Phi\mathcal{D}_{ \mu}^{A}q_{R}+\bar{q}_{R}\overleftarrow{\mathcal{D}}_{\;\mu}^{\;A}\Phi^{ \dagger}\mathcal{D}_{\mu}^{AW}q_{L}\right)+\] \[\qquad+\frac{b^{2}}{2}\rho_{Q}\left(\bar{Q}_{L}\overleftarrow{ \mathcal{D}}_{\;\mu}^{\;AGW}\Phi\mathcal{D}_{\mu}^{AG}Q_{R}+\bar{Q}_{R} \overleftarrow{\mathcal{D}}_{\;\mu}^{\;AG}\Phi^{\dagger}\mathcal{D}_{\mu}^{ AGW}Q_{L}\right) \tag{3.5}\] with obvious notations for the covariant derivatives. In the case of the Lagrangian (3.1) the previous form of the \(\chi_{L}\times\chi_{R}\) (eq. (2.6)) and \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) (eqs. (2.7)-(2.8)) transformations, besides the obvious extension necessary to let them act also on Tera-fermions, need to be modified in the presence of \(W\) bosons to insure invariance of the fermionic kinetic terms under \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\). The analog of the set of eqs. (2.6), (2.7) and (2.8) then reads (\(\Omega_{L/R}\!\in\!\)SU(2)) \[\chi_{L}\times\chi_{R}=\left[\tilde{\chi}_{L}\times(\Phi\to \Omega_{L}\Phi)\right]\times\left[\tilde{\chi}_{R}\times(\Phi\to\Phi\,\Omega_ {R}^{\dagger})\right], \tag{3.6}\] \[\tilde{\chi}_{L}:\left\{\begin{array}{ll}q_{L}\to\Omega_{L}q_{L }&\bar{q}_{L}\to\bar{q}_{L}\Omega_{L}^{\dagger}\\ Q_{L}\to\Omega_{L}Q_{L}&\bar{Q}_{L}\to\bar{Q}_{L}\Omega_{L}^{\dagger}\\ W_{\mu}\to\Omega_{L}W_{\mu}\Omega_{L}^{\dagger}\end{array}\right.\] (3.7) \[\tilde{\chi}_{R}:\left\{\begin{array}{ll}q_{R}\to\Omega_{R}q_{R }&\bar{q}_{R}\to\bar{q}_{R}\Omega_{R}^{\dagger}\\ Q_{R}\to\Omega_{R}Q_{R}&\bar{Q}_{R}\to\bar{Q}_{R}\Omega_{R}^{\dagger}\end{array}\right. \tag{3.8}\] ### The critical theory Besides the Yukawa (eq. (3.4)) and the Wilson-like (eq. (3.5)) operators, now also the kinetic term of the scalar field (last term in eq. (3.2)) breaks \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) and mixes with \(\mathcal{L}_{Y}\) and \(\mathcal{L}_{Wil}\). 
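To make the last statement explicit, note that a global \(\tilde{\chi}_{L}\) rotation acts on \(W_{\mu}\) (see eq. (3.7)) but leaves \(\Phi\) untouched, so the scalar kinetic term cannot be invariant. Schematically (in this sketch of ours we keep track of neither signs, nor total derivatives, nor overall numerical factors), under \(\delta W_{\mu}=i\,\alpha^{i}\,[\tau^{i}/2,W_{\mu}]\) one finds \[\delta_{\tilde{\chi}_{L}}\,\frac{k_{b}}{2}\mathrm{Tr}\left[(\mathcal{D}^{\,W}_{\mu}\Phi)^{\dagger}\mathcal{D}^{W}_{\mu}\Phi\right]\;\propto\;g_{w}\,k_{b}\,\alpha^{i}\,\mathrm{Tr}\Big{(}\Phi^{\dagger}\Big{[}\frac{\tau^{i}}{2},W_{\mu}\Big{]}\mathcal{D}^{W}_{\mu}\Phi+\Phi^{\dagger}\overleftarrow{\mathcal{D}}^{\,W}_{\mu}\Big{[}W_{\mu},\frac{\tau^{i}}{2}\Big{]}\Phi\Big{)}\,,\] which is exactly the scalar bilinear that, multiplied by \((k_{b}-\bar{k}_{b}^{L})\), appears in the Schwinger-Dyson equation (A.6) of Appendix A.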
Thus to get the critical theory (invariant under \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\)), on top of the Yukawa couplings, \(\eta_{q}\) and \(\eta_{Q}\), a further parameter, \(k_{b}\), has been introduced which needs to be appropriately tuned. For convenience the coefficient \(k_{b}\) is also made to appear in the expression of the scalar potential (3.3), because with this choice the bare (vev)\({}^{2}\) (in the NG phase) and the quartic coupling of the canonically normalized scalar field will keep their standard definitions, i.e. \(v^{2}=|\mu_{0}^{2}|/\lambda_{0}\) and \(\lambda_{0}\). The tuning conditions determining \(\eta_{q\,cr}\), \(\eta_{Q\,cr}\) and \(k_{b\,cr}\) come from enforcing the conservation of the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) currents [18]. In Appendix A for completeness we sketch the procedure that, extending the strategy proposed in [1], leads to the tuning equations for \(\eta_{q}\), \(\eta_{Q}\) and \(k_{b}\). The conditions determining the critical theory physically correspond to having no scalar kinetic term in the QEL of the theory and, similarly to what we saw happening in sect. 2 in the case of the model (1), vanishing effective Yukawa interactions. Naturally, as was done in ref. [2] for the Yukawa coupling, the computation of the critical values of \(\eta_{q}\), \(\eta_{Q}\) and \(k_{b}\) needs to be performed in a non-perturbative way, as is currently done in WLQCD in the determination of the critical mass. Nevertheless, in order to get a feeling of what these criticality conditions mean, it is interesting to see what happens at the lowest loop order. ### The critical conditions in the Wigner phase Solving, with steps similar to those leading to eq. (9), the tuning eqs. (A.8)-(A.12) one finds for \(\eta_{q\,cr}\), \(\eta_{Q\,cr}\) and \(k_{b\,cr}\) at 1-loop in the Wigner phase the parametric expressions \[\eta_{q\,cr}^{(1-\text{loop})}=\rho_{q}\,\alpha_{s}N_{c}\eta_{q}^{(1)}\,,\qquad\eta_{Q\,cr}^{(1-\text{loop})}=\rho_{Q}\,\alpha_{T}N_{T}\eta_{Q}^{(1)}\,, \tag{3.9}\] \[k_{b\,cr}^{(1-\text{loop})}=\rho_{q}^{2}N_{c}k_{b\,q}^{(1)}+\rho_{Q}^{2}N_{c}N_{T}k_{b\,Q}^{(1)}\,, \tag{3.10}\] with \(\eta_{q}^{(1)}\), \(\eta_{Q}^{(1)}\), \(k_{b\,q}^{(1)}\) and \(k_{b\,Q}^{(1)}\) computable coefficients and \(\text{SU}(N_{T})\) the Tera-gauge group. In figs. 4 and 5 we display at the lowest loop order the mechanism leading to the cancellation of the effective quark and Tera-quark Yukawa terms in the QEL of the critical theory. The figures show the mixing between the quark (Tera-quark) Yukawa operator (grey disks) and the quark (Tera-quark) Wilson-like operator (grey box) at 1-loop. In fig. 6 we report the leading 1-loop diagrams yielding the cancellation between the scalar kinetic term (grey disk) and the sum of the quark and Tera-quark Wilson-like operators (grey boxes). The empty boxes represent the insertion of Wilson-like vertices from the Lagrangian necessary to close the fermion loops. As before, the loop diagrams in the figs. 4, 5 and 6 yield finite results because the UV power divergencies are exactly compensated by the \(b^{2}\) factors coming from the insertion of Wilson-like vertices. Like in the previous section, the key observation here is that the NP enforcement of the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) invariance implies that in the QEL of the critical theory the Higgs-like masses of quarks, Tera-quarks and \(W\)'s are canceled out. 
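To get a feeling for how this compensation works, here is a schematic power-counting estimate of ours (all group-theoretical and numerical factors are lumped into the order symbols). The mixing coefficient of fig. 4 is dimensionless, while the Wilson-like vertex carries an explicit factor \(b^{2}\); the 1-loop integral must therefore be quadratically divergent, and the two effects exactly balance, \[\bar{\eta}_{q}\Big{|}_{1-\text{loop}}\;\sim\;\rho_{q}\,\alpha_{s}N_{c}\;b^{2}\!\int^{1/b}\!d^{4}k\;\frac{\mathcal{N}(k)}{k^{4}}\;\sim\;\rho_{q}\,\alpha_{s}N_{c}\;b^{2}\,\mathrm{O}(b^{-2})\;=\;\mathrm{O}(\rho_{q}\,\alpha_{s}N_{c})\,,\] where \(\mathcal{N}(k)=\mathrm{O}(k^{2})\) collects the numerator structure of the vertex and the propagators. This reproduces the parametric form of eq. (3.9); the same bookkeeping, now with two Wilson-like insertions (a factor \(b^{4}\)) against a correspondingly more divergent loop, applies to fig. 6 and to eq. (3.10).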
Figure 5: The cancellation of the Tera-quark Yukawa vertex implied by the tuning conditions determining \(\eta_{Q\,cr}\) at the lowest loop order in the Wigner phase. Double lines represent Tera-particles. The grey box, labelled by \(\rho_{Q}\), represents the Tera-quark Wilson-like vertex. The rest of the notations is as in fig. 4. Figure 6: The cancellation of the scalar kinetic term implied by the tuning condition determining \(k_{b\,cr}\) at the lowest loop order in the Wigner phase. The integers \(n_{q}\) and \(n_{Q}\) are the multiplicities of quarks and Tera-quarks running in the loops with \(N_{c}\) and \(N_{T}\) the number of colours and Tera-colours, respectively. The grey disc, labelled by \(k_{b}\), represents the insertion of the scalar kinetic term. The empty boxes represent the insertion of Wilson-like vertices from the Lagrangian. The rest of the notations is as in figs. 4 and 5. Figure 7: The mechanism underlying the cancellation of the Higgs-like mass term of quarks at the lowest loop order occurring in the NG phase of the critical theory. Notations are as in fig. 4. Figure 8: The mechanism underlying the cancellation of the Higgs-like mass term of Tera-quarks at the lowest loop order occurring in the NG phase of the critical theory. Notations are as in fig. 5. Figure 9: The mechanism underlying the cancellation of the Higgs-like \(W\) mass term at the lowest loop order occurring in the NG phase of the critical theory. Wiggly lines are \(W\)’s. The rest of the notations is as in figs. 4, 5 and 6. ### The NP emergence of elementary particle masses By extending to the present case the analysis that in sect. 2.4 has led us to identify the operators (12), one finds at O(\(b^{2}\)) the following set of NP \(\chi_{L}\times\chi_{R}\) invariant operators \[O^{T}_{6,\bar{Q}Q}=b^{2}\Lambda_{T}|\Phi|\left(\bar{Q}_{L}\mathcal{D}^{AGW}Q_{L}+\bar{Q}_{R}\mathcal{D}^{AG}Q_{R}\right) \tag{3.11}\] \[O_{6,GG}=b^{2}\Lambda_{T}|\Phi|\,F^{G}\!\cdot\!F^{G} \tag{3.12}\] \[O_{6,AA}=b^{2}\Lambda_{T}|\Phi|\,F^{A}\!\cdot\!F^{A} \tag{3.13}\] Although formally of O(\(b^{2}\)), the effect of these terms cannot be ignored because of the kind of IR-UV compensation we have seen occurring when "irrelevant" operators are present in the fundamental Lagrangian. Taking into account the operators (3.11)-(3.13) is necessary to fully describe the NP features of the theory related to the spontaneous breaking of the (restored) chiral symmetry 6. Footnote 6: Actually there exist other, less important NP operators. One example is \(b^{2}\Lambda_{T}|\Phi|\,F^{W}\!\cdot\!F^{W}\). The list above is limited to the operators that give rise to the leading diagrams contributing to the NP masses of fig. 10. Similarly to the situation we discussed in sect. 
2.4, the occurrence of the Symanzik NP operators (3.11)-(3.13) can be properly taken into account by adding them to the fundamental Lagrangian, yielding the "augmented" Lagrangian [20; 21] \[\mathcal{L}\to\mathcal{L}+\Delta\mathcal{L}_{NP}\,,\quad\Delta\mathcal{L}_{NP}=\gamma_{\bar{Q}Q}O^{T}_{6,\bar{Q}Q}+\gamma_{GG}O_{6,GG}+\gamma_{AA}O_{6,AA}\,, \tag{3.14}\] where the coefficients \(\gamma_{\bar{Q}Q}\), \(\gamma_{GG}\) and \(\gamma_{AA}\) are functions of the gauge couplings. In Appendix B of (II) we provide arguments supporting the existence of the NP operators (3.11)-(3.13) and derive the lowest loop order behaviour of the \(\gamma\) coefficients in eq. (3.14). One can show that, in the case of \(d=6\) Wilson-like terms as those in eq. (19), at lowest order one gets that \(\gamma_{\bar{Q}Q}\) and \(\gamma_{GG}\) are O(\(\rho_{Q}\alpha_{T}\)), while \(\gamma_{AA}\) is O(\(\rho_{Q}\alpha_{s}\)). As noted in sect. 2.4, where we discussed the similar case of eq. (11), eq. (3.14) is purely formal, in the sense that, according to the rules of the Symanzik expansion [20; 21], introducing \(\mathcal{L}+\Delta\mathcal{L}_{NP}\) should be seen as a bookkeeping of the NP operator insertions occurring in the correlators of the fundamental Lagrangian. The presence of new vertices leads, among other contributions, to new self-energy diagrams capable of generating NP masses for quarks and Tera-quarks as well as for the weak bosons. At the lowest loop order one finds the typical amputated self-energy diagrams displayed in the three panels of fig. 10, where on a case by case basis the operators (3.11), (3.12) or (3.13) are inserted together with the Wilson-like vertices necessary to close the loops. All these diagrams are finite owing to the by now familiar UV-IR compensation, and are all of O(\(\Lambda_{T}\)) times gauge coupling dependent coefficients. In sect. 2.4 we have sketched the calculation of the quark mass. The calculation of the Tera-quark mass exactly parallels what we did there. As for the \(W\) mass, from the kind of diagrams displayed in the bottom panel of fig. 10 we find \[(M_{W}^{eff})^{2}\,\propto\,g_{w}^{2}\alpha_{T}^{2}\Lambda_{T}^{2}\,(b^{2})^{4}\!\int^{\pi/b}\!\frac{d^{4}k}{(2\pi)^{4}}\int^{\pi/b}\frac{d^{4}\ell}{(2\pi)^{4}}\int^{\pi/b}\!\frac{d^{4}q}{(2\pi)^{4}} \tag{3.15}\] \[\frac{1}{k^{2}\!+\!m_{\zeta_{0}}^{2}}\mathrm{tr}\Big{[}\gamma\!\cdot\!(q\!-\!k)\frac{1}{\gamma\!\cdot\!(q\!-\!k)}q_{\mu}\frac{1}{\gamma\!\cdot\!q}\gamma\!\cdot\!(q\!-\!\ell)\frac{1}{\gamma\!\cdot\!(q\!-\!\ell)}(q\!-\!\ell)_{\mu}\frac{1}{\gamma\!\cdot\!q}\Big{]}\frac{1}{\ell^{2}\!+\!m_{\zeta_{0}}^{2}}\sim g_{w}^{2}\alpha_{T}^{2}\Lambda_{T}^{2}\,.\] After a few obvious simplifications, also here, like in the case of \(m_{q}^{eff}\), one gets a finite result because the \(b^{-8}\) three-loop UV-power divergency is exactly compensated by the factor \(b^{8}\) coming from the insertions of the Wilson-like vertices and the NP operators (3.12), at the end yielding a UV-finite squared \(W\) mass of \(\mathrm{O}(g_{w}^{2}\alpha_{T}^{2}\Lambda_{T}^{2})\). From the observation that the NP building blocks appearing in the diagrams in the first and second panel of fig. 10 are the same as those occurring in the third panel, we conclude that the dynamical \(W\) boson mass is generated by the same NP mechanism that gives mass to fermions and Tera-fermions. As we remarked at the end of sect. 2.4, to be precise we should have displayed in each diagram of fig. 10 the dependence upon the field \(U\), exiting from each Wilson-like term. 
We did not do so because the presence of \(U\) is not relevant for what concerns the calculation of the NP effective masses. Including \(U\) is, however, necessary to give rise to the fully \(\chi_{L}\times\chi_{R}\) invariant mass operators of fermions and Tera-fermions as well as the gauge invariant structure related to the \(W\) mass (see eq. (3.19) below). Figure 10: Examples of the lowest loop order self-energy diagrams yielding (from top to bottom) the NP effective masses for quarks, Tera-quarks and \(W\). The blobs represent the insertion of the appropriate NP operators among those displayed in equations from (3.11) to (3.13), and the boxes the insertion of the Wilson-like vertices necessary to close the loops. The vertical dotted line means amputation of the external leg. ### The \(\zeta_{0}\) propagator We need to conclude the analysis of these NP mass estimates by answering a question that can naturally arise concerning the precise expression of the \(\zeta_{0}\) propagator in the diagrams of fig. 10. In fact, one might wonder what we really mean by "\(\zeta_{0}\) propagator", given the fact that the critical value of \(k_{b}\) in eq. (3.2) was fixed to precisely cancel the scalar kinetic term against the similar operator with which the Wilson-like terms mix. The cancellation is pictorially represented in fig. 6 at the lowest loop order. We clarify this important issue in Appendix C by disentangling the delicate interplay between the critical limit and the \(b\to 0\) limit in determining the effective expression of the critical \(\zeta_{0}\) propagator. ### The NP masses of elementary particles From the analysis of the building blocks entering the diagrams in fig. 10 one can determine the lowest loop order parametric expression of the effective NP quark, Tera-quark and \(W\) mass, finding \[m_{q}^{eff}=C_{q}\,\Lambda_{T}\,,\qquad C_{q}=\mathrm{O}(\alpha_{s}^{2}) \tag{3.16}\] \[m_{Q}^{eff}=C_{Q}\,\Lambda_{T}\,,\qquad C_{Q}=\mathrm{O}(\alpha_{T}^{2}) \tag{3.17}\] \[M_{W}^{eff}=C_{w}\,\Lambda_{T}\,,\qquad C_{w}=g_{w}\,c_{w}\,,\qquad c_{w}=\mathrm{O}(\alpha_{T})\,. \tag{3.18}\] These formulae are derived under the working hypothesis according to which bare parameters are such that the renormalized \(\alpha_{T}\) coupling grows large (i.e. becomes \(\mathrm{O}(1)\)) at an energy scale \(\sim\Lambda_{T}\) where \(\alpha_{s}\) and \(\alpha_{w}\) are still \(\ll 1\). One thus expects to observe a Tera-quark "pole" (or, in the confinement regime, a "current") mass much larger than the quark and \(W\) mass. In sect. 3.2 of (II) we try to address this kind of question, providing a crude estimate of \(\Lambda_{T}\) and mass ratios. However, we can already conclude that eqs. (3.16)-(3.18) allow identifying the EW scale as (a fraction of) the physical parameter \(\Lambda_{T}\). ### The critical QEL in the NG phase We now discuss the form of the QEL which is expected to describe the physics of the renormalized critical model (3.1) in the NG phase at momenta well below the UV cutoff scale but above \(\Lambda_{T}\), i.e. in the range \(b^{-2}\gg p^{2}>\Lambda_{T}^{2}\). Following the same line of arguments we developed in sect. 2.5 to derive eq. (2.16), we get for the \(d=4\) piece of the QEL the expression \[\Gamma_{4\,cr}^{NG}(q,Q;A,G,W;\Phi)=\frac{1}{4}\Big{(}F^{A}\!\cdot\!F^{A}+F^{G}\!\cdot\!F^{G}+F^{W}\!\cdot\!F^{W}\Big{)}+\] \[+\big{(}\bar{q}_{L}\mathcal{D}^{AW}q_{L}+\bar{q}_{R}\mathcal{D}^{A}q_{R}\big{)}+\big{(}\bar{Q}_{L}\mathcal{D}^{AGW}Q_{L}+\bar{Q}_{R}\mathcal{D}^{AG}Q_{R}\big{)}+\] \[+C_{q}\Lambda_{T}\big{(}\bar{q}_{L}Uq_{R}+\bar{q}_{R}U^{\dagger}q_{L}\big{)}+C_{Q}\Lambda_{T}\big{(}\bar{Q}_{L}UQ_{R}+\bar{Q}_{R}U^{\dagger}Q_{L}\big{)}+\] \[+\frac{c_{w}^{2}\Lambda_{T}^{2}}{2}\mathrm{Tr}\left[(\mathcal{D}_{\mu}^{\,W}U)^{\dagger}\mathcal{D}_{\mu}^{W}U\right]+\ldots \tag{3.19}\] 
where (compare with the expression (15)) \[U=\exp[i\vec{\tau}\,\vec{\zeta}/c_{w}\Lambda_{T}]\,. \tag{3.20}\] In eq. (3.19) we have included \(d\leq 4\) operators with up to two derivatives. Actually there exists a further \(d\!=\!4\) operator invariant under \(\chi_{L}\times\chi_{R}\), namely \(\mathrm{Tr}\big{[}(\mathcal{D}^{W}_{\mu}U)^{\dagger}\mathcal{D}^{W}_{\mu}U(\mathcal{D}^{W}_{\nu}U)^{\dagger}\mathcal{D}^{W}_{\nu}U\big{]}\) which, however, has four derivatives. This term will not have an essential impact on our considerations in sect. 4, where we compare the SM Lagrangian with the expression of the LEEL resulting upon integrating out the (heavy) Tera-dof's. It will, however, contribute with higher order weak corrections to the coefficients of the LEEL. In Appendix B we justify the form of the expression (3.19) showing that, despite the fact that also the term \(\Lambda_{T}R\,\mathrm{Tr}\,[(\mathcal{D}^{W}_{\mu}U)^{\dagger}\mathcal{D}^{W}_{\mu}U]\) (\(R\) is the singlet radial scalar of eq. (14)) is invariant under \(\chi_{L}\times\chi_{R}\), it cannot appear in \(\Gamma^{NG}_{4\,cr}\). The proof of this statement is based on the decoupling theorem [22], which holds in the critical limit owing to the fact that in the QEL the effective field \(\zeta_{0}\) can be shown to acquire an infinite mass and decouple. Naturally, at \(d>4\) there will be infinitely many other operators contributing to \(\Gamma^{NG}_{cr}\), scaled by larger and larger inverse powers of \(\Lambda_{T}\). A few examples are \[\frac{1}{\Lambda_{T}}\bar{Q}_{L}\,\overleftarrow{\mathcal{D}}_{\mu}^{\ \ AGW}U\mathcal{D}^{AG}_{\mu}Q_{R}\!+\!\mathrm{hc}\,,\quad\frac{1}{\Lambda_{T}}\bar{q}_{L}\overleftarrow{\mathcal{D}}_{\mu}^{\ \ AW}U\mathcal{D}^{A}_{\mu}q_{R}\!+\!\mathrm{hc}\,,\quad\frac{1}{\Lambda_{T}^{2}}(\bar{Q}_{L}UQ_{R})(\bar{Q}_{L}UQ_{R})\!+\!\mathrm{hc}\,,\] \[\frac{1}{\Lambda_{T}^{2}}(\bar{q}_{L}Uq_{R})(\bar{q}_{L}Uq_{R})\!+\!\mathrm{hc}\,,\quad\frac{1}{\Lambda_{T}^{4}}(\bar{Q}_{L}\overleftarrow{\mathcal{D}}_{\mu}^{\ \ \ AGW}UQ_{R})(\bar{q}_{L}\overleftarrow{\mathcal{D}}_{\mu}^{\ \ AW}Uq_{R})\!+\!\mathrm{hc}\,,\quad\ldots\,. \tag{3.21}\] ### Transversality of the \(W\) polarization amplitude As in the SM the Goldstone bosons \(\zeta_{i},i=1,2,3\) (see eq. (3.20)) are eaten up to give the longitudinal dof's of the massive weak bosons. In Appendix D we illustrate how the transversality property of the \(W\) polarization amplitude (which in the end is a direct consequence of the SU(2)\({}_{L}\) gauge symmetry) is realized in this model. The argument is not totally trivial because in the present situation the \(W\) mass is not generated at tree level by the standard Higgs mechanism, but rather arises from NP spontaneous chiral symmetry breaking effects and virtual particle exchanges, as we see in the lowest panel of fig. 10. 
## 4 Integrating out Tera-degrees of freedom and the SM We show in this section that, upon integrating out the (heavy) Tera-dof's in the NG phase of the critical model (11), the resulting LEEL closely resembles the SM Lagrangian. The argument rests on the key conjecture that the 125 GeV resonance detected at LHC [14; 15] is a \(W^{+}W^{-}/ZZ\) composite state, bound by Tera-particle exchanges, and not an elementary particle 7. This state, which we shall denote by \(h\), is (assumed to be) a singlet under all the symmetries of the theory. Since its mass is (experimentally found to be) \(\ll\Lambda_{T}\), it must be included in the LEEL of the theory, valid for (momenta)\({}^{2}\ll\Lambda_{T}^{2}\). In these kinematical conditions the most general LEEL invariant under the symmetries of the critical model (3.1), in particular under the \(\chi_{L}\times\chi_{R}\) transformations, and including \(h\), takes the form [23; 24] \[\mathcal{L}_{LE}^{NG}(q;A,W;U,h)=\frac{1}{4}\Big{(}F^{A}\!\cdot\!F^ {A}+F^{W}\!\cdot\!F^{W}\Big{)}+\left(\bar{q}_{L}\,\mathcal{D}^{AW}q_{L}+\bar{q} _{R}\,\mathcal{D}^{A}q_{R}\right)+\] \[\quad+\left(y_{q}h+k_{q}k_{v}\right)\left(\bar{q}_{L}Uq_{R}+\bar{ q}_{R}U^{\dagger}q_{L}\right)+\] \[\quad+\frac{1}{2}\partial_{\mu}h\partial_{\mu}h+\frac{1}{2}(k_{v }^{2}+2k_{v}k_{1}h+k_{2}h^{2})\mathrm{Tr}\left[(\mathcal{D}_{\mu}^{\;W}U)^{ \dagger}\mathcal{D}_{\mu}^{W}U\right]+\widetilde{\mathcal{V}}(h)+\ldots\,, \tag{4.1}\] where dots represent \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) violating operators of dimension \(d>4\). Examples of such operators are \[\bar{q}_{L}\overleftarrow{\mathcal{D}}_{\mu}^{\;AW}U\mathcal{D}_ {\mu}^{A}q_{R}+\mathrm{hc}\,, d=5 \tag{4.2}\] \[(\bar{q}_{L}Uq_{R})\left(\bar{q}_{R}U^{\dagger}q_{L}\right), d=6\,. \tag{4.3}\] They appears in the LEEL multiplied by inverse powers of the fundamental scale \(\Lambda_{T}\) (\(\Lambda_{T}^{-1}\) and \(\Lambda_{T}^{-2}\), respectively). We note that the \(1/\Lambda_{T}\) expansion entailed by the integration over Tera-dofs provides a physical interpretation and a theoretical basis for the occurrence of the higher dimensional operators which are standardly used in the literature to parametrize physics beyond the SM. The scalar potential \(\widetilde{\mathcal{V}}(h)\) comprises the cubic and quartic self-interactions of the \(h\) field, as well as the \(h\) mass term, \(m_{h}^{2}h^{2}/2\). The coefficients \(k_{v},k_{1},k_{2},y_{q},k_{q}\) and the \(\widetilde{\mathcal{V}}\)-couplings are parameters that need to be fixed by matching onto the underlying (renormalizable and unitary) fundamental critical theory (3.1). Tree-level matching of \(\mathcal{L}_{4,LE}^{NG}\) in eq. (4.1) with the QEL \(\Gamma_{4\,cr}^{NG}\) in eq. (3.19) requires for the masses of quarks and \(W\) (see eqs. (3.16) and (3.18)) the identifications \[m_{q}^{eff}=C_{q}\Lambda_{T}=k_{q}k_{v}\,,\qquad M_{W}^{eff}=g_{w}\,c_{w} \Lambda_{T}=g_{w}k_{v}\,, \tag{4.4}\] while the unitarity of the mother theory (3.1) implies the constraints [25; 26] \[y_{q}=k_{q}\,,\qquad k_{1}=k_{2}=1\,. \tag{4.5}\] The above relations are expected to hold up to small loop effects controlled by the couplings \(g_{w}\) and \(y_{q}\). Neglecting these corrections, one recognizes that, with the exception of the scalar potential \(\widetilde{\mathcal{V}}(h)\), precisely the combination \[\Phi\equiv(k_{v}+h)U \tag{4.6}\] enters the \(d\leq 4\) part of \(\mathcal{L}_{LE}^{NG}\) (see eq. (4.1)). 
The latter can thus be rewritten in the suggestive form \[\mathcal{L}_{4,LE}^{NG}(q;A,W;U,h)=\frac{1}{4}\Big{(}F^{A}\!\cdot\!F ^{A}+F^{W}\!\cdot\!F^{W}\Big{)}+\left(\bar{q}_{L}\,\mathcal{D}^{AW}q_{L}+\bar {q}_{R}\,\mathcal{D}^{A}q_{R}\right)+\] \[\quad+y_{q}\left(\bar{q}_{L}\Phi q_{R}+\bar{q}_{R}\Phi^{\dagger}q _{L}\right)+\frac{1}{2}\mathrm{Tr}\left[(\mathcal{D}_{\mu}^{\;W}\Phi)^{ \dagger}\mathcal{D}_{\mu}^{W}\Phi\right]+\widetilde{\mathcal{V}}(h)\,. \tag{4.7}\] From eq. (4.1) we see that (up to corrections suppressed as O(\(\alpha_{w}\)) or O(\(y_{q}^{2}/4\pi\)) which we have not included) \(\mathcal{L}_{4,LE}^{NG}\) looks very much like the SM Lagrangian (more precisely, like the Lagrangian of the oversimplified version of the SM where the existence of families, hypercharge interactions, leptons and weak isospin splitting is ignored). In particular, we see that just like it happens in the case of the Higgs mechanism in the SM, the effective Yukawa coupling of \(h\) to fermions is given by (see eqs. (4.4) and (4.5)) \[y_{q}=k_{q}=\frac{m_{q}^{eff}}{k_{v}}=\frac{m_{q}^{eff}}{c_{w}\Lambda_{T}}\,, \tag{4.8}\] where \(k_{v}=c_{w}\Lambda_{T}\) is what in the SM is the Higgs field vev. This simple analysis shows that like in the SM, also here the Yukawa coupling is proportional to the fermion mass. There are, however, two important differences between the LEEL of our model and the SM Lagrangian that we must point out. First of all, as shown by eq. (4.7), the proportionality factor between the Yukawa coupling (4.8) and the fermion mass is not in our hands, rather it is completely fixed by the NP dynamics. Secondly, since the scalar potential \(\widetilde{\mathcal{V}}(h)\) is supposed to describe, besides the mass, just the self-interactions of the (composite) \(h\) field, there is no reason why it should have the same form as the SM Higgs potential. This implies that, even if it yields \(m_{h}\!\simeq\!125\) GeV, differences with respect to the case of the SM may well appear in the trilinear and quadrilinear \(h\) self-couplings. ## 5 Universality A key point that needs to be thoroughly discussed, if we want to put the present approach on a firm conceptual basis, is to what extent the NP mass formulae we have derived above are "universal", or in other words to what extent their expression depends on the specific form of the "irrelevant" \(d>4\) Wilson-like chiral breaking terms that one decides to introduce in the fundamental Lagrangian. To ease the discussion of this issue we start analyzing the case in which associated to each fermion species there is one single Wilson-like operator of dimension \(d>4\). It turns out that the leading (lowest) power of the gauge coupling dependence of the \(C\) coefficients in eqs. (3.16), (3.17) and (3.18) depends on the operator dimension of the various Wilson-like terms. One can prove that generically the higher is the dimension of the Wilson-like operator the larger will be the lowest power of the gauge coupling controlling the behaviour of the \(C\) coefficients. If the chiral breaking Wilson-like term associated to a fermion is a linear combination of operators of different dimensions, the situation is a bit more complicated to analyze because of mixing. However, one can say the following. It is always the dimension of Wilson-like operators of the lowest dimension that determines the leading (smallest) value of the power of the gauge coupling in the coefficients of the NP mass formulae (3.16), (3.17) and (3.18). 
Higher and higher dimensional Wilson-like operators affect terms of higher and higher order in the gauge coupling power expansion. As for the dependence of the NP masses (or more generally of the theory observables) upon the \(\rho\) parameters of the various Wilson-like terms present in the Lagrangian, we prove in Appendix C of (II) that in the case of a theory with only one family of quarks and weak interactions (but neither Tera-quarks nor leptons), physics is \(\rho\) independent. If other fermions are present, be they quarks belonging to other families, Tera-particles or leptons, observables will only depend on the ratios of the various \(\rho\) parameters. Actually, one can say something more. We show in (II) that the physics of the model depends only on the combinations \(\eta_{q\,cr}/\sqrt{k_{b\,cr}}\), \(\eta_{Q\,cr}/\sqrt{k_{b\,cr}}\), \(\rho_{q}/\sqrt{k_{b\,cr}}\) and \(\rho_{Q}/\sqrt{k_{b\,cr}}\). The proof is based on the transformation properties of the fundamental Lagrangian under a rescaling of the \(\Phi\) field. The interesting observation is that as functions of \(\rho\) ratios these (positive) quantities are uniformly bounded by coefficients that decrease as \(N_{c},N_{T}\) get large. ### Rescuing universality? At first sight the dependence of the value of the NP masses on the precise form of the chiral breaking Wilson-like terms that we have outlined above might appear as a blunt violation of universality. We have seen, in fact, that the \(C\) coefficients in the eqs. (3.16), (3.17) and (3.18) depend not only on the value of the (ratios of the) \(\rho\) parameters, but they have a gauge coupling dependence correlated with the operator dimension of the various Wilson-like terms in play. As for the \(\rho\) dependence, the problem could be attenuated or even eliminated by conjecturing that some GUT symmetry exists which, putting constraints on the \(\rho\)'s, restricts the variability range of their ratios. An extreme but appealing situation would be the one in which the \(\rho\)'s are all equal. In this case the whole \(\rho\) dependence would completely drop out from physical observables. Also concerning the dependence on the dimensions of the Wilson-like operators, which, as we said, directly impact on the parametric gauge coupling dependence of the NP masses, the situation may not be as bad as it looks. On the contrary, in our opinion the liberty in the choice of the chiral breaking Wilson-like terms might give us an unexpected handle to understand flavour. The idea is to interpret the family mass hierarchy (from heavy to light) as related to Wilson-like terms of increasing dimensions. In this way we may hope to get flavour as an emerging quantum number that identifies different families with a mechanism somehow related to the "geometry" of the UV completion of the theory. ## 6 Conclusions and Outlook We have shown that, as a viable alternative to the Higgs scenario, a recently discovered [1; 2] NP field theoretical mechanism is capable of generating masses for all the elementary particles. This phenomenon takes place in strongly interacting theories where the fermion chiral symmetry (or an appropriate extension of it in the presence of weak interactions), broken at the UV cutoff level, is recovered at low energy owing to the tuning of certain Lagrangian parameters. As a matter of fact this NP mass generation was already noticed in WLQCD, as the compilation of \(m_{cr}\) data as a function of the lattice spacing reported in fig. 1 of [1] shows. 
However, no use of this feature was ever made for the purpose of giving mass to quarks in QCD, because the breaking of chirality induced by the lattice regularization (i.e. by the presence of the Wilson term) has the well-known effect of generating a linearly divergent mass term that completely hides any finite underlying NP mass contribution. The standard lattice procedure is thus to subtract the whole \(m_{cr}\bar{q}q\) operator, while at the same time adding by hand an explicit mass term for the quarks. The peculiar property of the kind of models discussed in this paper is that the exact \(\chi_{L}\times\chi_{R}\) symmetry precisely forbids the existence of linearly divergent quantum corrections to fermion masses, thus opening the possibility of exploiting NP effects related to the spontaneous breaking of chiral symmetry to provide mass terms to elementary particles. The approach we are advocating here, which extends ideas already proposed in ref. [1], also has the virtue of offering interesting hints towards the solution of some of the conceptual problems left open by the current formulation of the SM. 1. First of all, as we have discussed at the end of sect. 3.6, the dependence of the NP-ly generated masses upon the gauge couplings in eqs. (3.16) and (3.17), together with the analogous dependence one finds for leptons (for which one gets from eq. (3.10) of (II) \(m_{lept}^{eff}\!=\!C_{\ell}\Lambda_{T}\), \(C_{\ell}\!=\!\mathrm{O}(\alpha_{Y}^{2})\), with \(Y\) the hypercharge), provides a natural explanation of the generic fermion mass hierarchy \[m_{lept}\ll m_{quark}\ll m_{Tera}\,,\] (6.1) as being a consequence of the \(\alpha_{Y}\ll\alpha_{s}\ll\alpha_{T}\) ranking among the gauge couplings 8. Footnote 8: For completeness, we remark that, as we shall see in (II), in the present approach neutrinos are exactly massless because with the standard hypercharge assignments we are assuming for SM fermions (see Table 1 in ref. [16]) the right-handed neutrino Weyl component is completely sterile. 2. Secondly, owing to the fact that in the model there is no elementary Higgs, there is no longer a "Higgs mass fine tuning problem", hence no need to explain why the Higgs mass is so much smaller than the Planck scale [9]. At the same time, lacking this fundamental scalar, the whole issue of the metastability of the Universe may need to be revised. Naturally it remains to explain why the weak force is so much stronger than the gravitational one. 3. Finally, since the exact \(\chi_{L}\times\chi_{R}\) symmetry protects elementary particle masses from quantum power divergencies, the former are "naturally small", i.e. \(\mathrm{O}(\Lambda_{\mathrm{RGI}})\), and indeed the massless critical theory enjoys an enhanced symmetry of chiral nature. From this point of view the present approach to mass generation fulfils the 't Hooft criterion of naturalness [27]. We have also seen that to cope with the magnitude of the top and \(W\) mass, a super-strongly interacting sector, gauge invariantly coupled to SM matter, must be conjectured to exist, in order for the full theory to have an RGI scale \(\Lambda_{\mathrm{RGI}}\!\sim\!\Lambda_{T}\!\gg\!\Lambda_{QCD}\) of the order of a few TeVs. We have proved in this paper that the simplest model, introduced in [1] and endowed with the above NP mass generation mechanism, can indeed be extended to incorporate weak interactions and the conjectured Tera-dof's. In a similar way one could include leptons and the hypercharge interaction. This further extension is discussed in (II). 
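As a purely illustrative order-of-magnitude exercise (the coupling values below are representative numbers of our own choosing, not results derived in this paper), the parametric formulae \(m_{lept}^{eff}\sim\alpha_{Y}^{2}\Lambda_{T}\), \(m_{q}^{eff}\sim\alpha_{s}^{2}\Lambda_{T}\) and \(m_{Q}^{eff}\sim\alpha_{T}^{2}\Lambda_{T}\) recalled in point 1 above give, for couplings of order \(\alpha_{Y}\sim 10^{-2}\), \(\alpha_{s}\sim 10^{-1}\) and \(\alpha_{T}\sim 1\) at the scale \(\Lambda_{T}\), \[m_{lept}^{eff}:m_{q}^{eff}:m_{Q}^{eff}\;\sim\;\alpha_{Y}^{2}:\alpha_{s}^{2}:\alpha_{T}^{2}\;\sim\;10^{-4}:10^{-2}:1\,,\] i.e. precisely the qualitative ordering of eq. (6.1).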
As for the relation of the present scheme to the SM, we have observed in sect. 4 that upon integrating out the Tera-dof's, under the assumption that a "light" (compared to the TeV scale \(\Lambda_{T}\)) \(W^{+}W^{-}/ZZ\) composite scalar singlet state bound by Tera-particle exchanges gets formed (that we conjecture should be identified with the 125 GeV resonance discovered at LHC [14; 15]), one ends up with a LEEL valid for \((\text{momenta})^{2}\ll\Lambda_{T}^{2}\), which, ignoring small perturbative corrections, resembles very much the SM Lagrangian, except possibly for the effective trilinear and quadrilinear self-couplings of the composite scalar. ## Appendix A Symmetries, currents and tuning In this Appendix we provide a derivation of the criticality conditions which allow enforcing the invariance of the Lagrangian (10) under the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) transformations (12)-(13). The conserved currents associated with the transformations (11) have the expression (\(i=1,2,3\)) \[J^{L\,i}_{\mu}=K^{i}_{\mu}+\bar{q}_{L}\gamma_{\mu}\frac{\tau^{i} }{2}q_{L}+\bar{Q}_{L}\gamma_{\mu}\frac{\tau^{i}}{2}Q_{L}-\frac{k_{b}}{4}\text{ Tr}\left[\Phi^{\dagger}\frac{\tau^{i}}{2}\mathcal{D}^{W}_{\mu}\Phi-(\Phi \overleftarrow{\mathcal{D}}^{W}_{\mu})^{\dagger}\frac{\tau^{i}}{2}\Phi\right]+\] \[-\frac{b^{2}}{2}\rho_{q}\left(\bar{q}_{L}\,\frac{\tau^{i}}{2} \Phi\mathcal{D}^{A}_{\mu}q_{R}-\bar{q}_{R}\overleftarrow{\mathcal{D}}^{A}_{ \mu}\Phi^{\dagger}\frac{\tau^{i}}{2}q_{L}\right)-\frac{b^{2}}{2}\rho_{Q}\left( \bar{Q}_{L}\frac{\tau^{i}}{2}\Phi\mathcal{D}^{AG}_{\mu}Q_{R}-\bar{Q}_{R} \overleftarrow{\mathcal{D}}^{AG}_{\mu}\Phi^{\dagger}\frac{\tau^{i}}{2}Q_{L} \right), \tag{14}\] \[J^{R\,i}_{\mu}\!=\!\bar{q}_{R}\gamma_{\mu}\frac{\tau^{i}}{2}q_{R }+\bar{Q}_{R}\gamma_{\mu}\frac{\tau^{i}}{2}Q_{R}-\frac{k_{b}}{4}\text{Tr}\left[( \Phi\overleftarrow{\mathcal{D}}^{W}_{\mu})^{\dagger}\Phi\frac{\tau^{i}}{2}- \frac{\tau^{i}}{2}\Phi^{\dagger}(\mathcal{D}^{W}_{\mu}\Phi)\right]+\] \[-\frac{b^{2}}{2}\rho_{q}\left(\bar{q}_{R}\frac{\tau^{i}}{2}\Phi^{ \dagger}\mathcal{D}^{\,AW}_{\mu}q_{L}\!-\!\bar{q}_{L}\overleftarrow{\mathcal{D }}^{AW}_{\mu}\Phi\frac{\tau^{i}}{2}q_{R}\right)\!-\!\frac{b^{2}}{2}\rho_{Q} \left(\bar{Q}_{R}\frac{\tau^{i}}{2}\Phi^{\dagger}\mathcal{D}^{\,AGW}_{\mu}Q_{L }\!-\!\bar{Q}_{L}\overleftarrow{\mathcal{D}}^{\,AGW}_{\mu}\Phi\frac{\tau^{i}}{ 2}Q_{R}\right), \tag{15}\] where \[K^{i}_{\mu}=g_{w}\text{Tr}\left([W_{\nu},F^{W}_{\mu\nu}]\frac{\tau^{i}}{2} \right). \tag{16}\] We stress that the conserved currents, \(J^{L\,i}\), to which weak bosons are coupled, do not coincide with the conserved left-handed currents of the SM because of the terms originating from the variation of the \(d=6\) Wilson-like operators. The key observation here is that, unlike the case where weak interactions are absent, in the Lagrangian (10), besides Yukawa and Wilson-like operators, also the \(\Phi\) kinetic term breaks the invariance under the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) transformations as the latter do not act on the scalar field (see eqs. (12)-(13)). 
The (non-conserved) currents associated to the (chiral) \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) transformations read \[\tilde{J}^{L\,i}_{\mu}=K^{i}_{\mu}+\bar{q}_{L}\gamma_{\mu}\frac{ \tau^{i}}{2}q_{L}+\bar{Q}_{L}\gamma_{\mu}\frac{\tau^{i}}{2}Q_{L}+\] \[-\frac{b^{2}}{2}\rho_{q}\left(\bar{q}_{L}\frac{\tau^{i}}{2}\Phi \mathcal{D}^{A}_{\mu}q_{R}-\bar{q}_{R}\overleftarrow{\mathcal{D}}^{A}_{\mu} \Phi^{\dagger}\frac{\tau^{i}}{2}q_{L}\right)-\frac{b^{2}}{2}\rho_{Q}\left(\bar{Q }_{L}\frac{\tau^{i}}{2}\Phi\mathcal{D}^{AG}_{\mu}Q_{R}-\bar{Q}_{R} \overleftarrow{\mathcal{D}}^{\,AG}_{\mu}\Phi^{\dagger}\frac{\tau^{i}}{2}Q_{L }\right), \tag{17}\] \[\tilde{J}^{R\,i}_{\mu}=\bar{q}_{R}\gamma_{\mu}\frac{\tau^{i}}{2}q _{R}+\bar{Q}_{R}\gamma_{\mu}\frac{\tau^{i}}{2}Q_{R}+\] \[-\frac{b^{2}}{2}\rho_{q}\left(\bar{q}_{R}\frac{\tau^{i}}{2}\Phi^{ \dagger}\mathcal{D}^{AW}_{\mu}q_{L}\!-\!\bar{q}_{L}\overleftarrow{\mathcal{D}}^ {\,AW}_{\mu}\Phi\frac{\tau^{i}}{2}q_{R}\right)\!-\!\frac{b^{2}}{2}\rho_{Q} \left(\bar{Q}_{R}\frac{\tau^{i}}{2}\Phi^{\dagger}\mathcal{D}^{AGW}_{\mu}Q_{L }\!-\!\bar{Q}_{L}\overleftarrow{\mathcal{D}}^{\,AGW}_{\mu}\Phi\frac{\tau^{i}}{2 }Q_{R}\right). \tag{18}\] They differ from the conserved currents, \(J^{L\,i}_{\mu}\) and \(J^{R\,i}_{\mu}\), only because the terms bilinear in the scalar field coming from variation of the \(\Phi\)-kinetic term (proportional to \(k_{b}\)) are absent in eqs. (17) and (18). #### Enforcing invariance under the chiral \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) transformations Generalizing the strategy proposed in [1], restoration of chirality is achieved by appropriately tuning the Yukawa couplings \(\eta_{q}\) and \(\eta_{Q}\) and the coefficient \(k_{b}\) to critical values determined by enforcing the conservation of the currents associated to the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) transformations. As we shall see, this can be done up to O(\(b^{2}\)) cutoff effects. In order to identify under which conditions the (chiral) \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) transformations (3.1) and (3.1) can become an invariance of the Lagrangian (3.1) we start by writing down the associated Schwinger-Dyson equations (SDEs) 9. Following the steps outlined in [1], based on the strategy devised in [18; 28], we arrive at the (renormalized) SDEs Footnote 9: We refrain from calling Ward–Takahashi identities (WTIs) eqs. (A.6) and (A.7) below, because they refer to transformations that are not symmetries of the Lagrangian. For generic values of \(\eta_{q}\), \(\eta_{Q}\) and \(k_{b}\) we prefer to talk of Schwinger–Dyson equations. Only at the critIcal point where the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) currents are conserved one is entitled to talk of WTIs. \[\partial_{\mu}\langle Z_{\tilde{J}}\tilde{J}_{\mu}^{L\,i}(x)\, \hat{O}(0)\rangle=\langle\tilde{\Delta}_{L}^{i}\hat{O}(0)\rangle\delta(x)+\] (A.6) \[-(\eta_{q}-\bar{\eta}_{q}^{L})\,\langle\left(\bar{q}_{L}\,\frac{ \tau^{i}}{2}\Phi q_{R}-\bar{q}_{R}\Phi^{\dagger}\frac{\tau^{i}}{2}d_{L} \right)(x)\,\hat{O}(0)\rangle+\] \[+\frac{i}{2}g_{w}(k_{b}\!-\!\bar{k}_{b}^{L})\langle{\rm Tr} \Big{(}\Phi^{\dagger}[\frac{\tau^{i}}{2},W_{\mu}]{\cal D}_{\mu}^{W}\Phi\!+\! 
\Phi^{\dagger}\!\overleftarrow{\cal D}_{\mu}^{\;W}[W_{\mu},\frac{\tau^{i}}{2 }]\Phi\Big{)}(x)\hat{O}(0)\rangle\!+\!{\rm O}(b^{2})\!+\!\ldots\,,\] \[\partial_{\mu}\langle Z_{\tilde{J}}\tilde{J}_{\mu}^{R\,i}(x)\, \hat{O}(0)\rangle=\langle\tilde{\Delta}_{R}^{i}\hat{O}(0)\rangle\delta(x)+\] (A.7) \[-(\eta_{q}-\bar{\eta}_{q}^{R})\,\langle\left(\bar{q}_{R}\frac{ \tau^{i}}{2}\Phi^{\dagger}q_{L}-\bar{q}_{L}\Phi\frac{\tau^{i}}{2}q_{R}\right) (x)\,\hat{O}(0)\rangle+\] \[-(\eta_{Q}-\bar{\eta}_{Q}^{R})\,\langle\left(\bar{Q}_{R}\frac{ \tau^{i}}{2}\Phi^{\dagger}Q_{L}-\bar{Q}_{L}\Phi\frac{\tau^{i}}{2}Q_{R}\right)(x )\,\hat{O}(0)\rangle+{\rm O}(b^{2})\!+\!\ldots\,,\] where \(\hat{O}(0)\) is a generic local operator and the quantities \(\bar{\eta}_{q}^{L,R}\), \(\bar{\eta}_{Q}^{L,R}\) and \(\bar{k}_{b}^{L}\) are the mixing coefficients of the \(\tilde{\chi}_{L,R}\) variations of the Wilson-like and Tera-Wilson-like terms with the variations of the quark and Tera-quark Yukawa operators and the scalar kinetic term, respectively. The tuning conditions yielding the conservation of the \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) currents that determine the values of the parameters \(\eta_{q}\), \(\eta_{Q}\) and \(k_{b}\) of the critical theory then take the form \[\eta_{q}-\bar{\eta}_{q}^{L}(\{g\};\eta_{q},\eta_{Q},\rho_{q},\rho_ {Q};k_{b};\mu_{0},\lambda_{0})=0\,,\] (A.8) \[\eta_{q}-\bar{\eta}_{q}^{R}(\{g\};\eta_{q},\eta_{Q},\rho_{q},\rho_ {Q};k_{b};\mu_{0},\lambda_{0})=0\,,\] (A.9) \[\eta_{Q}-\bar{\eta}_{Q}^{L}(\{g\};\eta_{q},\eta_{Q},\rho_{q},\rho_ {Q};k_{b};\mu_{0},\lambda_{0})=0\,,\] (A.10) \[\eta_{Q}-\bar{\eta}_{Q}^{R}(\{g\};\eta_{q},\eta_{Q},\rho_{q},\rho_ {Q};k_{b};\mu_{0},\lambda_{0})=0\,,\] (A.11) \[k_{b}\,-\,\bar{k}_{b}^{L}(\{g\};\eta_{q},\eta_{Q},\rho_{q},\rho_ {Q};k_{b};\mu_{0},\lambda_{0})=0\,,\] (A.12) where for short we have set \(\{g\}=(g_{s},g_{T},g_{w})\). Notice that, unlike what happens in the absence of weak interactions where parity is unbroken, we have for both quarks and Tera-quarks a "Left" and a "Right" equation determining the Yukawa couplings. Thus one may wonder whether in this situation, where parity is not an exact symmetry of the basic Lagrangian, \(L\)eft (eqs. (A.8) and (A.10) and \(R\)ight (eqs. (A.9) and (A.11)) constraints are compatible with each other. In the next section we prove that \(L\)eft-\(R\)ight compatibility is a consequence of the exact \(\chi_{L}\times\chi_{R}\) symmetry of the theory. ### Compatibility between \(R\)ight and \(L\)eft tuning conditions In this section we discuss the compatibility of the constraints from (A.8) to (A.12) in the situation where, because of the presence of weak interactions, parity is not an exact symmetry of the basic Lagrangian. We will show that, owing to the exact \(\chi_{L}\times\chi_{R}\) invariance (plus standard symmetries and dimensionality arguments) any set of \(\eta_{f}^{L}\), \(\eta_{f}^{L}\) (\(f=q,Q\)) and \(k_{b}\) parameters that satisfies the subset of conditions following from the requirement of, say, \(\tilde{\chi}_{L}\) symmetry restoration alone, automatically satisfies also the conditions corresponding to \(\tilde{\chi}_{R}\) symmetry restoration. To reduce the argument to its essentials, for simplicity of notations we imagine working in a model where only quarks and \(W\)'s are present. 
In this situation the tuning conditions reduce to the following set of equations \[\eta_{q}-\bar{\eta}_{q}^{L}(\{g\};\eta_{q},\rho;k_{b};\mu_{0},\lambda_{0})=0\,,\] (A.13) \[\eta_{q}-\bar{\eta}_{q}^{R}(\{g\};\eta_{q},\rho;k_{b};\mu_{0},\lambda_{0})=0\,,\] (A.14) \[k_{b}\,-\,\bar{k}_{b}^{L}(\{g\};\eta_{q},\rho;k_{b};\mu_{0},\lambda_{0})=0\,,\] (A.15) where \(\{g\}=(g_{s},g_{w})\). Let us start by provisionally identifying as \(\eta_{q\,cr}\) and \(k_{b\,cr}\) the values obtained by solving, say, eqs. (A.13) and (A.15) which enforce \(\tilde{\chi}_{L}\) invariance. We notice that a bit away from the critical limit, owing to dimensional considerations and exact \(\chi_{L}\times\chi_{R}\) invariance, the \(d=4\) QEL of the theory in the Wigner phase must be of the form \[\Gamma_{4}^{Wig}=\frac{1}{4}\Big{(}F^{A}\!\cdot\!F^{A}+F^{W}\!\cdot\!F^{W}\Big{)}+\Big{[}\bar{q}_{L}\mathcal{D}^{A,W}q_{L}+\bar{q}_{R}\mathcal{D}^{A}q_{R}\Big{]}+\] \[+y_{q}^{red}\left(\bar{q}_{L}\Phi\,q_{R}+\bar{q}_{R}\Phi^{\dagger}q_{L}\right)+\frac{k_{b}^{red}}{2}\text{Tr}\left[(\mathcal{D}\,^{W}_{\mu}\Phi)^{\dagger}\mathcal{D}^{W}_{\mu}\Phi\right]\,+\mathcal{V}(\Phi)\,,\] (A.16) where \(y_{q}^{red}\) and \(k_{b}^{red}\) are the "reduced" couplings \[y_{q}^{red}=\eta_{q}-\bar{\eta}_{q}^{L}(\{g\};\eta_{q},\rho;k_{b};\mu_{0},\lambda_{0})\,,\] (A.17) \[k_{b}^{red}=k_{b}-\bar{k}_{b}^{L}(\{g\};\eta_{q},\rho;k_{b};\mu_{0},\lambda_{0})\,.\] (A.18) Setting \(\eta_{q}\) and \(k_{b}\) equal to the solutions of the eqs. (A.13) and (A.15), the restoration of the \(\tilde{\chi}_{L}\) symmetry (up to UV cutoff effects) entails the vanishing of the reduced coefficients \[y_{q}^{red}\;\xrightarrow{\;\eta_{q}\to\eta_{q\,cr}\,,\;k_{b}\to k_{b\,cr}\;}\;0\,,\] (A.19) \[k_{b}^{red}\;\xrightarrow{\;\eta_{q}\to\eta_{q\,cr}\,,\;k_{b}\to k_{b\,cr}\;}\;0^{+}\,.\] (A.20) Looking at the form of eq. (A.16) at \(y_{q}^{red}=0\) and \(k_{b}^{red}\to 0^{+}\), it is apparent that the \(\tilde{\chi}_{R}\) invariance is also automatically restored and the current \(\tilde{J}_{\mu}^{R,\,i}\) is consequently conserved because the vanishing of \(y_{q}^{red}\) and \(k_{b}^{red}\) makes all chiral breaking terms disappear from (A.16). This fact in turn implies that the values of \(\eta_{q\,cr}\) and \(k_{b\,cr}\) determined from the eqs. (A.13) and (A.15) will also fulfil eq. (A.14). The extension of this argument to the case of the model considered in the main text, which also includes Tera-fermions and Tera-strong interactions, is straightforward. ## Appendix B The QEL of the critical model in the NG phase In this Appendix we want to justify the form (3.19) that the QEL of the renormalizable model (3.1) takes in the NG phase at the critical point. In particular we want to prove that, despite the fact that the operator \(\Lambda_{T}R\,\mathrm{Tr}\,[(\mathcal{D}_{\mu}^{W}U)^{\dagger}\mathcal{D}_{\mu}^{W}U]\) is invariant under \(\chi_{L}\times\chi_{R}\) transformations, it cannot appear in \(\Gamma_{cr}^{NG}\). To this purpose it is convenient to start by examining the structure of the QEL, \(\Gamma^{NG}\), slightly away from the critical point. Using the definition introduced in eq. (14), we can write 10 Footnote 10: Though not explicitly indicated, scalar potential constants are intended to be renormalized. 
\[\Gamma^{NG} = \Gamma^{NG}_{4\,cr}+\frac{\mu_{\Phi}^{2}}{2}R^{2}+\frac{\lambda}{ 4}R^{4}+\frac{1}{2}k_{b}^{red}\Big{(}\partial_{\mu}R\,\partial_{\mu}R+R^{2} \mathrm{Tr}\,[(\mathcal{D}_{\mu}^{W}U^{\dagger})\mathcal{D}_{\mu}^{W}U] \Big{)}+ \tag{104}\] \[+ \widetilde{c}_{\Phi}\,\Lambda_{T}R\,\mathrm{Tr}\,[(\mathcal{D}_ {\mu}^{W}U)^{\dagger}\mathcal{D}_{\mu}^{W}U]+y_{q}^{red}R\Big{(}\bar{q}_{L}Uq_ {R}+\bar{q}_{R}U^{\dagger}q_{L}\Big{)}+\] \[+ y_{Q}^{red}R\Big{(}\bar{Q}_{L}UQ_{R}+\bar{Q}_{R}U^{\dagger}Q_{L} \Big{)}+\ldots\,,\] where \(\Gamma^{NG}_{4\,cr}\) is the \(d=4\) QEL of the model in the critical limit, given in eq. (29). It includes all the NP \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) breaking terms of \(\mathrm{O}(\Lambda_{T})\) and \(\mathrm{O}(\Lambda_{T}^{2})\). Dots stand for \(d>4\) operators describing further NP \(\tilde{\chi}_{L}\times\tilde{\chi}_{R}\) breaking operators that are suppressed by inverse powers of \(\Lambda_{T}\) (e.g. see for instance eq. (28)). The "reduced" coefficients \(y_{q}^{red}\), \(y_{Q}^{red}\) and \(k_{b}^{red}\) are defined by the relations (which generalize eqs. (100) and (101)) \[y_{q}^{red} =\eta_{q}-\bar{\eta}_{q}^{L}(\{g\};\eta_{q},\eta_{Q},\rho_{q}, \rho_{Q};k_{b};\mu_{0},\lambda_{0})\,, \tag{105}\] \[y_{Q}^{red} =\eta_{Q}-\bar{\eta}_{Q}^{L}(\{g\};\eta_{q},\eta_{Q},\rho_{q}, \rho_{Q};k_{b};\mu_{0},\lambda_{0})\,,\] (106) \[k_{b}^{red} =k_{b}-\bar{k}_{b}^{L}(\{g\};\eta_{q},\eta_{Q},\rho_{q},\rho_{Q};k _{b};\mu_{0},\lambda_{0})\,, \tag{107}\] respectively, with the critical limit determined by the conditions \[y_{q}^{red}=y_{Q}^{red}=0\,,\qquad k_{b}^{red}\to 0^{+}\,. \tag{108}\] The purpose of this Appendix is to show that also the coupling \(\widetilde{c}_{\Phi}\) vanishes in the critical limit, implying that the undesired contribution of order \(g_{w}^{2}v\Lambda_{T}\) to \((M_{W}^{eff})^{2}\), that would arise from this term, cannot be present, making \(M_{W}^{eff}\) independent of the vev of the scalar. Since the QEL is a functional from which one is supposed to directly extract the full quantum information of the model, it is practical to normalize canonically the effective scalar field \(R\) appearing in eq. (104). Upon introducing \[\Phi_{c}=\Phi\sqrt{k_{b}^{red}}\,,\qquad R_{c}=R\sqrt{k_{b}^{red}}\,, \tag{109}\] with the subscript \(c\) standing for "canonical", \(\Gamma^{NG}\) takes the form \[\Gamma^{NG}=\Gamma^{NG}_{4\,cr}+\frac{\mu_{\Phi}^{2}}{2k_{b}^{red}}R _{c}^{2}+\frac{\lambda}{4(k_{b}^{red})^{2}}R_{c}^{4}+\frac{1}{2}\Big{(}(\partial _{\mu}R_{c})^{2}+R_{c}^{2}\text{Tr}\,[(\mathcal{D}_{\mu}^{W}U^{\dagger}) \mathcal{D}_{\mu}^{W}U]\Big{)}+\] \[+\frac{\widetilde{c}_{\Phi}}{\sqrt{k_{b}^{red}}}\,\Lambda_{T}R_{c }\text{Tr}\,[(\mathcal{D}_{\mu}^{W}U)^{\dagger}\mathcal{D}_{\mu}^{W}U]+\frac{ y_{q}^{red}}{\sqrt{k_{b}^{red}}}R_{c}\Big{(}\bar{q}_{L}Uq_{R}+\bar{q}_{R}U^{ \dagger}q_{L}\Big{)}+\] \[+\frac{y_{Q}^{red}}{\sqrt{k_{b}^{red}}}R_{c}\Big{(}\bar{Q}_{L}UQ_ {R}+\bar{Q}_{R}U^{\dagger}Q_{L}\Big{)}+\ldots\,, \tag{101}\] where, recalling \(v^{2}=|\mu_{\Phi}^{2}|/\lambda\), we have \[R_{c}=v_{c}+\zeta_{0c}\,,\qquad\zeta_{0c}=\zeta_{0}\sqrt{k_{b}^{red}}\,, \qquad v_{c}^{2}=\frac{|\mu_{\Phi}^{2}|}{\lambda}k_{b}^{red}\,. \tag{102}\] From the definition of \(R_{c}\), the expression (101) of \(\Gamma^{NG}\) and eq. 
(102), it is apparent that in the critical limit a peculiar non-linear sigma-model is realized in the scalar sector because \[v_{c}\sim\sqrt{k_{b}^{red}}\to 0^{+}\,,\quad m_{\zeta_{0c}}^{2}=\frac{2|\mu_{ \Phi}^{2}|}{k_{b}^{red}}\to+\infty\,. \tag{103}\] We see that in the critical limit the squared mass of the effective \(\zeta_{0c}\) mode is a (real positive) divergent quantity while the vev of \(R_{c}\) vanishes because its effective quartic coupling, \(\lambda/4(k_{b}^{red})^{2}\), diverges faster than \(m_{\zeta_{0c}}^{2}\). As for the canonical reduced Yukawa couplings of quarks and Tera-quarks \(y_{q}^{red}/\sqrt{k_{b}^{red}}\) and \(y_{Q}^{red}/\sqrt{k_{b}^{red}}\), they can be safely set to zero before taking the limit \(k_{b}^{red}\to 0^{+}\). We are now ready to show that in the critical limit (100) the coupling \(\widetilde{c}_{\Phi}/\sqrt{k_{b}^{red}}\) appearing in the QEL (101) is finite, or equivalently that \[\widetilde{c}_{\Phi}\sim\sqrt{k_{b}^{red}}\to 0\,, \tag{104}\] which validates the expression of \(\Gamma^{NG}_{4\,cr}\) given in (100) where the term proportional to \(\widetilde{c}_{\Phi}\) was omitted. The key remark on which the proof is based is that, owing to the decoupling theorem [22], the QEL \(\Gamma^{NG}\) (see eq. (101)) must be such that in the critical limit the amplitudes with the virtual exchange of one \(\zeta_{0c}\) particle should vanish at least like \(1/m_{\zeta_{0c}}^{2}\). To see the implications of this constraint let us for instance consider the \(WW\to WW\) scattering amplitude, which receives a tree-level contribution from the exchange of a \(\zeta_{0c}\) particle. Taking the \(WW\zeta_{0c}\)-vertex from the term \(\propto\widetilde{c}_{\Phi}\Lambda_{T}\) in the second line of (101), one gets \[A(WW\to WW)\propto g_{w}^{2}\frac{\widetilde{c}_{\Phi}\Lambda_{T}}{\sqrt{k_{b}^ {red}}}\,\frac{1}{s-m_{\zeta_{0c}}^{2}}\,\frac{\widetilde{c}_{\Phi}\Lambda_{T }}{\sqrt{k_{b}^{red}}}\,g_{w}^{2}\,. \tag{105}\] Since for large \(m_{\zeta_{0c}}^{2}\) and at fixed \(s\), the decoupling theorem requires \[A(WW\to WW)\propto(g_{w}^{2}\Lambda_{T})^{2}\frac{(\widetilde{c}_{\Phi})^{2}}{ k_{b}^{red}}\frac{1}{s-m_{\zeta_{0c}}^{2}}\stackrel{{ k_{b}^{red}\to 0^{+}}}{{\longrightarrow}}\text{O}\Big{(}\frac{1}{m_{\zeta_{0c}}^{ 2}}\Big{)}\,, \tag{106}\] or faster, it follows that indeed eq. (114) must hold. As we said, the vanishing of \(\widetilde{c}_{\Phi}\), that was already taken into account in the expression of \(\Gamma^{NG}_{4\,cr}\) we gave in eq. (16), tells us that \(M_{W}\) does not depend on the scalar vev. A physical consequence of paramount importance of the analysis carried out above is that the (canonically normalized) singlet scalar mode, \(\zeta_{0c}\), becomes an infinitely massive field with vanishing vev, decoupled from fermions and gauge bosons, with no dynamics on any physical scale. This in turn implies that there isn't any dependence of physical observables on the scalar quartic coupling, \(\lambda_{0}\) and that, as announced, the \(d=4\) QEL of the critical theory in the NG phase is given by the functional \(\Gamma^{NG}_{4\,cr}\) reported in eq. (19). ## Appendix C The \(\zeta_{0}\) critical propagator An apparently tricky question is what is the expression of the \(\zeta_{0}\) propagator in the critical limit, in view of the fact that the value of \(k_{b}\) in eq. (10) was fixed to precisely cancel the scalar kinetic term against the similar operators with which the Wilson-like terms mix. The cancellation is pictorially represented in fig. 
6 at the lowest loop order. To answer the question it is necessary to make more explicit the calculation behind the determination of \(k_{b\,cr}\). Summing the three diagrams of fig. 6 one finds for the 1-loop \(\zeta_{0}\) propagator \[\Pi_{\zeta_{0}}(p^{2})\Big{|}^{(1-\text{loop})}=\frac{1}{k_{b}p^{2}}-\frac{1 }{k_{b}p^{2}}\Big{[}\rho_{q}^{2}N_{c}\,k_{b\,q}(p^{2})+\rho_{Q}^{2}N_{c}N_{T} \,k_{b\,Q}(p^{2})\Big{]}\frac{1}{k_{b}p^{2}}\,, \tag{116}\] where \(k_{b\,q}(p^{2})\) and \(k_{b\,Q}(p^{2})\) are functions of mass dimension \(d=2\) that represent the contributions of the fermion and Tera-fermion loop, respectively. By expanding in powers of \(p^{2}\) (or in powers of \(b^{2}\) which, after rescaling momenta by factors of \(b\), is the same) we can write \[k_{b\,q}(p^{2}) =b^{-2}k_{b\,q}^{(0)}+k_{b\,q}^{(1)}p^{2}+k_{b\,q}^{(2)}b^{2}p^{4}+\ldots \tag{117}\] \[k_{b\,Q}(p^{2}) =b^{-2}k_{b\,Q}^{(0)}+k_{b\,Q}^{(1)}p^{2}+k_{b\,Q}^{(2)}b^{2}p^{4} +\ldots\,, \tag{118}\] where the expansion coefficients are dimensionless constants with alternate signs. The first terms in the r.h.s. of eqs. (117) and (118) contribute a quadratically divergent term to the scalar mass, that needs to be subtracted out, but it is of no relevance for the argument we develop in this Appendix. It is anyway reabsorbed in the value of the bare scalar mass at which the Wigner \(\leftrightarrow\) NG transition occurs (i.e. where the renormalized mass changes sign). The O(\(p^{2}\)) terms lead to the constraint determining the critical value of \(k_{b}\) which reads \[\frac{1}{k_{b}p^{2}}-\frac{1}{k_{b}p^{2}}\Big{[}\rho_{q}^{2}N_{c}k_{b\,q}^{(1 )}+\rho_{Q}^{2}N_{c}N_{T}k_{b\,Q}^{(1)}\Big{]}p^{2}\frac{1}{k_{b}p^{2}}=0\,. \tag{119}\] From this condition we get (see eq. (10)) \[k_{b\,cr}^{(1-\text{loop})}=\rho_{q}^{2}N_{c}k_{b\,q}^{(1)}+\rho_{Q}^{2}N_{c} N_{T}k_{b\,Q}^{(1)}\,. \tag{120}\] Using the expression (108), from eq. (109) and the expansions (100)-(101) we find that the effective critical \(\zeta_{0}\) propagator entering the calculation of the self-energy diagrams of fig. 10 can be written in the form \[\Pi_{\zeta_{0}}(p^{2})\Big{|}_{cr}^{(1-\text{loop})}=-\frac{b^{2}}{k_{b\,cr}^{( 1-\text{loop})}}\frac{\rho_{q}^{2}N_{c}k_{b\,q}^{(2)}+\rho_{Q}^{2}N_{c}N_{T}k_ {b\,Q}^{(2)}}{\rho_{q}^{2}N_{c}k_{b\,q}^{(1)}+\rho_{Q}^{2}N_{c}N_{T}k_{b\,Q}^{ (1)}}\cdot\Big{[}1+\text{O}(b^{2}p^{2})\Big{]}\,. \tag{110}\] From this expression we can draw the following conclusions (see sect. 3.5). 1) Each term in the expansion (110) contribute a finite bit to the self-energy diagrams of fig. 10, owing to the exact compensation between IR and UV \(b^{2}\) behaviours in the loops occurring for each term. This compensation in turn follows from the fact that loop integrals are dominated by the region of phase space where all momenta2 are \(\text{O}(b^{-2})\). Footnote 2: For instance, as in the SM, from eq. (108) we can read off the value of the squared \(W\) mass (see eq. (20.11) of [29]). 2) The whole expansion (110) is proportional to \((k_{b\,\sigma}^{(1-\text{loop})})^{-1}\) times coefficients (like the one we display) that only depend on ratios of \(\rho\)'s. Recalling that blobs (NP Symanzik operators) as well as squares (Wilson-like terms) in fig. 10 are proportional to \(\rho\), from the fact that \(k_{b\,\sigma}^{(1-\text{loop})}\) is quadratic in the \(\rho\), one concludes that the NP self-energy diagrams of fig. 10 depend only on ratios of \(\rho\)'s and not on their values separately. 
This feature is consistent with the results of Appendix C of (II) where we prove that this property actually holds in general for all physical observables and to all loops. ## Appendix D Transversality of \(W\) polarization amplitude In this Appendix we want to illustrate how the expected transversality property of the \(W\) polarization amplitude is realized in the critical limit of the model (10) and discuss how the Goldstone fields \(\zeta_{i},i=1,2,3\) (see eq. (3.20)) get eaten up to become the longitudinal \(W\) dof's [29]. In particular, we will show that the sum of the amputated diagrams displayed in top panel of fig. 11 has the expected transverse structure, namely \[\langle\widetilde{W}_{\mu}^{i}(p)\widetilde{W}_{\nu}^{j}(-p)\rangle\Big{|}_{ p\to 0}^{\text{amp}}=g_{w}^{2}\langle\widetilde{J}_{\mu}^{L\,i}(p)\widetilde{J}_{\nu }^{L\,j}(-p)\rangle\Big{|}_{p\to 0}\to c_{w}^{2}\Lambda_{T}^{2}g_{w}^{2} \Big{[}\delta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\Big{]}\delta_{ij}\,. \tag{111}\] In eq. (111) we have dropped an irrelevant \(\delta(0)\) factor and indicated by \(\widetilde{W}_{\mu}^{i}(p)\) (\(\widetilde{J}_{\mu}^{L\,i}(p)\)) the Fourier transform of \(W_{\mu}^{i}(x)\) (\({J_{\mu}^{L\,i}(x)}\)). The leftmost diagram of the top panel of fig. 11 was already computed (see eq. (3.18)) and contributes the first term in eq. (111). To compute the rightmost diagram we need to evaluate in the critical limit the (amputated) \(W\)-NG boson two-point function as well as the NG-boson propagator (the latter is depicted in the bottom panel of fig. 11). These are pretty straightforward calculations which do not differ much from the analogous ones one would do in the SM 11. The only difference is that here we need to remember that these two-point functions come completely from NP effects, as the perturbative contribution Footnote 11: For instance, as in the SM, from eq. (111) we can read off the value of the squared \(W\) mass (see eq. (20.11) of [29]). \[\Gamma_{\text{kin}}^{PT}(\zeta)=\frac{v^{2}}{F^{2}}\Big{[}k_{b\,cr}-(\rho_{q} ^{2}N_{c}k_{bq}^{(1)}+\rho_{Q}^{2}N_{c}N_{T}k_{bQ}^{(1)})+\dots\Big{]}\partial _{\mu}\vec{\zeta}\,\partial_{\mu}\vec{\zeta}\,, \tag{112}\] vanishes. In eq. (45) we have used the standard polar decomposition (see eq. (14)) \[\Phi=R\,U\equiv(v+\zeta_{0})U\,,\qquad U=\exp[i\vec{\tau}\vec{\zeta}/F]\,, \tag{46}\] where for a while \(F\) is left as a not yet specified scale, The vanishing of the quantity in the square parenthesis is precisely the condition that determine the critical value of \(k_{b}\) (see eq. (44)). We recall from sect. 3.1 that the first contribution in the parenthesis comes from the NG-boson kinetic term present in the fundamental Lagrangian (3.1), the second from the 1-loop fermion correction of fig. 9 (see eq. (3.10)) and the dots from the rest of the loop expansion. From the NP expression of the QEL of the critical theory given in eq. (3.19), we get \[\langle\widetilde{W}^{i}_{\mu}(p)\widetilde{\zeta}^{j}(-p)\rangle\Big{|}_{p \to 0}^{\rm amp}\to c_{w}^{2}\Lambda_{T}^{2}g_{w}\frac{p_{\mu}}{F}\delta_{ij} \tag{47}\] for the \(W\)-NG boson two-point function and \[\langle\widetilde{\zeta}^{i}(p)\widetilde{\zeta}^{j}(-p)\rangle\Big{|}_{p \to 0}^{\rm amp}=\frac{c_{w}^{2}\Lambda_{T}^{2}}{F^{2}}\frac{1}{p^{2}}\delta _{ij} \tag{48}\] for the (tree-level) NG-boson propagator. Putting everything together, one obtains for the sum of the two diagrams in the top panel of fig. 
11 the desired transverse expression \[g_{w}^{2}\langle\widetilde{J}_{\mu}^{L\,i}(p)\widetilde{J}_{\nu} ^{L\,j}(-p)\rangle\Big{|}_{p\to 0}\to\Big{[}c_{w}^{2}\Lambda_{T}^{2}g_{w}^{2} \delta_{\mu\nu}-\Big{(}c_{w}^{2}\Lambda_{T}^{2}g_{w}\frac{p_{\mu}}{F}\Big{)} \frac{F^{2}}{c_{w}^{2}\Lambda_{T}^{2}}\,\frac{1}{p^{2}}\Big{(}\frac{p_{\nu}}{ F}c_{w}^{2}\Lambda_{T}^{2}g_{w}\Big{)}\Big{]}\delta_{ij}=\] \[\qquad=c_{w}^{2}\Lambda_{T}^{2}g_{w}^{2}\Big{[}\delta_{\mu\nu}- \frac{p_{\mu}p_{\nu}}{p^{2}}\Big{]}\delta_{ij}\,. \tag{49}\] It should be stressed that, as expected, the arbitrary scale \(F\) introduced in the parametrization (46) completely disappears from the physical formula (49). As we have already remarked, in view of the form (3.19) of the QEL it is convenient to set \(F=c_{w}\Lambda_{T}\) so as to have canonically normalized Goldstone fields (see eq. (46)). Figure 11: Top panel: the diagrams making transverse the \(W\) polarization amplitude. The doubly dotted lines represent the propagation of a NG boson. Bottom panel: the NG-boson propagator. The rest of the notation is as in the figures in secs. 3.3 and 3.4. ###### Acknowledgments. We are indebted to R. Frezzotti for his interest in this work and for infinitely many comments and suggestions on the issues presented in this paper. We wish to thank R. Barbieri, M. Bochicchio, G. Martinelli, C. T. Sachrajda, N. Tantalo, M. Testa and especially G. Veneziano for many useful discussions. Correspondence with M. Garofalo is also acknowledged.
2306.17469
Manga109Dialog: A Large-scale Dialogue Dataset for Comics Speaker Detection
The expanding market for e-comics has spurred interest in the development of automated methods to analyze comics. For further understanding of comics, an automated approach is needed to link text in comics to characters speaking the words. Comics speaker detection research has practical applications, such as automatic character assignment for audiobooks, automatic translation according to characters' personalities, and inference of character relationships and stories. To deal with the problem of insufficient speaker-to-text annotations, we created a new annotation dataset Manga109Dialog based on Manga109. Manga109Dialog is the world's largest comics speaker annotation dataset, containing 132,692 speaker-to-text pairs. We further divided our dataset into different levels by prediction difficulties to evaluate speaker detection methods more appropriately. Unlike existing methods mainly based on distances, we propose a deep learning-based method using scene graph generation models. Due to the unique features of comics, we enhance the performance of our proposed model by considering the frame reading order. We conducted experiments using Manga109Dialog and other datasets. Experimental results demonstrate that our scene-graph-based approach outperforms existing methods, achieving a prediction accuracy of over 75%.
Yingxuan Li, Kiyoharu Aizawa, Yusuke Matsui
2023-06-30T08:34:08Z
http://arxiv.org/abs/2306.17469v2
# Manga109Dialog: A Large-scale Dialogue Dataset ###### Abstract The expanding market for e-comics has spurred interest in the development of automated methods to analyze comics. For further understanding of comics, an automated approach is needed to link text in comics to characters speaking the words. Comics speaker detection research has practical applications, such as automatic character assignment for audiobooks, automatic translation according to characters' personalities, and inference of character relationships and stories. To deal with the problem of insufficient speaker-to-text annotations, we created a new annotation dataset Manga109Dialog1 based on Manga109. Manga109Dialog is the world's largest comics speaker annotation dataset, containing 132,692 speaker-to-text pairs. We further divided our dataset into different levels by prediction difficulties to evaluate speaker detection methods more appropriately. Unlike existing methods mainly based on distances, we propose a deep learning-based method using scene graph generation models. Due to the unique features of comics, we enhance the performance of our proposed model by considering the frame reading order. We conducted experiments using Manga109Dialog and other datasets. Experimental results demonstrate that our scene-graph-based approach outperforms existing methods, achieving a prediction accuracy of over 75%. Footnote 1: Dataset and code are available at [https://github.com/manga109/public-annotations](https://github.com/manga109/public-annotations). ## 1 Introduction The market for e-comics has been expanding rapidly with the popularization of digital devices. In 2022, e-comics made up 66.2% of the Japanese comics market [14]. The development of e-comics has led to increased interest in automated methods to analyses comics. To support these techniques, reliable large-scale dialogue datasets are required for improved computational understanding of comics. Additionally, automated methods to detect characters to whom text is attributed are necessary for effective speaker detection. An example of speaker detection is shown in Figure 1. Given an image from comics, the system can detect characters, text, and frame regions, and can then automatically predict the character to which the text is attributed. This research can be applied to various tasks, such as automatic character assignment for text-to-speech reading, automatic translation according to characters' personalities, inference of character relationships and stories, and automated generation of scenarios. Due to the lack of dialogue datasets and some limitations of existing annotations, we constructed more accurate and rich annotations, Manga109Dialog. To the best of our knowledge, this is the largest comics dialogue dataset ever created. We divided the test set into two subsets based on the prediction difficulty. To evaluate the reliability and accuracy of Manga109Dialog, we conducted speaker detection experiments on it. Previous approaches were primarily based on the rule that the character nearest to a given text is most likely to be the speaker [13, 1]. However, due to the unique and complex structure of comics, they may have difficulty making correct predictions in some complex cases. As an example, consider the text at bottom right in Figure 1. If we chose the character nearest to the text as the speaker, the boy would be incorrectly predicted as the speaker rather than the girl. 
Therefore, the relationship between characters and text should be considered to make a correct prediction. To address this challenge, we propose an innovative deep learning-based approach. We leverage scene graphs, which are widely recognized as one of the most popular methods for describing visual relationships. Scene graph generation (SGG) is the task of detecting objects and their relationships in an image. Since SGG models have performed well on real-world datasets, such as the Visual Genome dataset [8], we propose utilizing SGG models for the task of comics speaker detection. Along with the general framework of SGG models, we introduce frame information because we find that the speaker tends to appear in the same frame as the text. Traditional metrics used to evaluate SGG lack suitability because the amount of speaker-to-text pairs varies from page to page. Besides, prediction results using traditional metrics cannot be compared to those of rule-based methods. Therefore, we present an evaluation metric well suited for this specific task, and demonstrate its superiority over conventional approaches in experimental results on Manga109Dialog. The contributions are summarized as follows. * We constructed an annotation dataset of associations between speakers and texts. This is the largest comics dialogue dataset in the world, containing over 130,000 speaker-to-text pairs. * We propose a deep learning-based approach for comics speaker detection using SGG models. We enhance the results by introducing frame information in the relationship prediction stage. * We introduce a new evaluation metric and a new standard for data division, that is, dividing the dataset according to prediction difficulty. Experimental results showed that Manga109Dialog provides a challenging yet realistic benchmark for comics speaker detection, and that our proposed approach exhibited 5% better performance compared to conventional rule-based methods. This research aimed to show to what extent speaker detection can be achieved solely with visual information, providing a benchmark for incorporating NLP in future research. Figure 1: An example of speaker detection. ©Akamatsu Ken. Related Work ### Comics Speaker Dataset Among the many comics available in different forms, only a few can be used for academic research due to copyright issues. eBDtheque [3] is a dataset of comics available for direct use. It comprises 100 images from French, American, and Japanese comics, containing 1,550 characters, 1,092 speech balloons, and 887 speaker-to-text pairs. Moreover, a large-scale manga dataset called Manga109 [2] contains 109 Japanese comics, including 21,142 images, 2,979 characters, and 147,918 texts. Given the lack of available comics datasets associated with speakers and comics texts, Abe et al. [1] constructed a dataset based on Manga109 with a total of 121,364 annotated speaker-to-text pairs. ### Comics Speaker Detection Rule-Based Methods:Rigaud et al. [13] proposed early rule-based speaker prediction method using distance calculation between speech balloon centroids and character centroids. The character with the shortest distance was assumed to be the speaker. They performed experiments on eBDtheque and obtained an accuracy of 78.58%. In the study by Abe et al. on Manga109, four types of information were used [1], including the text-character distance, the characters in the same frame with the text, the direction of the tail of the speech balloon, and language style (such as first-person utterances). 
Their approach experimentally obtained an accuracy of 70.7%. Deep Learning-Based Methods:In recent years, deep learning-based methods have also been developed. Yamamoto et al. [20] used features extracted from the input images and metadata to calculate scores for each text-character pair, while other studies explored time-series learning [11] and Natural Language Processing [19]. The results of these studies on Manga109 all demonstrate the potential of learning-based methods in speaker prediction. However, their results were only based on a part of the dataset and lacked implementation details. Therefore, a large dataset is needed to evaluate these models under a unified standard. ### Scene Graph Generation (SGG) Scene graphs were introduced by Johnson et al. [6] as a data structure for describing objects in a scene. To generate a scene graph, an SGG model first detects object regions and their categories, then identifies relationships between objects. Since scene graphs provide rich semantic features, they can distinguish images or videos more accurately and describe them more precisely. Scene graphs have demonstrated success in various visual tasks, including image retrieval, image captioning, and visual question answering [6; 21; 17]. ## 3 Manga109Dialog Abe et al. [1] first created a comics speaker dataset based on the entire Manga109 dataset. Although their study was pioneering and impressive, this dataset still involves some notable limitations. First, they linked the name of the character to the text instead of the bounding box of the speaker. Therefore, we cannot specify a specific bounding box if the same character appears multiple times on a single page. Besides, since annotators prefer making annotations of simple relationships, some more complicated cases are not included in this dataset. Aiming to produce a more complete and applicable dataset for speaker detection in comics, we constructed a new dataset, Manga109Dialog. Instead of connecting the character name to the text, we connect the bounding box of the speaker to the text. The visual differences between Abe's dataset and Manga109Dialog are shown as Figure 2. ### Dataset Construction In this section, we show the details of how we constructed our dataset. Our annotations were mainly based on the following rules. * If the character to whom the text is attributed appears only one on the page, link the bounding box of the character to the text regardless of the position. * If more than one character is on the page, the bounding box of the speaker in the same frame as the text is linked in priority. If the speaker is not in the same frame, it is determined by the reading order (from top to bottom, right to left). The texts in the last frame of the page, however, are linked to the character in the second-to-last frame. * When there is more than one speaker, we link the text to all speakers. * Texts spoken by "Others" are annotated if we can specify the bounding box of the speaker, and are excluded otherwise. Although we make annotations on Manga109, not all texts are our annotation targets. For example, titles and descriptions were not annotated. Texts designed as "Narration" or with unknown speakers were also not annotated. Furthermore, in contrast to Abe's dataset, when the speaker was not on the page, such as texts from phone conversations or letters, we did not annotate them even if we knew the speaker. More details and examples of annotation rules are shown in the supplementary material. 
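To make the outcome of these rules concrete, one annotated speaker-to-text pair can be thought of as a record that links a text bounding box to one or more speaker bounding boxes on the same page. The snippet below is purely an illustrative sketch in Python; the field names and values are ours and do not necessarily match the released Manga109Dialog files.

```python
# Hypothetical (illustrative only) representation of a single speaker-to-text annotation.
annotation = {
    "book": "SomeTitle",      # placeholder for a Manga109 book title
    "page_index": 12,
    "text_box": {"id": "text_034", "xmin": 512, "ymin": 88, "xmax": 640, "ymax": 190},
    "speaker_boxes": [        # more than one entry when several characters speak the text
        {"id": "char_012", "xmin": 300, "ymin": 60, "xmax": 470, "ymax": 350},
    ],
}
```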
### Data Analysis We outsourced the annotation of our dataset to a company with expert annotators. After they completed the task, the first author confirmed and corrected their annotations. Creating the annotations took approximately three months. An overview of Manga109Dialog and Abe's annotation dataset is shown in Table 1. In order to compare the two datasets at the same level, we did an additional step to determine the bounding boxes of speakers. For cases where the same character appears multiple times on a page, we preferred the box in the same frame as the text; otherwise, we would choose the nearest one. From Table 1, we can find Abe annotated 9,830 images of Manga109, containing an average of 6.17 speaker-to-text pairs per page, whereas we annotated all pages. In Manga109Dialog, 9,904 images include speaker-to-text pairs, with 6.70 pairs per page. Moreover, we divided our annotations into _Easy_ and _Hard_ by the difficulty of prediction. If the speaker is in the same frame as the text, the text is considered _Easy_; otherwise, it is considered _Hard_. _Total_ contains all pairs of speakers and texts. An example of annotations of varying difficulties is \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & & & \multicolumn{4}{c}{Speaker-to-Text pairs} \\ \cline{3-7} Dataset & Annotated images & Texts & _Easy_ & _Hard_ & _Total_ & Pairs / page \\ \hline Manga109Dialog & 9,904 & 147,887 & 111,959 & 20,733 & 132,692 & 6.70 \\ Abe’s annotations & 9,830 & 147,887 & - & - & 121,291 & 6.17 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics for two datasets. Figure 2: Our dataset allows us to accurately determine whether the text on the right side is spoken by the character in the first frame or the character in the second frame, which Abe’s annotations were unable to do. ©Arai Satoshi. given in Figure 3. We performed experiments under these three difficulties to better evaluate our dataset in Section 5.3. ## 4 Approach Here, we present our proposed scene-graph-based approach for comics speaker prediction using Manga109Dialog. ### Problem setting Let us first define our problem setting. Given an input image \(I\), our task is twofold. First, we localize character regions and text regions. Next, for all combinations of characters and texts, we calculate the relationship score of each combination. The character with a higher score is more likely to be the speaker of the text. An example is shown in Figure 4. We represent each region as a bounding box \(\mathbf{b}=[x,y,w,h]^{\top}\in\mathbb{R}^{4}\). For each region, we predict an object label \(l\in\{\texttt{character},\texttt{text},\texttt{background}\}\). For the combination of character \(i\) and text \(j\), the relationship score between \(\mathbf{b}_{i}\) and \(\mathbf{b}_{j}\) is represented as \(r_{i\to j}\in\mathbb{R}\). This problem setting is equivalent to a standard SGG task, where the number of object classes is two (_character_ or _text_) and the number of relationship labels is one (_speak_). In addition, we provide a frame detector and a frame order estimator. We can predict a bounding box of a frame; \(\mathbf{p}=[x,y,w,h]^{\top}\in\mathbb{R}^{4}\). We can also obtain frame reading orders. We later use this information to enhance our system. ### Speaker Detection via SGG Introducing our approach in detail, we follow the pipeline of MOTIFS [22], the most representative SGG model. Its process can be divided into three stages. The first stage is object detection. 
Given an image \(I\), an object detector \(f_{\mathrm{detect}}\) outputs a set of \(N\) tuples. \[\{(\mathbf{b}_{i},\mathbf{f}_{i},\mathbf{l}_{i})\}_{i=1}^{N}=f_{\mathrm{detect }}(I). \tag{1}\] Each tuple consists of a bounding box \(\mathbf{b}_{i}\in\mathbb{R}^{4}\), a feature vector \(\mathbf{f}_{i}\in\mathbb{R}^{4096}\), and a vector of class label probabilities \(\mathbf{l}_{i}\in\mathbb{R}^{3}\). The feature vector \(\mathbf{f}_{i}\) encodes the visual information of the region \(\mathbf{b}_{i}\). \(\mathbf{l}_{i}\) represents the probability that the label is \(\{\texttt{character},\texttt{text},\texttt{background}\}\) respectively. Unlike MOTIFS, we use a Faster-RCNN [12] with a ResNeXt-101-FPN [9; 18] backbone for this step. Next, we fuse the detected feature vectors to produce a richer representation. We feed the set of the feature vectors and label probability vectors \(\{(\mathbf{f}_{i},\mathbf{l}_{i})\}_{i=1}^{N}\) into the fusion module \(f_{\mathrm{fuse}}\) to output enhanced features \(\{\mathbf{d}_{i}\}_{i=1}^{N}\). \[\{\mathbf{d}_{i}\}_{i=1}^{N}=f_{\mathrm{fuse}}(\{(\mathbf{f}_{i},\mathbf{l}_{i })\}_{i=1}^{N}). \tag{2}\] Here, \(\mathbf{d}_{i}\in\mathbb{R}^{512}\) is an enhanced representation for \(\mathbf{b}_{i}\). Following MOTIFS, we adopt bidirectional LSTM models [4] for this step. An additional LSTM is used to predict the object label \(l_{i}\) from \(\mathbf{l}_{i}\). Figure 3: An example of annotations on three prediction difficulties. ©Kurita Riku. In the last stage, we predict the relationship score for each combination of all \(N\) bounding boxes. Therefore, there are \(N^{2}\) types of predictions. For example, the score of \(\mathbf{b}_{i}\) and \(\mathbf{b}_{j}\) (\(r_{i\to j}\in\mathbb{R}\)) is computed as \[r_{i\to j}=w(i,j)g(\mathbf{d}_{i},\mathbf{d}_{j},\mathbf{f}_{i,j}). \tag{3}\] \[\mathbf{f}_{i,j}=f_{\mathrm{extract}}(\mathbf{b}_{i},\mathbf{b}_{j},\mathbf{f} _{i},\mathbf{f}_{j}). \tag{4}\] The function \(g\) inputs the RoI feature of the union box \(\mathbf{f}_{i,j}\) and enhanced features \(\mathbf{d}_{i},\mathbf{d}_{j}\) obtained from the previous stage and outputs a score. Here, \(g\) consists of simple learnable matrices and a Softmax predicate classifier. Note that the feature vector \(\mathbf{f}_{i,j}\) is extracted from bounding boxes \(\mathbf{b}_{i},\mathbf{b}_{j}\) and their feature vectors \(\mathbf{f}_{i},\mathbf{f}_{j}\). The weight function \(w(i,j)\in\mathbb{R}\) can be any function. If \(w(i,j)=1\), the entire pipeline is the same as that of MOTIFS. ### Use of Reading Order Due to the unique feature of comics that the speaker is more likely to appear in the same or next frame that the text belongs to, we add frame information to the SGG model. To do so, we train another Faster-RCNN model to detect the frame regions from the input image. Following Kovanen and Ikuta's study [7; 5], we can then estimate the reading order of the frames. Given a set of frame bounding boxes, we first decide whether they are horizontally divisible into two parts. If so, we split them and repeat the first step for each set until we cannot horizontally split them. Then, we decide whether each set is vertically divisible into two parts. If so, we do so and repeat the first step. When the set of frames cannot be divided horizontally and vertically, we number the frames according to reading order (from top to bottom, from right to left). 
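The recursive ordering procedure above can be sketched in a few lines. This is a minimal illustration under our own simplifying assumptions (axis-aligned \((x,y,w,h)\) boxes and a plain gap test for "divisible"), not the authors' released implementation.

```python
def order_frames(frames):
    """frames: list of (x, y, w, h) panel boxes; returns them in reading order
    (top to bottom, right to left) by recursive horizontal/vertical splitting."""
    if len(frames) <= 1:
        return list(frames)

    def try_split(boxes, axis):
        # axis=1: horizontal cut (gap in y); axis=0: vertical cut (gap in x)
        spans = sorted((b[axis], b[axis] + b[axis + 2]) for b in boxes)
        reach = spans[0][1]
        for start, end in spans[1:]:
            if start >= reach:  # a clean gap separates two groups of frames
                first = [b for b in boxes if b[axis] + b[axis + 2] <= start]
                second = [b for b in boxes if b not in first]
                return first, second
            reach = max(reach, end)
        return None

    horizontal = try_split(frames, 1)
    if horizontal:                        # the top part is read before the bottom part
        top, bottom = horizontal
        return order_frames(top) + order_frames(bottom)
    vertical = try_split(frames, 0)
    if vertical:                          # the right part is read before the left part
        left, right = vertical
        return order_frames(right) + order_frames(left)
    return sorted(frames, key=lambda b: (b[1], -b[0]))  # fallback: top-to-bottom, right-to-left
```

Each character or text box can then be assigned the index of the frame that contains (or best overlaps) it; this is the frame-order information consumed by the weight function introduced next.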
As shown in Figure 4, we can obtain the result in the form of \(\mathbf{p}_{1},\mathbf{p}_{2},\dots\), where \(\mathbf{p}_{i}\in\mathbb{R}^{4}\) is the bounding box of the \(i\)-th frame. We also show another example of frame detection and reading order estimation results in the supplementary material. For each object, we calculate to which frame it belongs and obtain the reading order. We empirically found that the model performed best when the weight function \(w(i,j)\) was in the form of Eq. 5: \[w(i,j)=\frac{1}{2+|k_{i}-k_{j}|}. \tag{5}\] Here, \(k_{i}\) represents the reading order of the frame \(\mathbf{p}\) to which \(\mathbf{b}_{i}\) belongs. The closer the reading orders of \(\mathbf{b}_{i}\), \(\mathbf{b}_{j}\) are, the higher the score of \(w(i,j)\). Therefore, we can steer the prediction output by introducing frame orders and changing the relationship scores. To summarize, the end-to-end pipeline we designed is as follows. Given \(I\), we obtain \(N\) tuples \((\mathbf{b}_{i},\mathbf{f}_{i},\mathbf{l}_{i},\mathbf{d}_{i})\) by running Eq. 1 and 2. In addition, we obtain frame information \(k_{i}\) through our frame detector. We then compute \(r_{i\to j}\) for all \(N^{2}\) possible combinations by Eq. 3. From the \(N^{2}\) scores, we select the top \(K\) items. Here, \(K\) is a hyper-parameter. We discuss how to select \(K\) in Section 4.4. After this, the model can generate a scene graph, consisting of object labels, their bounding boxes, and speaker-to-text pairs. Figure 4: Framework of the proposed method. ©Arai Satoshi. ### Evaluation Metrics The earliest and the most widely accepted evaluation metric of SGG models is Recall@K [10]. To calculate this, we first need to compose each subject-predicate-object set into a triplet. For each triplet, we compute the prediction score by multiplying the individual scores. We sort the scores and select the top \(K\) triplets. We then compute the triplet recalls. We consider a triplet a correct prediction when all three elements are labeled correctly. In classic SGG tasks, \(K\) is usually predefined as 20, 50, or 100. However, since the number of texts fluctuates significantly from image to image, using a fixed \(K\) value does not provide a fair comparison. Take Figure 5 as an example. When a large \(K\) is fixed, Recall@K can cover almost all major combinations of objects if the number of texts is small, which makes the accuracy very close to 100% (left side of Figure 5). When a small \(K\) is fixed, the score is always low if there are many texts (right side in Figure 5). To compare our proposed method with rule-based methods more appropriately, we set \(K=\) #text and introduce a new evaluation metric called Recall@(#text), which is the recall of selected predictions that can cover all texts on the page. For each text, we choose the triplet containing it with the highest score as the prediction. For the cases where there exist \(N\) speakers for a single text, we select the top \(N\) triplets as predictions. We use Recall@(#text) to evaluate the experimental results in Section 5.3. ## 5 Experiments ### Experimental Setup Baseline: The simplest rule-based method is "shortest distance", where we assume the text is spoken by the closest character. A slightly modified version is described as "frame distance". We prioritize characters in the same frame as the text. If there are no characters in the frame, we make predictions using the same rule as "shortest distance".
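For reference, the two rule-based baselines can be written down in a few lines. This is a hedged sketch, assuming boxes are given as \((x,y,w,h)\) and that every box has already been assigned a frame index; the helper names are our own, not those of the original implementation.

```python
import numpy as np

def center(box):
    x, y, w, h = box
    return np.array([x + w / 2.0, y + h / 2.0])

def predict_speaker(text_box, char_boxes, text_frame=None, char_frames=None, use_frame=False):
    """use_frame=False -> "shortest distance"; use_frame=True -> "frame distance"."""
    cands = list(range(len(char_boxes)))
    if use_frame and text_frame is not None and char_frames is not None:
        same = [i for i in cands if char_frames[i] == text_frame]
        if same:                # prefer characters sharing the text's frame
            cands = same
    dists = [np.linalg.norm(center(char_boxes[i]) - center(text_box)) for i in cands]
    return cands[int(np.argmin(dists))]
```

Setting `use_frame=True` reproduces the "frame distance" preference for characters in the same frame as the text, and falls back to the plain nearest-centroid rule otherwise.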
In these two approaches, the number of predictions is the same as the number of texts. Therefore, there is no need to select the top \(K\) predictions. Besides, we proposed a deep learning-based baseline, where the relationship score was calculated only by feeding the union feature \(\textbf{f}_{i,j}\) obtained from Eq. 4 into a fully connected layer. Datasets: We divided Manga109Dialog into training and test sets with ratios of 70% and 30%, respectively. Furthermore, we divided the annotations into the three levels of difficulty mentioned in Section 3.2. Besides, we tested our model on Abe's dataset. Since the annotation in Abe's dataset could not identify the particular bounding box of the speaker, we did an additional step and determined it by "frame distance". Figure 5: Comparison of using Recall@K and Recall@(#text). ©Arai Satoshi. ©Akamatsu Ken. Evaluation: In the task of SGG, the performance of the models can be evaluated in three protocols. (1) Predicate Classification (PredCls): predicting the relationship between two objects (\(\{r_{i\to j}\}\)), given an image (\(I\)), object bounding boxes (\(\{\mathbf{b}_{i}\}\)), and object labels (\(\{l_{i}\}\)). (2) Scene Graph Classification (SGCls): predicting object labels and relationships, given an image and bounding boxes. (3) Scene Graph Detection (SGDet): detecting object bounding boxes and predicting their labels and relationships to generate an entire scene graph using only the given image. We executed the model for these three tasks to evaluate our proposed method. ### Implementation Details Because object detection is the most time-consuming step in speaker prediction, we pre-trained a Faster-RCNN model as our object detector. Following the previous studies in SGG [16; 15], we applied the Faster-RCNN with a ResNeXt-101-FPN backbone. We then froze the model as the object detector of our SGG models. Additionally, we pre-trained another Faster-RCNN as the frame detector. The Mean Average Precision (mAP) of the two models reached 86.09% and 96.48%, respectively. We present the details and results of our pre-trained object detector and frame detector in our supplementary material. We trained our models on a single NVIDIA A100 GPU (60 GiB), with a batch size of 4. We used SGD as an optimizer and set the initial learning rate to \(4\times 10^{-3}\). In the SGG stage, we trained our SGG model on a single NVIDIA A100 GPU (60 GiB), with a batch size of 4. We optimized the model using SGD with an initial learning rate of \(4\times 10^{-2}\). Our loss was the sum of the cross entropy for objects and relationships. ### Quantitative Results We ran our model under three tasks (PredCls, SGCls, SGDet). Because our study focuses on speaker prediction, we only show the results under PredCls in Figure 6, meaning that the model only needs to find speaker-to-text pairs. See the supplementary material for the results under SGCls and SGDet. From Figure 6, we can observe a significant improvement in our approach compared to the rule-based approach, especially on _Hard_. That was because the speaker in _Hard_ is not in the same frame as the text, while "frame distance" gives preference to the character in the same frame. Therefore, whenever other characters are in the frame to which the text belongs, "frame distance" is 100% likely to be wrong. Besides, the model using reading order information performed better than that without it. In addition to our dataset, we also tested our model on Abe's dataset under the task of PredCls.
The results can be found in our supplementary material. ### Qualitative Results To better understand the generated scene graphs visually, we visualized the predictions. Figure 7 shows Recall@(#text) under PredCls. The green lines represent correct predictions, and the red lines represent incorrect predictions. On _Easy_, our model outperformed the "frame distance" method regardless of whether frame reading orders were used. For texts in _Easy_, if the speaker was not the closest character to the text, rule-based methods are certain to make an incorrect prediction, as shown by the red lines. Figure 6: Recall@(#text) for PredCls. Moreover, the results on _Hard_ show that the proposed method of introducing frame orders exhibited a high accuracy in prediction and was able to handle more challenging cases. We show some additional examples in the supplementary material. ### Challenges and Future Work Since speaker prediction is somewhat subjective, more annotators are needed to enhance the reliability and validity of Manga109Dialog and enable it to be used more effectively for research and development in the field of comics analysis. Besides, although our proposed method demonstrated significant improvements compared to conventional methods, there were still some cases where predictions failed. Two situations were primarily challenging for the machine. The first case was that the speaker was not in the closest position to the text. Though our model can cope with this situation to some degree, it was not always able to make correct predictions. The other case was that texts spoken by different characters appear alternately. Because no visual information can be used for prediction, this may be relatively difficult even for humans. The straightforward way to handle these cases would be to introduce Natural Language Processing (NLP) models. However, this research aimed to show to what extent speaker detection can be achieved only with visual information. The results of the present study provide a baseline needed to incorporate NLP in future research. ## 6 Conclusion In this study, we have constructed a large-scale dialogue dataset called Manga109Dialog based on the shortcomings of existing annotations. We have presented a novel approach that applies SGG models to comics speaker detection, thereby creating a benchmark for deep learning-based methods where none existed before. Due to comics' unique features, we have introduced frame reading order to help predict the speaker. To properly evaluate the experimental results, we have further proposed a new evaluation metric and tested our model on different test sets. The results indicate the reliability of Manga109Dialog, as well as the superior performance of our deep learning-based approach over existing rule-based methods. These findings highlight the potential of scene-graph-based methods in the field of comics speaker detection. We believe our dataset and model will have broader applicability in the digital processing of comics. Our work lays the groundwork for further investigations into the effective combination of SGG models and other techniques such as NLP, offering new insights for future research in this area. Figure 7: Examples of predictions made by rule-based method and proposed method. The green lines represent correct predictions, while the red lines represent incorrect predictions.
2310.05953
Classification of Spam URLs Using Machine Learning Approaches
The Internet is used by billions of users every day because it offers fast and free communication tools and platforms. Nevertheless, with this significant increase in usage, huge amounts of spam are generated every second, which wastes internet resources and, more importantly, users' time. This study investigates the use of machine learning models to classify URLs as spam or nonspam. We first extract the features from the URL as it has only one feature, and then we compare the performance of several models, including k nearest neighbors, bagging, random forest, logistic regression, and others. Experimental results demonstrate that bagging outperformed other models and achieved the highest accuracy of 98.64%. In addition, bagging outperformed the current state-of-the-art approaches which emphasize its effectiveness in addressing spam-related challenges on the Internet. This suggests that bagging is a promising approach for URL spam classification.
Omar Husni Odeh, Anas Arram, Murad Njoum
2023-09-10T16:15:09Z
http://arxiv.org/abs/2310.05953v2
# Classification of Spam URLs Using Machine Learning Approaches ###### Abstract The Internet is used by billions of users every day because it offers fast and free communication tools and platforms. Nevertheless, with this significant increase in usage, huge amounts of spam are generated every second, which wastes internet resources and, more importantly, users' time. This study investigates the use of machine learning models to classify URLs as spam or non-spam. We first extract the features from the URL as it has only one feature, and then we compare the performance of several models, including k-nearest neighbors, bagging, random forest, logistic regression, and others. Experimental results demonstrate that bagging outperformed other models and achieved the highest accuracy of 98.64%. In addition, bagging outperformed the current state-of-the-art approaches, which emphasizes its effectiveness in addressing spam-related challenges on the Internet. This suggests that bagging is a promising approach for URL spam classification. Spam, URL, dataset, machine learning, model, KNeighbors, bagging, random forest, logistic regression, classifier ## I Introduction The Internet is an open space for everyone to freely create content, publish it, and share it with others. In the last decade, internet access has increased tremendously. This increase in audience came with some side effects. Too many ads and too much spam are being shared everywhere, whether over email or any other type of social media. Platforms like email clients, which are used by hundreds of millions of users every day, struggle to effectively filter the content shown to the end user [1, 2]. Blacklisting is one of the common methods used to identify malicious URLs. Although blacklisting has been effective for many URLs, the rapid increase in these URLs makes it an insufficient method. Therefore, machine learning techniques have been proposed to address this issue [3]. These techniques can detect malicious websites, even if they have never been encountered before. In the context of this discussion, machine learning (ML) refers to the field of computer algorithms that autonomously enhance their performance through experiential learning and data analysis [4, 5]. Because ML models possess the capability to comprehend the underlying structural patterns within URLs, they provide more insightful methods for classifying URLs [6, 7, 8]. In this study, we investigate the use of machine learning models to classify URLs that are most likely spam. The model's input and output are straightforward: it takes a URL and classifies it as spam or not. Since we are taking only one feature as an input, we need to analyze and extend this feature to extract more information about the URL and determine its characteristics [9, 10]. In addition, we build multiple machine learning models with different parameters to evaluate different options, and finally select the model that achieves the highest possible accuracy. The paper is organized into five sections. Section 2 covers related work on spam URLs. Section 3 presents the proposed ML models, while Section 4 discusses the obtained results. Finally, Section 5 concludes the paper. ## II Literature review URL spam detection is a relatively young field that has attracted solid interest from both organizations and researchers. It has received more and more attention due to the rapid evolution and expansion of the internet over the past few years.
Previous work on this topic has involved analysis of the URL and the page itself. Oshingbesan _et al_, 2023, [11] examined different ML models to classify malicious websites across different datasets. From their results, the K-nearest neighbors model performs best in classifying malicious URLs. On the other hand, other models like random forest, decision trees, logistic regression, and support vector machines outperform the baseline models across all datasets. Murat Koca _et al_, 2022, [12] investigated the use of different ML models for classifying the URLs. These models include Logistic Regression, Neural Networks, and multiple Naive Bayes algorithms. The results showed that the Naive Bayes model performed noticeably better than both the logistic regression and neural network approaches across all tested datasets. Gyongyi and Garcia-Molina, 2005, [9], attempted to classify web spam into smaller buckets, such as URL spam, redirection, and keywords stuffed in the link. While splitting and categorizing the spam into specific buckets would likely improve the classifier's ability to detect spam, their paper focused on building a general classifier for all different types of spam. Ntoulas, Najork, Manasse, and Fetterly, 2006, [13], studied and analyzed the content of the page itself, which typically included creating and extracting features from the HTML structure of the page, JavaScript, and links, such as the number of words on the page, the average length of words, and the number of words in both title and body. Other feature extraction methods involved looking at the percentage of hidden content, which is not visible to the user who is browsing a specific page. Egele, Kolbitsch, and Platzer, 2009, [14], had another approach which starts by determining the important features in terms of their rank in a search engine and then finds the features that are most likely to be used by spammers. The problem of this approach is that it is infeasible to enumerate all ranking elements, and thus, some important features may be missed. All their models are based on SVMs (Boser, Guyon, and Vapnik, 1992, [15]), which are known to perform well in classification tasks (Joachims, 1998, [16, 17]). Their evaluation used the standard area under ROC curve metric over a K-fold cross-validation where K=10, and they used tools provided by libsvm (Chang and Lin, 2001, [18]). The feature selection was made using frequency count over an entire sample set. All their charts were plotted only for feature sizes of less than 1,000. Larger feature sizes did not significantly improve results. ## III Dataset and features ### _Data description_ To train our models, we are using the URL - Spam or Not Spam - classification dataset, Dec. 2021, [19]. The dataset contains about 148.3K URLs, of which one-third are flagged as spam URLs and the rest are not spam. It can be used to create a binary classification model. The dataset was created by The Pudding [20]. The dataset links were found in different newsletters. Their flagging system identifies whether a link is spam or not by parsing links from over 100 newsletters every 30 minutes. A link is programmatically flagged if it appears more than three times in a single newsletter or contains a likely subscribe/unsubscribe URL. ### _Features Extraction_ The dataset has only one input feature, which is the URL itself.
To make the best use of these URLs we need to extract as many features as we can to understand the characteristics of the URL in order to identify patterns which can be useful for the machine learning models later on. Below is the list of the most important extracted features with a brief description: \begin{table} \begin{tabular}{|l|l|} \hline Feature name & Description \\ \hline url\_length & The number of characters. \\ \hline has\_subscribe & Whether the URL has the word subscribe. \\ \hline contains\_hash & Whether the URL has the hash character. \\ \hline num\_digits & The number of digits in the URL. \\ \hline non\_htps & Whether the connection is not secure (non-HTTPS). \\ \hline num\_words & Number of words in the URL. \\ \hline entropy & The measure of disorder/uncertainty. \\ \hline num\_params & The number of query parameters. \\ \hline num\_fragments & Number of fragments in the URL. \\ \hline num\_subdomains & The number of sub domains. \\ \hline num\_\%20 & Number of encoded white spaces. \\ \hline num\_@ & Number of @ in the URL. \\ \hline has\_ip & Whether the host is an IP address rather than a FQDN. \\ \hline \end{tabular} \end{table} TABLE I: Dataset new features ### _Exploratory data analysis_ Two-thirds of the used dataset are not spam URLs, which is more than 100k URLs. Spam URLs represent 32% of the data, which is about 48k URLs. Figure 1 represents the distribution of spam vs. non-spam URLs. The URL length analysis shows that most of the spam URLs have a length of less than 100 characters, as shown in the length histogram (Figure 2). Another important piece of information extracted from these URLs is that the URLs that contain the word subscribe are most likely spam. About 3% of the input URLs have the word subscribe, and almost all of them are spam. In addition, the number of words in the spam URLs is less than 5 in the majority of the data, unlike the non-spam URLs, which are distributed over a wider range, as shown in Figure 3. About 2.08% of the URLs do not use the secure HTTPS protocol; 1.25% of all URLs are both non-HTTPS and spam, which is more than 60% of the HTTP URLs, even though only 33% of the total data is flagged as spam. This indicates a usable feature for the model, as shown in Figure 4. Fig 1: Spam URL distributions Fig 2: URLs length by Spam / Non spam Fig 3: URL has the word subscribe Fig 4: URLs number of words by Spam / Non spam ## IV Methods The following subsections describe the machine learning models used in this study to classify spam URLs. ### _Logistic regression_ Even though it has the word regression in its name, logistic regression is a classification model rather than a regression model. It is a simple and efficient method, especially for binary and linear classification problems. It is a model that is very easy to implement, and it achieves very good performance with linearly separable classes. It is an extensively used algorithm for classification. The logistic regression model is a statistical method for binary classification which can also be generalized to multiclass classification. Scikit-learn has a highly optimized implementation of logistic regression, which supports multiclass classification tasks [21]. ### _Random Forest Classifier_ The random forest classifier [21] is an ensemble method that trains multiple decision trees in parallel with bootstrapping, which is followed by aggregation. The bootstrapping indicates that the different individual decision trees are trained concurrently on multiple subsets of the training dataset using different subsets of the available features.
The bootstrapping will ensure that every individual decision tree is unique, and that reduces the overall variance of the random forest classifier. For the final decision step, the random forest classifier aggregates the decisions of all the individual trees; then, the classifier will exhibit good generalization. Random forest classifier tends to outperform most other classification methods in accuracy without having issues of overfitting. The random forest classifier doesn't require the feature scaling process. Even though a random forest classifier is harder to interpret, it's easier to tune the hyperparameter when we compare it to a decision tree classifier. The general figure of random forest is represented in Figure 1. ### _Multi layer perceptron (MLP)_ MLP can be viewed as a supplement of feed-forward neural network, [22], which consists of three different types of layers which are; the input, output, and hidden layers. The input layer received the input signal for processing. The prediction and classification are performed by the output layer. A number of hidden layers that are placed between the input and the output layer are the core engine of the MLP. Like a feed-forward network, the data flows in a forward direction from the input to the output layer. The neurons in the MLP are trained with the backpropagation learning algorithm. MLPs can solve problems that are not linearly separable. The major use cases of Multi-layer perceptron are pattern recognition, classification, approximation, and prediction. The computations in MPL take place at every neuron in the output and hidden layer [11]. Figure 5 shows the input, hidden and output layers of a MLP. ### _Gradient Boosting Classifier_ Gradient Boosting is a machine learning algorithm, that's used for both classification and regression problems. This classifier works on the idea that multiple weak learners can collaborate together and make a more accurate predictor. Gradient boosting classifier works by building simpler and weak prediction models sequentially where each model will try to predict the error left from the previous model. That's why this algorithm tends to over-fit quickly [23]. ### _Decision Tree Classifier_ A decision tree is a versatile machine learning algorithm used for classification and regression tasks. It operates by splitting a dataset into smaller, manageable subsets while simultaneously developing a tree-like model of decisions. Each node in the tree represents a decision point, leading to branches and ultimately to leaf nodes that signify outcomes. The simplicity and visual interpretability of decision trees make them easily comprehensible, ideal for practical decision-making scenarios. However, they can be prone to overfitting, especially with complex datasets. To mitigate this, techniques like pruning are employed. Decision trees also lay the groundwork for advanced ensemble methods such as Random Forests and Gradient Boosted Trees, enhancing predictive performance and robustness [24][25]. ### _Other methods_ In addition to the method described previously, we used plenty of other methods to build and train models using the input dataset. These methods are the K-Neighbors classifier [26], ADA boosting classifier [27], Bagging classifier [28], Stacking classifier [29] and Naive Bayes classifiers (Bernoulli and Multinomial) [30]. All the nine mentioned methods went through the same process from model evaluation and finally selecting the best model. 
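To illustrate how the extracted features of Table I feed the ensemble models described above, the following sketch derives a handful of the features directly from the URL string and fits a scikit-learn bagging classifier. It is only a sketch: the feature heuristics, estimator count, and helper names are our own choices and are not taken from the paper's implementation.

```python
import math
import re
from collections import Counter
from urllib.parse import urlparse

import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score, train_test_split

def url_entropy(u):
    counts = Counter(u)
    return -sum(c / len(u) * math.log2(c / len(u)) for c in counts.values())

def extract_features(url):
    parsed = urlparse(url)
    return {
        "url_length": len(url),
        "has_subscribe": int("subscribe" in url.lower()),
        "contains_hash": int("#" in url),
        "num_digits": sum(ch.isdigit() for ch in url),
        "non_https": int(parsed.scheme != "https"),
        "num_words": len(re.split(r"\W+", url)),
        "entropy": url_entropy(url),
        "num_params": url.count("&") + int("?" in url),        # rough query-parameter count
        "num_subdomains": max(parsed.netloc.count(".") - 1, 0),
        "num_at": url.count("@"),
    }

def train_bagging(urls, labels):
    """urls: list of URL strings; labels: 0/1 spam flags loaded from the dataset."""
    X = pd.DataFrame([extract_features(u) for u in urls])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = BaggingClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    print("10-fold CV accuracy:", cross_val_score(clf, X, labels, cv=10).mean())
    return clf
```

The 80/20 split and 10-fold cross-validation in the sketch mirror the evaluation protocol reported in the next section.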
## V Results and discussion Our research utilized a dataset of approximately 148.3K URLs, with one-third categorized as spam and the remaining as non-spam. For model evaluation and training, the data was partitioned with \(20\,\mathrm{\char 37}\) allocated for testing and \(80\,\mathrm{\char 37}\) for training. Fig 5: Multi layer perceptron In optimizing the classifiers, hyperparameters were fine-tuned through a random search methodology, balancing computational efficiency with the likelihood of securing optimal hyperparameter combinations. ### _Comparison between all the proposed ML models_ For assessing classifier performance, the AUC and ROC curves were employed, As Shown in Figure 7, The Random Forest, Bagging, and Stacking classifiers each achieved a perfect AUC score of 1. Conversely, the Multinomial classifier Fig 6: ROC and AUC curves for various classifiers. and the Bernoulli classifier registered AUC scores of 0.85 and 0.8, respectively. Further insights were derived from the confusion matrices, As Shown in Figure 8, which underscored Bagging as the predominant classifier in terms of precision and recall, closely followed by Stacking. Figure 8 serves as a pivotal component of our analysis, providing a comprehensive view of twelve confusion matrices generated by various classification methods. These matrices allow for a thorough evaluation of classifier performance based on their false positive and false negative rates. Among these classifiers, the Bagging Classifier emerges as the top performer, displaying a commendable equilibrium between false positives (166) and false negatives (277). This balance underscores the classifier's effectiveness in correctly classifying instances while minimizing the occurrence of misclassifications. In contrast, the Bernoulli Naive Bayes (BernoulliNB) Classifier demonstrates the least favorable performance, with an alarmingly high false positive count of 6583 and an elevated false negative count of 677. This classifier faces notable challenges in achieving accurate classifications, resulting in a substantial number of both false positives and false negatives. Similarly, the Logistic Regression Classifier exhibits suboptimal performance, with a relatively high false positive count of 3251 and a substantial false negative count of 2330. These findings emphasize the crucial role of careful classifier selection and potential fine-tuning in achieving robust classification results. The Bagging Classifier, AS Shown in II was particularly noteworthy with an accuracy of \(98.64\,\mathrm{\char 37}\) and a 10 K-fold validation score of \(97.93\,\mathrm{\char 37}\). Its performance metrics, including precision, recall, and F1 score, further accentuated its dominance. The Stacking Classifier, though impressive, was slightly behind with an accuracy of \(98.45\,\mathrm{\char 37}\) and a 10 K-fold validation score of \(97.82\,\mathrm{\char 37}\). Despite the perfect AUC score for the Random Forest, its accuracy was \(97.55\,\mathrm{\char 37}\) with a 10 K-fold score of \(96.87\,\mathrm{\char 37}\) On the flip side, the MultinomialNB and BernoulliNB classifiers lagged, with respective accuracies of \(73.05\,\mathrm{\char 37}\) and \(77.75\,\mathrm{\char 37}\). ### _Comparison with the state-of-the-art methods_ In this section, we compare the performance of the best-tested ML model against the most related models in the literature. As shown in Table III, many studies on spam URL detection using machine learning have highlighted significant advancements. 
The results presented in [31][32][33] demonstrated high effectiveness with models like DistilBERT and Random Forest, achieving accuracies ranging from 93.77% to 97.39%. In our study, we outperformed these benchmarks, achieving an accuracy of 98.64% using a Bagging Classifier. These findings underscore the evolving efficacy of diverse machine learning techniques in addressing spam URLs, emphasizing the imperative for ongoing innovation in cybersecurity. In summary, while many classifiers demonstrated commendable efficacy in distinguishing between spam and non-spam URLs, the Bagging classifier, as indicated by the presented metrics, emerge as top contenders for practical applications. ## VI Conclusion To sum up the work done and discussed in this paper, we utilized a dataset that classifies URLs as spam or not spam, analyzed the data, and extracted multiple features. Then we trained various machine learning models using this dataset. For each model, we tuned the hyperparameters and cross-validated the results. The outcomes showed several models with accuracy higher than 90 For a future work, we would like to train the data sets on more models like deep learning models, and we also would like to extract more features from the website of the URL itself such as the body size of the web page and features related to the script used as well. ## References * [1] W. Zhongtao, P. Xin, W. Yuling, L. Yaohua, H. Li, and C. Biao, "Analysis on the characteristics of url spam," vol. 1, 03 2012. * [2] A. Arram, H. Mousa, and A. Zainal, "Spam detection using hybrid artificial neural network and genetic algorithm," in _2013 13th International Conference on Intelligent Systems Design and Applications_, 2013, pp. 336-340. * [3] F. Vanhoershoven, G. Napoles, R. Falcon, K. Vanhoof, and M. Koppen, "Detecting malicious urts using machine learning techniques," in _2016 IEEE Symposium Series on Computational Intelligence (SSCI)_. IEEE, 2016, pp. 1-8. * [4] M. I. Jordan and T. M. Mitchell, "Machine learning: Trends, perspectives, and prospects," _Science_, vol. 349, no. 6245, pp. 255-260, 2015. * [5] A. Arram, M. Ayob, and A. Sulaiman, "Hybrid bird mating optimizer with single-based algorithms for combinatorial optimization problems," _IEEE Access_, vol. 9, pp. 115 972-115 989, 2021. * [6] M. Anjaneyulu, B. Madhuravani, and P. Devika, "Detection of malicious websites using machine learning approach and web vulnerability scanner," in _AIP Conference Proceedings_, vol. 2492, no. 1. AIP Publishing, 2023.
2301.13870
Symmetry constraints and spectral crossing in a Mott insulator with Green's function zeros
Lattice symmetries are central to the characterization of electronic topology. Recently, it was shown that Green's function eigenvectors form a representation of the space group. This formulation has allowed the identification of gapless topological states even when quasiparticles are absent. Here we demonstrate the profundity of the framework in the extreme case, when interactions lead to a Mott insulator, through a solvable model with long-range interactions. We find that both Mott poles and zeros are subject to the symmetry constraints, and relate the symmetry-enforced spectral crossings to degeneracies of the original non-interacting eigenstates. Our results lead to new understandings of topological quantum materials and highlight the utility of interacting Green's functions toward their symmetry-based design.
Chandan Setty, Shouvik Sur, Lei Chen, Fang Xie, Haoyu Hu, Silke Paschen, Jennifer Cano, Qimiao Si
2023-01-31T18:59:52Z
http://arxiv.org/abs/2301.13870v2
# Symmetry constraints and spectral crossing in a Mott insulator ###### Abstract Lattice symmetries are central to the characterization of electronic topology. Recently, it was shown that Green's function eigenvectors form a representation of the space group. This formulation has allowed the identification of gapless topological states even when quasiparticles are absent. Here we demonstrate the profundity of the framework in the extreme case, when interactions lead to a Mott insulator, through a solvable model with long-range interactions. We find that both Mott poles and zeros are subject to the symmetry constraints, and relate the symmetry-enforced spectral crossings to degeneracies of the original non-interacting eigenstates. Our results lead to new understandings of topological quantum materials and highlight the utility of interacting Green's functions toward their symmetry-based design. **Introduction:** In band theory of non-interacting topological semimetals, lattice symmetries act as indicators of topology and have been widely exploited in identifying novel topological materials [1; 2; 3; 4; 5; 6; 7]. The effects of interactions in topological semimetals are typically analyzed perturbatively [8; 9; 10; 11; 12; 13; 14; 15]. To address the interplay between strong correlations and topology, however, non-perturbative approaches to the interactions are required. Whether and how symmetry constraints operate is _a priori_ unclear. Recently, a group that includes several of us have shown that the Green's function eigenvectors form a representation of the space group [16], in parallel to the Bloch functions of the non-interacting settings [7]. Symmetry enforced or protected degeneracies then respectively follow when the dimensionality of irreducible representation is greater than one at a given high symmetry point, or when two irreducible representations with distinct symmetry eigenvalues cross along a high symmetry line. This formulation was applied to the case of a multi-channel Kondo lattice, which features dispersive modes with fractionalized electronic excitations. The eigenvectors of the Green's function were used to define degeneracies by locating spectral crossings [16]. The approach also provided the theoretical basis for the robustness [17] of Kondo-driven Weyl semimetals [18; 19; 20]. The extreme form of correlation effects occurs when the interactions drive a metal into a Mott localized state. It is an intriguing question as to what role topological nodes of the non-interacting limit may have in Mott insulators [21]. Along this direction, determining how symmetry constraints operate in a Mott insulator represents an outstanding open question. One of the important features of a Mott insulator is that it can have Green's function poles and zeros, both of which contribute to the Luttinger count of electronic states [22]. Does symmetry constrain both features? In this work, we address the symmetry constraints of a Mott insulator using the Green's function approach [16]. To be specific, we present our analysis on a lattice model in which the non-interacting Hamiltonian has symmetry-enforced Dirac nodes, though we expect our results to be valid more generally. Importantly, the symmetry constrains the Green's functions at all frequencies and the degeneracies at the high symmetry wavevectors appear in the form of spectral crossings; in particular, we find that this operates on both Green's function poles and zeros. Our qualitative results are illustrated in Fig. 
1: the spectral crossings of the Green's function poles [(c)] and Green's function zeros [(d)] appear as the wavevector moves [(b)] towards the high symmetry wavevector \(P\); this captures the degeneracy of the Green's function eigenvectors at \(P\), where the Bloch functions of the non-interacting counterpart are degenerate [(a), top panel]. These results give rise to new understandings of topological quantum materials and set the stage for systematic analysis of the topology of Mott insulators. **Interacting square net lattice and solution method:** We consider a two-dimensional (2D) square net lattice, as illustrated in Fig. 2. Here, the non-interacting bands contain symmetry-enforced Dirac crossings at the \(X\) and \(M\) points in the Brillouin zone (Fig. 1(a), bottom panel) [23]. We focus on interactions that are local in momentum, analogous to those appearing in the Hatsugai-Kohmoto (HK) model [24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. This form of interaction can be solved exactly [see the Supplementary Material (SM), Sec. C], which facilitates the understanding of not only the symmetry-enforced spectral crossing but also the symmetry constraints on dispersive poles and zeros, as we do below. The Hamiltonian of a 2D square net lattice (Fig. 2) in the orbital basis \(\Lambda_{\mathbf{k}}^{\dagger}\equiv(c^{\dagger}_{A,\uparrow},c^{\dagger}_{A,\downarrow},c^{\dagger}_{B,\uparrow},c^{\dagger}_{B,\downarrow})_{\mathbf{k}}\) takes the
2302.14497
An active-set method for sparse approximations. Part II: General piecewise-linear terms
In this paper we present an efficient active-set method for the solution of convex quadratic programming problems with general piecewise-linear terms in the objective, with applications to sparse approximations and risk-minimization. The method exploits the structure of the piecewise-linear terms appearing in the objective in order to significantly reduce its memory requirements, and thus improve its efficiency. We showcase the robustness of the proposed solver on a variety of problems arising in risk-averse portfolio selection, quantile regression, and binary classification via linear support vector machines. We provide computational evidence to demonstrate, on real-world datasets, the ability of the solver to efficiently handle a variety of problems, by comparing it against an efficient general-purpose interior point solver as well as a state-of-the-art alternating direction method of multipliers. This work complements the accompanying paper [``An active-set method for sparse approximations. Part I: Separable $\ell_1$ terms", S. Pougkakiotis, J. Gondzio, D. S. Kalogerias], in which we discuss the case of separable $\ell_1$ terms, analyze the convergence, and propose general-purpose preconditioning strategies for the solution of its associated linear systems.
Spyridon Pougkakiotis, Jacek Gondzio, Dionysios S. Kalogerias
2023-02-28T11:26:10Z
http://arxiv.org/abs/2302.14497v1
# An active-set method for sparse approximations. Part II: General piecewise-linear terms ###### Abstract In this paper we present an efficient active-set method for the solution of convex quadratic programming problems with general piecewise-linear terms in the objective, with applications to sparse approximations and risk-minimization. The method exploits the structure of the piecewise-linear terms appearing in the objective in order to significantly reduce its memory requirements, and thus improve its efficiency. We showcase the robustness of the proposed solver on a variety of problems arising in risk-averse portfolio selection, quantile regression, and binary classification via linear support vector machines. We provide computational evidence to demonstrate, on real-world datasets, the ability of the solver to efficiently handle a variety of problems, by comparing it against an efficient general-purpose interior point solver as well as a state-of-the-art alternating direction method of multipliers. This work complements the accompanying paper ["_An active-set method for sparse approximations. Part I: Separable \(\ell_{1}\) terms_", _S. Pougkakiotis, J. Gondzio, D. S. Kalogerias_], in which we discuss the case of separable \(\ell_{1}\) terms, analyze the convergence, and propose general-purpose preconditioning strategies for the solution of its associated linear systems. ## 1 Introduction In this paper we are interested in the solution of the following optimization problem \[\min_{x\in\mathbb{R}^{n}} \left\{c^{\top}x+\frac{1}{2}x^{\top}Qx+\sum_{i=1}^{l}\left((Cx+d)_{i}\right)_{+}+\|Dx\|_{1}+\delta_{\mathcal{K}}(x)\right\},\] (1.1) s.t. \[Ax=b,\] where \(c\in\mathbb{R}^{n}\), \(Q\in\mathbb{R}^{n\times n}\) is a positive semi-definite matrix, \(C\in\mathbb{R}^{l\times n}\), \(d\in\mathbb{R}^{l}\), \(A\in\mathbb{R}^{m\times n}\) is a linear constraint matrix with \(b\in\mathbb{R}^{m}\) a given right-hand side, \(D\in\mathbb{R}^{n\times n}\) is a diagonal positive semi-definite ("weight") matrix, and \(\mathcal{K}\) is a closed convex set \(\mathcal{K}\coloneqq\{x\in\mathbb{R}^{n}\colon x\in[a_{l},a_{u}]\}\) with \(a_{l},\ a_{u}\in\mathbb{R}^{n}\), such that \((a_{l})_{i}\in\mathbb{R}\cup\{-\infty\},\ (a_{u})_{i}\in\mathbb{R}\cup\{+\infty\}\) for all \(i=1,\ldots,n\). Additionally, \((\cdot)_{+}\equiv\max\{\cdot,0\}\), while \(\delta_{\mathcal{K}}(\cdot)\) is an indicator function for the set \(\mathcal{K}\), that is, \(\delta_{\mathcal{K}}(x)=0\) if \(x\in\mathcal{K}\) and \(\delta_{\mathcal{K}}(x)=\infty\), otherwise. We introduce an auxiliary variable \(w\in\mathbb{R}^{l}\), and reformulate (1.1) in the following form: \[\min_{(x,w)\in\ \mathbb{R}^{n}\times\mathbb{R}^{l}} \left\{c^{\top}x+\frac{1}{2}x^{\top}Qx+\sum_{i=1}^{l}\left(w_{i}\right)_{+}+\|Dx\|_{1}+\delta_{\mathcal{K}}(x)\right\},\] (P) s.t. \[Cx+d-w=0_{l},\] \[Ax=b.\] **Remark 1**.: _Let us notice that the model in (P) can readily accommodate terms of the form \(\|Cx+d\|_{1}\) where \(C\in\mathbb{R}^{l\times n}\), and \(d\in\mathbb{R}^{l}\). Indeed, letting \(c=-C^{\top}\mathds{1}_{l}\) and adding the constant term \(-\mathds{1}_{l}^{\top}d\) in the objective of (P), we notice that_ \[\|Cx+d\|_{1}\equiv-\mathds{1}_{l}^{\top}(Cx+d)+\sum_{j=1}^{l}\left(2(Cx+d)_{j}\right)_{+},\] _where \(\mathds{1}_{l}\coloneqq(1,\ldots,1)^{\top}\in\mathbb{R}^{l}\). 
Similarly, any piecewise-linear term of the form_ \[\sum_{i=1}^{l}\max\left\{\left(C_{1}x+d_{1}\right)_{i},\left(C_{2}x+d_{2}\right)_{i}\right\},\] _where \(C_{1},\ C_{2}\in\mathbb{R}^{l\times n}\) and \(d_{1},\ d_{2}\in\mathbb{R}^{l}\), can also be readily modeled. Indeed, setting \(c=C_{2}^{\top}\mathds{1}_{l}\) and adding the term \(d_{2}^{\top}\,\mathds{1}_{l}\) in the objective yields_ \[\mathds{1}_{l}^{\top}\left(C_{2}x+d_{2}\right)+\sum_{i=1}^{l}\left(\left(C_{1}x+d_{1}-C_{2}x-d_{2}\right)_{i}\right)_{+}=\sum_{i=1}^{l}\max\left\{\left(C_{1}x+d_{1}\right)_{i},\left(C_{2}x+d_{2}\right)_{i}\right\}.\] _Finally, it is important to note that model (P) allows for multiple piecewise-linear terms of the form \(\max\{Cx+d,0_{l}\}\), \(\|Cx+d\|_{1}\) or \(\max\left\{C_{1}x+d_{1},C_{2}x+d_{2}\right\}\), since we can always adjust \(l\) to account for more than one term. Hence, one can observe that (P) is quite general and can be used to model a plethora of very important problems that arise in practice._ In light of the discussion in Remark 1, it is easily observed that problem (P) can model a wide range of important problems arising in several application domains spanning, among others, operational research, machine learning, data science, and engineering. More specifically, various lasso and fused lasso instances (with applications to sparse approximations for classification and regression [19, 54], portfolio allocation [1], or medical diagnosis [23], among many others) can be readily modeled by (P). Additionally, various risk-minimization problems with linear random cost functions can be modeled by (P) (e.g. see [44, 33, 48]). Furthermore, even risk-minimization problems with nonlinear random cost functions, which are typically solved via Gauss-Newton schemes (e.g. see [10]), often require the solution of sub-problems like (P). Finally, continuous relaxations of integer programming problems with applications to operational research (e.g. [32]) often take the form of (P). Given the multitude of problems requiring easy access to (usually accurate) solutions of (P), the derivation of efficient, robust, and scalable solution methods is of paramount importance. Problem (P) can be solved by various first- or second-order methods. In particular, using a standard reformulation, by introducing several auxiliary variables, (P) can be written as a convex quadratic programming (QP) problem and efficiently solved by, among others, an _interior point method_ (IPM; e.g. [19, 38]), an _alternating direction method of multipliers_ (ADMM; e.g. [20]), or a _proximal point method_ (e.g. [28, 43]). However, the reformulation of (P) into a convex QP is not expected to lead to scalable solution methods, since the dimension of the problem is significantly increased, and hence an already large-scale instance might be very difficult to solve in this way. Alternatively, ADMM (or general splitting) schemes can be developed for the solution of (P) without the need for additional auxiliary variables (see Section 3). However, no first-order scheme would be able to consistently yield sufficiently accurate solutions (i.e. of 4-, 5- or 6-digit accuracy). If such a solution is sought, we have to employ a semismooth Newton method (SSN; e.g. [12, 24, 37]), or a combination of a proximal point method with an SSN scheme utilized for the solution of its associated sub-problems. 
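In passing, the affine reformulations of Remark 1, which are used repeatedly when assembling instances of (P), are easy to sanity-check numerically. The following minimal NumPy sketch (purely illustrative; the data and names are synthetic and not taken from the accompanying MATLAB implementation) verifies both identities on random data.

```python
import numpy as np

rng = np.random.default_rng(0)
l, n = 7, 4
C = rng.standard_normal((l, n))
d = rng.standard_normal(l)
x = rng.standard_normal(n)

# Identity 1: |a| = -a + 2*max(a, 0) componentwise, applied to a = Cx + d,
# so that ||Cx + d||_1 = -1'(Cx + d) + sum_i (2(Cx + d)_i)_+ .
a = C @ x + d
assert np.isclose(np.abs(a).sum(), -a.sum() + np.maximum(2.0 * a, 0.0).sum())

# Identity 2: max{a1, a2} = a2 + (a1 - a2)_+ componentwise, for two affine maps.
C1, d1 = rng.standard_normal((l, n)), rng.standard_normal(l)
C2, d2 = rng.standard_normal((l, n)), rng.standard_normal(l)
a1, a2 = C1 @ x + d1, C2 @ x + d2
assert np.isclose(np.maximum(a1, a2).sum(),
                  a2.sum() + np.maximum(a1 - a2, 0.0).sum())
print("Both reformulations of Remark 1 hold on random data.")
```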
SSN methods have been successfully applied to a plethora of problems; however, their success is heavily reliant on the properties of the problem at hand (e.g. the rank of the linear constraints, or the conditioning of the Hessian). On the other hand, the combination of the proximal point method with the SSN can circumvent these issues, since the associated nonsmooth sub-problems can be guaranteed to be well-defined and well-conditioned. Various such solvers have been developed and analyzed in the literature. For example, the authors in [53] developed a dual augmented Lagrangian scheme combined with an SSN method for the solution of semidefinite programming problems, and obtained very promising results. This scheme was then utilized for the solution of linear programming problems in [35], and for lasso-regularized problems in [34]. A similar primal-dual approach for \(\ell_{1}\)-regularized convex quadratic programming problems was developed and analyzed in our accompanying paper [39] and was shown to be especially efficient for the solution of elastic-net linear regression and \(L^{1}\)-regularized partial differential equation constrained optimization problems. In fact, the proposed active-set method developed in this work is a direct extension of the method given in [39], altered in a specific way so that it can efficiently handle most piecewise-linear terms that appear in practice, by restricting its memory requirements. Indeed, we showcase that each of the nonsmooth terms in the objective of (P) can be utilized for reducing the memory requirements of the proposed method. In particular, when computing the _Clarke subdifferential_ ([13]) of an appropriate augmented Lagrangian penalty function, we can show that the Clarke derivatives of such piecewise-linear terms can act as projectors. As a result, we obtain an active-set scheme that reduces the sub-problems' dimensions significantly, leading to better performance and reduced memory requirements. In particular, we observe that a thresholding operation (originating from the presence of \(\ell_{1}\) terms in the objective) determines which of the variables \(x\) are inactive, allowing us to throw away entire columns from matrices \(A\) and \(C\) when solving the associated sub-problems. Furthermore, the \(\max\{\cdot,0\}\) terms in the objective determine which of the rows of \(C\) are unimportant, allowing us to further eliminate such rows. We showcase the robustness and the efficiency of the resulting active-set scheme on various optimization problems arising in risk-averse portfolio selection, quantile regression, and binary classification via linear support vector machines. In each of these cases the proposed scheme is compared against the robust polynomially convergent regularized interior point method developed and analyzed in [38], as well as the well-known ADMM-based OSQP solver ([49]). We demonstrate the reduced memory requirements of the active-set scheme (and hence its improved scalability), as compared to interior point and ADMM alternatives applied to QP reformulations, while showcasing its efficiency and robustness. **Structure of the paper.** In Section 2 we briefly present the proposed inner-outer active-set method. In particular, in Section 2.1 we derive a proximal method of multipliers (outer scheme) for the solution of (P), assuming that we can find an \(\epsilon\)-optimal solution of the associated sub-problems. 
Then, in Section 2.2, we briefly present a semismooth Newton method (inner scheme) for finding approximate solutions to sub-problems arising from the proximal method of multipliers. Focusing on the structure of problem (P), by selecting appropriate Clarke derivatives, we show that the proposed inner-outer method is in fact an active-set scheme, the associated linear systems of which are well-conditioned and stable. In Section 2.3, we discuss an extension of the method for dealing with problems having arbitrary piecewise-linear terms in the objective that are not currently covered by model (P) (with applications to, among others, robust optimization). Subsequently, in Section 3, we derive a proximal alternating direction method of multipliers for the approximate solution of (P) in order to obtain good initial estimates for the primal and dual variables of the problem. This can then be used to warm-start the proposed second-order algorithm. A good starting point for the algorithm could mean that only a small portion of the problem data matrices is used at each inner-outer iteration of the scheme, while the outer method is expected to achieve its local linear (and potentially superlinear) convergence rate in a small number of inner-outer iterations. In Section 4, we showcase the efficiency and robustness of the approach on several important real-life applications arising in risk-averse portfolio optimization, statistical regression, and binary classification via linear support vector machines. Finally, we discuss our conclusions in Section 5. **Notation.** Given a vector \(x\) in \(\mathbb{R}^{n}\), \(\|x\|\) denotes its Euclidean norm. Given a closed set \(\mathcal{K}\subset\mathbb{R}^{n}\), \(\Pi_{\mathcal{K}}(x)\) denotes the Euclidean projection onto \(\mathcal{K}\), that is \(\Pi_{\mathcal{K}}(x)\coloneqq\arg\min\{\|x-z\|\colon z\in\mathcal{K}\}\). Given a closed set \(\mathcal{K}\), we write \(\operatorname{dist}(z,\mathcal{K})\coloneqq\inf_{z^{\prime}\in\mathcal{K}}\|z-z^{\prime}\|\). Given a convex function \(p\colon\mathbb{R}^{n}\mapsto\mathbb{R}\), we define the proximity operator as \(\operatorname{\mathbf{prox}}_{p}(u)\coloneqq\arg\min_{x}\left\{p(x)+\frac{1}{2}\|u-x\|^{2}\right\}\). Given an index set \(\mathcal{D}\), \(|\mathcal{D}|\) denotes its cardinality. Given a rectangular matrix \(A\in\mathbb{R}^{m\times n}\) and an index set \(\mathcal{B}\subseteq\{1,\ldots,n\}\), we denote the columns of \(A\), the indices of which belong to \(\mathcal{B}\), as \(A_{\mathcal{B}}\). Given a matrix \(A\in\mathbb{R}^{m\times n}\), and two index sets \(\mathcal{B}_{1}\subseteq\{1,\ldots,m\}\) and \(\mathcal{B}_{2}\subseteq\{1,\ldots,n\}\), we denote the subset of rows and columns of \(A\), the indices of which belong to \(\mathcal{B}_{1},\ \mathcal{B}_{2}\) respectively, as \(A_{(\mathcal{B}_{1},\mathcal{B}_{2})}\). Finally, given an arbitrary vector \(d\) with \(n\) components as well as some indices \(1\leq i_{1}\leq i_{2}\leq n\), we denote by \(d_{i_{1}:i_{2}}\) the vector containing the \(i_{1}\)-st up to the \(i_{2}\)-nd component of this vector. To avoid confusion, indices \(i\) are always used to denote entries of vectors or matrices, while \(k\) and \(j\) are reserved to denote iteration counters (outer and inner, respectively). ## 2 An active-set method In this section we derive an active-set method for the solution of (P). 
The algorithm is an inner-outer scheme, which results from combining an outer proximal method of multipliers (PMM) and an inner semismooth Newton method for the solution of the PMM sub-problems. Following the discussion in [39, Section 2], we briefly derive the outer PMM scheme. Then, we briefly present the inner semismooth Newton method and discuss the solution of its associated linear systems, which is the main bottleneck of the algorithm. ### Outer scheme: A primal-dual proximal method of multipliers In this section, following and extending the developments in [39], we derive a primal-dual proximal method of multipliers for the approximate solution of (P). A convergence analysis is not provided, and the reader is referred to [39, Section 2] for an outline of such an analysis (since it applies directly to the case under consideration). Given a penalty parameter \(\beta>0\), and some dual multiplier estimates \(y,\ z\), we follow [39, Section 2] to obtain the augmented Lagrangian corresponding to (P), which reads \[\begin{split}\mathcal{L}_{\beta}(x,w;y,z)\coloneqq&\ c^{\top}x+\frac{1}{2}x^{\top}Qx+\sum_{i=1}^{l}\left(w_{i}\right)_{+}+\|Dx\|_{1}-y^{\top}\left(\begin{bmatrix}Cx+d-w\\ Ax-b\end{bmatrix}\right)\\ &+\frac{\beta}{2}\left\|\begin{bmatrix}Cx+d-w\\ Ax-b\end{bmatrix}\right\|^{2}-\frac{1}{2\beta}\|z\|^{2}+\frac{1}{2\beta}\|z+\beta x-\beta\Pi_{\mathcal{K}}(\beta^{-1}z+x)\|^{2}.\end{split} \tag{2.1}\] Indeed, this can be verified utilizing certain standard properties of Fenchel duality and of the proximity operator (see [39, Section 2]). During iteration \(k\geq 0\) of the proximal method of multipliers, we have the estimates \((x_{k},y_{k},z_{k})\) as well as the penalty parameters \(\beta_{k},\ \rho_{k}\), such that \(\rho_{k}\coloneqq\frac{\beta_{k}}{\tau_{k}}\), where \(\{\tau_{k}\}_{k=0}^{\infty}\) is a non-increasing positive sequence. For simplicity of exposition, let \(g\left(x,w\right)\coloneqq g_{1}(x)+g_{2}(w)\), where \(g_{1}(x)\coloneqq\left\|Dx\right\|_{1}\), and \(g_{2}(w)\coloneqq\sum_{i=1}^{l}\left(w_{i}\right)_{+}.\) We consider the following regularized continuously differentiable function: \[\phi(x,w)\equiv\ \phi_{\beta_{k},\rho_{k}}(x,w;x_{k},y_{k},z_{k})\coloneqq\mathcal{L}_{\beta_{k}}(x,w;y_{k},z_{k})+\frac{1}{2\rho_{k}}\left\|x-x_{k}\right\|^{2}-g\left(x,w\right).\] Notice that we introduce a primal proximal regularizer only for the variable \(x\), and not for \(w\). This is a very important algorithmic choice that departs from the developments in [39]. We want to develop a memory-efficient active-set method, and for that reason we avoid introducing a proximal term for the auxiliary variables \(w\). This choice, which does not hinder the stability of the proposed approach, leads to better memory efficiency of the resulting active-set scheme, by promoting a sparsification of the associated linear systems solved at each inner-outer iteration (this point will become clear in Section 2.2.1). 
The minimization of the proximal augmented Lagrangian function can be written as \[\min_{x,w}\ \left\{\mathcal{L}_{\beta_{k}}(x,w;y_{k},z_{k})+\frac{1}{2\rho_{k}}\left\|x-x_{k}\right\|^{2}\right\}\equiv\min_{x,w}\left\{\phi(x,w)+g(x,w)\right\},\] and thus we need to find \((x^{*},w^{*})\in\mathbb{R}^{n}\times\mathbb{R}^{l}\) such that \[\left(\nabla\phi(x^{*},w^{*})\right)^{\top}((x,w)-(x^{*},w^{*}))+g(x,w)-g(x^{*},w^{*})\geq 0,\quad\forall\ (x,w)\in\mathbb{R}^{n}\times\mathbb{R}^{l}.\] To that end, we observe that \[\nabla_{x}\phi(x,w) =c+Qx-\left[C^{\top}\quad A^{\top}\right]y_{k}+\beta_{k}\left[C^{\top}\quad A^{\top}\right]\left(\begin{bmatrix}Cx+d-w\\ Ax-b\end{bmatrix}\right)\] \[\qquad+(z_{k}+\beta_{k}x)-\beta_{k}\Pi_{\mathcal{K}}(\beta_{k}^{-1}z_{k}+x)+\rho_{k}^{-1}(x-x_{k}),\] \[\nabla_{w}\phi(x,w) =(y_{k})_{1:l}-\beta_{k}\left(Cx+d-w\right).\] By introducing the dual variable \(y\coloneqq y_{k}-\beta_{k}\left(\begin{bmatrix}Cx+d-w\\ Ax-b\end{bmatrix}\right)\in\mathbb{R}^{l+m}\), the optimality conditions of \(\min_{x,w}\ \left\{\phi(x,w)+g(x,w)\right\}\) can be written as \[(0_{n+l},0_{l+m})\in\mathcal{M}_{\beta_{k},\rho_{k}}(x,w,y;x_{k},y_{k},z_{k})\equiv\mathcal{M}_{k}(x,w,y), \tag{2.2}\] where \[\mathcal{M}_{k}(x,w,y)\coloneqq\left\{(u^{\prime},v^{\prime})\colon u^{\prime}\in r_{k}(x,y)+\begin{bmatrix}\partial g_{1}(x)\\ \partial g_{2}(w)\end{bmatrix},\ v^{\prime}=\begin{bmatrix}Cx+d-w\\ Ax-b\end{bmatrix}+\beta_{k}^{-1}(y-y_{k})\right\},\] \[r_{k}(x,y)\coloneqq\begin{bmatrix}c+Qx-\begin{bmatrix}C^{\top}&A^{\top}\end{bmatrix}y+(z_{k}+\beta_{k}x)-\beta_{k}\Pi_{\mathcal{K}}(\beta_{k}^{-1}z_{k}+x)+\rho_{k}^{-1}(x-x_{k})\\ (y)_{1:l}\end{bmatrix}.\] We now describe the proposed proximal method of multipliers in Algorithm PMM. ``` Input: \((x_{0},w_{0},y_{0},z_{0})\in\mathbb{R}^{n}\times\mathbb{R}^{l}\times\mathbb{R}^{l+m}\times\mathbb{R}^{n}\), \(\beta_{0},\ \beta_{\infty}>0\), \(\{\tau_{k}\}_{k=0}^{\infty}\) such that \(\tau_{k}\geq\tau_{\infty}>0\), \(\forall\ k\geq 0\). Choose a sequence of positive numbers \(\{\epsilon_{k}\}\) such that \(\epsilon_{k}\to 0\). for \((k=0,1,2,\ldots)\) do Find \((x_{k+1},w_{k+1},y_{k+1})\) such that: \[\varepsilon_{k}\coloneqq\operatorname{dist}\left(0_{n+2l+m},\mathcal{M}_{k}\left(x_{k+1},w_{k+1},y_{k+1}\right)\right)\leq\epsilon_{k},\] (2.3) where letting \(\hat{r}=r_{k}(x_{k+1},y_{k+1})\) and \(\mathcal{U}=\left\{u\in\mathbb{R}^{n+l}\colon u\in\partial g(x_{k+1},w_{k+1})\right\},\) we have \[\varepsilon_{k}=\left\|\begin{bmatrix}\hat{r}+\Pi_{\mathcal{U}}\left(-\hat{r}\right)\\ \begin{bmatrix}C\\ A\end{bmatrix}x_{k+1}+\begin{bmatrix}d\\ -b\end{bmatrix}-\begin{bmatrix}I_{l}\\ 0_{m,l}\end{bmatrix}w_{k+1}+\beta_{k}^{-1}(y_{k+1}-y_{k})\end{bmatrix}\right\|.\] \[z_{k+1}=\ (z_{k}+\beta_{k}x_{k+1})-\beta_{k}\Pi_{\mathcal{K}}\big{(}\beta_{k}^{-1}z_{k}+x_{k+1}\big{)}.\] (2.4) \[\beta_{k+1}\nearrow\beta_{\infty}\leq\infty,\quad\rho_{k+1}=\frac{\beta_{k+1}}{\tau_{k+1}}.\] (2.5) end for return \((x_{k+1},w_{k+1},y_{k+1},z_{k+1})\). ``` **Algorithm PMM** (_proximal method of multipliers_) Let us notice that given a triple \((\tilde{x},\tilde{w},\tilde{y})\), we can easily evaluate the distance in (2.3), due to the piecewise linear structure of the associated function \(g(\cdot)\). A trivial extension of the analysis in [39, Section 3] yields that Algorithm PMM is globally convergent under the standard assumption of primal-dual feasibility. 
Furthermore, given some additional conditions on the error sequence \(\{\epsilon_{k}\}\) one can show that in fact Algorithm PMM achieves a local linear convergence rate (which becomes superlinear if \(\beta_{k}\to\infty\) at a suitable rate). Finally, the algorithm exhibits a global linear convergence rate, assuming that the starting point is chosen properly. For more details, we refer the reader to [39, Theorems 2.2, 2.3]. ### Inner scheme: A semismooth Newton method Next, we employ a semismooth Newton (SSN) method to solve problem (2.3) appearing in Algorithm PMM, and verify that the resulting inner-outer scheme admits an active-set interpretation. Given the estimates \((x_{k},y_{k},z_{k})\) as well as the penalties \(\beta_{k},\ \rho_{k}\), we apply SSN to approximately solve (2.2). Given any bounded positive penalty \(\zeta_{k}>0\), the optimality conditions in (2.2) can be written as \[\widehat{\mathcal{M}}_{k}\left(x,w,y\right)\coloneqq\zeta_{k}\begin{bmatrix}\zeta_{k}^{-1}\left(\begin{bmatrix}x\\ w\end{bmatrix}-\mathbf{prox}_{\zeta_{k}g}\left(\begin{bmatrix}x\\ w\end{bmatrix}-\zeta_{k}r_{k}(x,y)\right)\right)\\ \begin{bmatrix}C\\ A\end{bmatrix}x+\begin{bmatrix}d\\ -b\end{bmatrix}-\begin{bmatrix}I_{l}\\ 0_{m,l}\end{bmatrix}w+\beta_{k}^{-1}(y-y_{k})\end{bmatrix}=\begin{bmatrix}0_{n+l}\\ 0_{l+m}\end{bmatrix}. \tag{2.6}\] We set \(x_{k_{0}}=x_{k}\), \(y_{k_{0}}=y_{k}\), and at every iteration \(k_{j}\) of SSN, we solve \[\underbrace{\begin{bmatrix}H_{k_{j}}&0_{n,l}&-\zeta_{k}B_{g_{1},k_{j}}\begin{bmatrix}C^{\top}&A^{\top}\end{bmatrix}\\ 0_{l,n}&\left(I_{l}-B_{g_{2},k_{j}}\right)&\zeta_{k}\begin{bmatrix}B_{g_{2},k_{j}}&0_{l,m}\end{bmatrix}\\ \zeta_{k}\begin{bmatrix}C\\ A\end{bmatrix}&\zeta_{k}\begin{bmatrix}-I_{l}\\ 0_{m,l}\end{bmatrix}&\zeta_{k}\beta_{k}^{-1}I_{l+m}\end{bmatrix}}_{M_{k_{j}}}\begin{bmatrix}d_{x}\\ d_{w}\\ d_{y}\end{bmatrix}=-\widehat{\mathcal{M}}_{k}\left(x_{k_{j}},w_{k_{j}},y_{k_{j}}\right), \tag{2.7}\] where we have introduced the notation \[H_{k_{j}}\coloneqq I_{n}-B_{g_{1},k_{j}}+\zeta_{k}\beta_{k}B_{g_{1},k_{j}}\left(\left(1+\rho_{k}^{-1}\beta_{k}^{-1}\right)I_{n}-B_{\delta,k_{j}}+\beta_{k}^{-1}Q\right),\] assuming that we are given some arbitrary matrices \[\begin{split}& B_{\delta,k_{j}}\in\partial_{x}^{C}\Pi_{\mathcal{K}}\left(\beta_{k}^{-1}z_{k}+x_{k_{j}}\right),\\ & B_{g_{1},k_{j}}\in\partial_{x}^{C}\left(\mathbf{prox}_{\zeta_{k}g_{1}}\left(x_{k_{j}}-\zeta_{k}(r_{k}(x_{k_{j}},y_{k_{j}}))_{1:n}\right)\right),\\ & B_{g_{2},k_{j}}\in\partial_{w}^{C}\left(\mathbf{prox}_{\zeta_{k}g_{2}}\left(w_{k_{j}}-\zeta_{k}(y_{k_{j}})_{1:l}\right)\right).\end{split} \tag{2.8}\] The symbol \(\partial_{x}^{C}(\cdot)\) denotes the Clarke subdifferential of a function (see [13]). In our case, any element of the Clarke subdifferential is a _Newton derivative_ (see [14, Chapter 13]), since \(r_{k}(\cdot,\cdot)\) and \(g(\cdot,\cdot)\equiv g_{1}(\cdot)+g_{2}(\cdot)\) are _piecewise continuously differentiable_ and _regular functions_. 
For any \(u\in\mathbb{R}^{n}\) and any \(i\in\{1,\ldots,n\}\), it holds that \[\partial_{u_{i}}^{C}\left(\Pi_{[a_{i},b_{i}]}(u_{i})\right)=\begin{cases}\{1\},&\quad\text{if}\quad u_{i}\in(a_{i},b_{i}),\\ \{0\},&\quad\text{if}\quad u_{i}\notin[a_{i},b_{i}],\\ [0,1],&\quad\text{if}\quad u_{i}\in\{a_{i},b_{i}\}.\end{cases}\] Since \(g_{1}(x)=\|Dx\|_{1}\) and \(D\succeq 0_{n}\) is diagonal, we have that for all \(u\in\mathbb{R}^{n}\) and any \(i\in\{1,\ldots,n\}\), \[\left(\mathbf{prox}_{\zeta_{k}g_{1}}\left(u\right)\right)_{i}=\mathbf{soft}_{\left[-\zeta_{k}D_{(i,i)},\zeta_{k}D_{(i,i)}\right]}\left(u_{i}\right)\equiv\max\big{\{}|u_{i}|-\zeta_{k}D_{(i,i)},0\big{\}}\operatorname{sign}(u_{i}).\] Finally, since \(g_{2}(w)=\sum_{i=1}^{l}\max\{w_{i},0\}\), we have that for all \(u\in\mathbb{R}^{l}\) and any \(i=1,\ldots,l\), \[\left(\mathbf{prox}_{\zeta_{k}g_{2}}\left(u\right)\right)_{i}=\mathbf{soft}_{\left[0,\zeta_{k}\right]}\left(u_{i}\right)\equiv\max\{u_{i}-\zeta_{k},0\}+\min\{u_{i},0\}.\] Then, we can show (e.g. see [14, Example 14.9]) that \[\left(\partial_{u}^{C}\left(\mathbf{prox}_{\zeta_{k}g_{1}}\left(u\right)\right)\right)_{i}=\begin{cases}\{1\},&\quad\text{if}\quad|u_{i}|>\zeta_{k}D_{(i,i)}\text{ or }D_{(i,i)}=0,\\ \{0\},&\quad\text{if}\quad|u_{i}|<\zeta_{k}D_{(i,i)},\\ [0,1],&\quad\text{if}\quad|u_{i}|=\zeta_{k}D_{(i,i)},\end{cases}\quad\text{ for all }i\in\{1,\ldots,n\},\] and \[\left(\partial_{u}^{C}\left(\mathbf{prox}_{\zeta_{k}g_{2}}\left(u\right)\right)\right)_{i}=\begin{cases}\{1\},&\quad\text{if}\quad u_{i}>\zeta_{k}\text{ or }u_{i}<0,\\ \{0\},&\quad\text{if}\quad 0<u_{i}<\zeta_{k},\\ [0,1],&\quad\text{if}\quad u_{i}=\zeta_{k}\text{ or }u_{i}=0,\end{cases}\quad\text{ for all }i\in\{1,\ldots,l\}.\] We complete the derivation of the SSN by defining a primal-dual merit function for globalizing the semismooth Newton method via a backtracking line-search scheme. Following [39], we employ the following merit function \[\Theta_{k}(x,w,y)\coloneqq\left\|\widehat{\mathcal{M}}_{k}\left(x,w,y\right)\right\|^{2}. \tag{2.9}\] This function is often used for globalizing SSN schemes applied to nonsmooth equations of the form of (2.6). In Algorithm SSN, we outline a locally superlinearly convergent semismooth Newton method for the approximate solution of (2.3). The associated linear systems can be solved approximately (e.g. by Krylov subspace methods as in [39]), although in this work a suitable factorization scheme is utilized. Local superlinear convergence of Algorithm SSN follows directly from [37, Theorem 3], since the equation in (2.6) is _BD-regular_ (i.e. the Bouligand subdifferential at the optimum contains nonsingular matrices) by construction, as it models "regularized" sub-problems arising from the proximal method of multipliers. On the other hand, if we assume that the directional derivative of (2.9) is continuous at the optimum, then we can mirror the analysis in [25] to show that Algorithm SSN is globally convergent. There is a wide literature on similar semismooth Newton schemes, and we refer the reader to [12, 25, 37, 40] and the references therein for more details. #### 2.2.1 The SSN linear systems The major bottleneck of the previously presented inner-outer scheme is the solution of the associated linear systems given in (2.7). One could alter Algorithm SSN so that it does not require an exact solution. In turn, this would allow for the utilization of preconditioned Krylov subspace solvers for the efficient solution of such systems (e.g. as in [39, Section 3]). 
In particular, any preconditioner derived in [21] can be utilized for the proposed solver. However, for simplicity of exposition, we employ a standard factorization approach. As will become evident in Section 4, the active-set nature of the proposed algorithm enables the use of factorization even for large-scale problems of interest, since most of the data are inactive at each inner-outer iteration \(k_{j}\). Indeed, in the presence of multiple piecewise-linear terms in the objective, and assuming that the method is appropriately warm-started, one can ensure that most of the problem data are not incorporated when forming the active Hessian at each inner-outer iteration. In what follows we derive the associated linear systems. Let \(k\geq 0\) be an arbitrary iteration of Algorithm PMM, and \(j\geq 0\) an arbitrary iteration of Algorithm SSN. Firstly, let us notice that any element \(B_{\delta}\in\partial_{x}^{C}\left(\Pi_{\mathcal{K}}\left(\cdot\right)\right)\) yields a Newton derivative (see [14, Theorem 14.8]). The same applies for any element \(B_{g_{1}}\in\partial_{x}^{C}\left(\mathbf{prox}_{\zeta_{k_{0}}g_{1}}\left( \cdot\right)\right)\), and \(B_{g_{2}}\in\partial_{w}^{C}\left(\mathbf{prox}_{\zeta_{k_{0}}g_{2}}\left( \cdot\right)\right)\). Thus, using (2.8) we can choose \(B_{\delta,k_{j}},\ B_{g_{1},k_{j}},\ B_{g_{2},k_{j}}\) from the Bouligand subdifferential to improve computational efficiency, by reducing the active variables and constraint rows. To that end, we set \(B_{\delta,k_{j}}\) as a diagonal matrix with \[\left(B_{\delta,k_{j}}\right)_{\left(i,i\right)}\coloneqq\begin{cases}1,& \text{if}\quad\left(\beta_{k}^{-1}z_{k}+x_{k_{j}}\right)_{i}\in\left(a_{l_{i}},a_{u_{i}}\right),\\ 0,&\text{otherwise},\end{cases} \tag{2.10}\] for all \(i\in\{1,\ldots,n\}\), \(B_{g_{1},k_{j}}\) as the following diagonal matrix \[\left(B_{g_{1},k_{j}}\right)_{\left(i,i\right)}\coloneqq\begin{cases}1,& \text{if}\quad\left|\left(\widehat{u}_{k_{j}}\right)_{i}\right|>\zeta_{k}D_{ \left(i,i\right)},\text{ or }\quad D_{\left(i,i\right)}=0,\\ 0,&\text{otherwise},\end{cases} \tag{2.11}\] for all \(i\in\{1,\ldots,n\}\), where \(\widehat{u}_{k_{j}}\coloneqq x_{k_{j}}-\zeta_{k}(r_{k}(x_{k_{j}},y_{k_{j}}))_ {1:n}\), and \(B_{g_{2},k_{j}}\) as \[\left(B_{g_{2},k_{j}}\right)_{\left(i,i\right)}\coloneqq\begin{cases}1,& \text{if}\quad\left(w_{k_{j}}\right)_{i}-\zeta_{k}\left(y_{k_{j}}\right)_{i} \leq 0,\text{ or }\left(w_{k_{j}}\right)_{i}-\zeta_{k}\left(y_{k_{j}}\right)_{i} \geq\zeta_{k},\\ 0,&\text{otherwise}\end{cases} \tag{2.12}\] for all \(i\in\{1,\ldots,l\}\). Given the aforementioned choices for the projection matrices, we can now explicitly eliminate certain variables from the system in (2.7), in order to obtain a saddle-point system. To that end, from the second block-equation in (2.7), we have \[\left(I_{l}-B_{g_{2},k_{j}}\right)d_{w}+\zeta_{k}B_{g_{2},k_{j}}(d_{y})_{1:l} =-\left(w_{k_{j}}-\mathbf{prox}_{\zeta_{k}g_{2}}\left(w_{k_{j}}-\zeta_{k}(y_{ k_{j}})_{1:l}\right)\right).\] Let \(\mathcal{B}_{g_{2},k_{j}}\coloneqq\left\{i\in\{1,\ldots,l\}\colon\left(B_{g_{2},k_{j }}\right)_{(i,i)}=1\right\}\). Then, we obtain \[(d_{y})_{\mathcal{B}_{g_{2},k_{j}}}=-\zeta_{k}^{-1}\left(w_{k_{j}}-\mathbf{prox} _{\zeta_{k}g_{2}}\left(w_{k_{j}}-\zeta_{k}(y_{k_{j}})_{1:l}\right)\right)_{ \mathcal{B}_{g_{2},k_{j}}}, \tag{2.13}\] where the right-hand side can be evaluated easily. 
Letting \(\mathcal{N}_{g_{2},k_{j}}\coloneqq\{1,\ldots,l\}\setminus\mathcal{B}_{g_{2},k_{j}}\), we have \[(d_{w})_{\mathcal{N}_{g_{2},k_{j}}}=-(w_{k_{j}})_{\mathcal{N}_{g_{2},k_{j}}}. \tag{2.14}\] On the other hand, from the third block-equation of (2.7), we observe that \[(d_{w})_{\mathcal{B}_{g_{2},k_{j}}}=-\left(w_{k_{j}}-d-C\left(x_{k_{j}}+d_{x}\right)-\beta_{k}^{-1}\left(y_{k_{j}}+d_{y}-y_{k}\right)_{1:l}\right)_{\mathcal{B}_{g_{2},k_{j}}},\] which can be computed after solving the reduced system. We define the following index sets \[\mathcal{B}_{g_{1},k_{j}}\coloneqq\left\{i\in\{1,\ldots,n\}\colon\left(B_{g_{1},k_{j}}\right)_{(i,i)}=1\right\},\qquad\mathcal{N}_{g_{1},k_{j}}\coloneqq\{1,\ldots,n\}\setminus\mathcal{B}_{g_{1},k_{j}}.\] From the first block equation of (2.7), we obtain \[(d_{x})_{\mathcal{N}_{g_{1},k_{j}}}=-\left(x_{k_{j}}-\mathbf{prox}_{\zeta_{k}g_{1}}\left(x_{k_{j}}-\zeta_{k}\left(r_{k}(x_{k_{j}},y_{k_{j}})\right)_{1:n}\right)\right)_{\mathcal{N}_{g_{1},k_{j}}}.\] After a straightforward (if tedious) calculation, we can pivot \((d_{y})_{\mathcal{B}_{g_{2},k_{j}}}\), \(d_{w}\), and \((d_{x})_{\mathcal{N}_{g_{1},k_{j}}}\) in system (2.7). This results in the following saddle-point system: \[\underbrace{\begin{bmatrix}-\left(\zeta_{k}^{-1}H_{k_{j}}\right)_{\left(\mathcal{B}_{g_{1},k_{j}},\mathcal{B}_{g_{1},k_{j}}\right)}&\begin{bmatrix}\widehat{C}\\ \widehat{A}\end{bmatrix}^{\top}\\ \begin{bmatrix}\widehat{C}\\ \widehat{A}\end{bmatrix}&\beta_{k}^{-1}I_{m+|\mathcal{N}_{g_{2},k_{j}}|}\end{bmatrix}}_{\widehat{M}_{k_{j}}}\begin{bmatrix}d_{x,\mathcal{B}_{g_{1},k_{j}}}\\ (d_{y})_{\mathcal{N}_{g_{2},k_{j}}}\\ (d_{y})_{(l+1:l+m)}\end{bmatrix}=\widehat{r}_{k_{j}}, \tag{2.15}\] where \(\widehat{C}\coloneqq C_{\left(\mathcal{N}_{g_{2},k_{j}},\mathcal{B}_{g_{1},k_{j}}\right)}\), \(\widehat{A}\coloneqq A_{\mathcal{B}_{g_{1},k_{j}}}\), and \[\widehat{r}_{k_{j}}\coloneqq\begin{bmatrix}\zeta_{k}^{-1}\left(\widehat{\mathcal{M}}_{k}(x_{k_{j}},w_{k_{j}},y_{k_{j}})\right)_{\mathcal{B}_{g_{1},k_{j}}}\\ \left(\left(-\widehat{\mathcal{M}}_{k}(x_{k_{j}},w_{k_{j}},y_{k_{j}})\right)_{(n+l+1:n+2l)}\right)_{\mathcal{N}_{g_{2},k_{j}}}\\ \left(-\widehat{\mathcal{M}}_{k}(x_{k_{j}},w_{k_{j}},y_{k_{j}})\right)_{(n+2l+1:n+2l+m)}\end{bmatrix}+\begin{bmatrix}Q_{\left(\mathcal{B}_{g_{1},k_{j}},\mathcal{N}_{g_{1},k_{j}}\right)}d_{x,\mathcal{N}_{g_{1},k_{j}}}+\left(C_{\left(\mathcal{B}_{g_{2},k_{j}},\mathcal{B}_{g_{1},k_{j}}\right)}\right)^{\top}(d_{y})_{\mathcal{B}_{g_{2},k_{j}}}\\ -C_{\left(\mathcal{N}_{g_{2},k_{j}},\mathcal{N}_{g_{1},k_{j}}\right)}(d_{x})_{\mathcal{N}_{g_{1},k_{j}}}+(d_{w})_{\mathcal{N}_{g_{2},k_{j}}}\\ -A_{\mathcal{N}_{g_{1},k_{j}}}d_{x,\mathcal{N}_{g_{1},k_{j}}}\end{bmatrix}.\] Notice that the coefficient matrix of (2.15) is symmetric and _quasi-definite_ (see [50]). As such, it is strongly factorizable, that is, each symmetric permutation of it admits an \(L\Delta L^{\top}\) decomposition, with \(\Delta\) diagonal. Alternatively, one could eliminate further variables in order to obtain a positive definite linear system. This could be beneficial in certain applications, but is omitted here for the sake of generality. We note that \(\widehat{C}\) contains only a subset of the columns and rows of \(C\). Similarly, \(\widehat{A}\) contains only a subset of the columns of \(A\). 
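To make the resulting dimension reduction concrete, the following NumPy sketch (purely illustrative; the variable names are ours and not those of the reference MATLAB implementation) shows how the Bouligand choices (2.10)–(2.12) translate into index sets and how the reduced blocks \(\widehat{C}\) and \(\widehat{A}\) of (2.15) are extracted on toy data.

```python
import numpy as np

def active_sets(x, w, z, y, r, beta, zeta, D_diag, a_l, a_u):
    """Index sets induced by the Bouligand choices (2.10)-(2.12)."""
    v = z / beta + x
    B_delta = (v > a_l) & (v < a_u)                          # (2.10): interior of [a_l, a_u]
    u_hat = x - zeta * r                                     # r plays the role of (r_k(x, y))_{1:n}
    B_g1 = (np.abs(u_hat) > zeta * D_diag) | (D_diag == 0)   # (2.11): "active" x entries
    s = w - zeta * y[:w.size]
    B_g2 = (s <= 0) | (s >= zeta)                            # (2.12): rows eliminated via (2.13)
    return B_delta, B_g1, B_g2

# toy data with the dimensions of (P): n variables, l rows of C, m rows of A
rng = np.random.default_rng(1)
n, l, m = 6, 8, 2
C, A = rng.standard_normal((l, n)), rng.standard_normal((m, n))
x, w, z = rng.standard_normal(n), rng.standard_normal(l), rng.standard_normal(n)
y, r = rng.standard_normal(l + m), rng.standard_normal(n)
D_diag = np.full(n, 0.5)
_, B_g1, B_g2 = active_sets(x, w, z, y, r, beta=10.0, zeta=1.0,
                            D_diag=D_diag, a_l=-np.ones(n), a_u=np.ones(n))

# reduced blocks appearing in the saddle-point system (2.15)
C_hat = C[np.ix_(~B_g2, B_g1)]   # rows in N_{g2}, columns in B_{g1}
A_hat = A[:, B_g1]               # columns in B_{g1}
print(C_hat.shape, A_hat.shape)  # typically much smaller than C and A
```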
As we will verify in practice, relatively close to the solution of (P) the active-set matrices \(\widehat{C}\) and \(\widehat{A}\) can be significantly smaller than \(C\) and \(A\), respectively, allowing the proposed approach to solve large-scale instances without requiring excessive memory. Finally, we note that if we had included a proximal term for the auxiliary variables \(w\) (i.e. if we were to directly apply the algorithm given in [39]), then \(\widehat{C}\) would only contain a subset of the columns of \(C\) but all of its rows (which, in general, are expected to be much more problematic, since it is often the case that \(l\gg n\)). ### 2.3 Extension to arbitrary piecewise-linear terms Before closing this section, we note that there are certain piecewise-linear terms that are not captured by the model in (P). In particular, given a set of \(K\) pairs \((C_{r},d_{r})\in\mathbb{R}^{l\times n}\times\mathbb{R}^{l}\) (with \(K\geq 3\)), where \(r\in\{1,\ldots,K\}\), we could consider problems of the following form \[\min_{x\in\mathbb{R}^{n}} \left\{c^{\top}x+\frac{1}{2}x^{\top}Qx+\sum_{i=1}^{l}\max_{r\in\{1,\ldots,K\}}\{\left(C_{r}x+d_{r}\right)_{i}\}+\|Dx\|_{1}+\delta_{\mathcal{K}}(x)\right\},\] s.t. \[Ax=b.\] In order to do so, we would have to introduce \(K\) auxiliary vectors \(w_{r}\in\mathbb{R}^{l}\), \(r\in\{1,\ldots,K\}\), and reformulate the problem in the following form \[\min_{(x,w_{1},\ldots,w_{K})} \left\{c^{\top}x+\frac{1}{2}x^{\top}Qx+\sum_{i=1}^{l}\max_{r\in\{1,\ldots,K\}}\{(w_{r})_{i}\}+\|Dx\|_{1}+\delta_{\mathcal{K}}(x)\right\},\] s.t. \[C_{r}x+d_{r}-w_{r}=0_{l},\qquad r\in\{1,\ldots,K\},\] \[Ax=b.\] Subsequently, in the derivation of the semismooth Newton method, we would have to evaluate the proximity operator of \(\tilde{g}(u):=\max_{i\in\{1,\ldots,K\}}(u_{i})\), \(u\in\mathbb{R}^{K}\), which admits a closed-form solution: \[\left(\mathbf{prox}_{\zeta\tilde{g}}(u)\right)_{i}=\min\{u_{i},s\},\quad\text{ where }s\in\mathbb{R}\text{ is such that }\sum_{i=1}^{K}\left(u_{i}-s\right)_{+}=\zeta.\] Then, the Clarke derivative of this operator could be computed by utilizing [14, Theorem 14.7]. This was not considered in this work for brevity of presentation, and is left as a future research direction. Indeed, this extension paves the way for the generalization of the proposed active-set scheme to a plethora of robust optimization problems that appear in practice (e.g. see [4]), as well as delivering an alternative to standard cutting-plane methods (e.g. see [29]), decomposition methods (e.g. see [3, 5, 18, 46]), or specialized interior point (e.g. see [22]) approaches appearing in the literature. ## 3 Warm-starting Following the developments in [35, 39], we would like to find a starting point for Algorithm PMM that is relatively close to a primal-dual solution. Indeed, this is crucial since, on the one hand, we can expect to observe early linear convergence of Algorithm PMM, while on the other hand, we can expect to be close to identifying the correct active sets, which in turn implies that the memory requirements of Algorithm SSN are significantly reduced. To that end, we employ a proximal alternating direction method of multipliers (pADMM; e.g. see [20]). 
We reformulate (P) by introducing an artificial variable \(u\in\mathbb{R}^{n+l}\), as \[\min_{(x,w,u)\ \in\ \mathbb{R}^{n}\times\mathbb{R}^{l}\times\mathbb{R}^{n+l}} \left\{c^{\top}x+\frac{1}{2}x^{\top}Qx+\sum_{i=n+1}^{n+l}\left(u_{i}\right)_{+}+\|D(u_{1:n})\|_{1}+\delta_{\mathcal{K}}(u_{1:n})\right\},\] (P') s.t. \[\underbrace{\begin{bmatrix}C&-I_{l}&0_{l,l+n}\\ A&0_{m,l}&0_{m,l+n}\\ -I_{l+n}&I_{l+n}\end{bmatrix}}_{M_{r}}\begin{bmatrix}x\\ w\\ u\end{bmatrix}=\begin{bmatrix}-d\\ b\\ 0_{l+n}\end{bmatrix}.\] Given a penalty \(\sigma>0\), we associate the following augmented Lagrangian to (P') \[\widehat{\mathcal{L}}_{\sigma}(x,w,u,y)\coloneqq c^{\top}x+\frac{1}{2}x^{\top}Qx+\sum_{i=n+1}^{n+l}\left(u_{i}\right)_{+}+\|D(u_{1:n})\|_{1}+\delta_{\mathcal{K}}(u_{1:n})\] \[-y^{\top}\left(M_{r}\begin{bmatrix}x\\ w\\ u\end{bmatrix}-\begin{bmatrix}-d\\ b\\ 0_{l+n}\end{bmatrix}\right)+\frac{\sigma}{2}\left\|M_{r}\begin{bmatrix}x\\ w\\ u\end{bmatrix}-\begin{bmatrix}-d\\ b\\ 0_{l+n}\end{bmatrix}\right\|^{2},\] where \(y\in\mathbb{R}^{m+n+2l}\) denotes the dual multipliers associated with the linear equality constraints of (P'). Let an arbitrary positive definite matrix \(R\in\mathbb{R}^{(n+l)\times(n+l)}\) be given, and adopt the notation \(\|v\|_{R}^{2}=v^{\top}Rv\), for any \(v\in\mathbb{R}^{n+l}\). Also, as in Section 2.1, we denote \(g(x,w)=\|Dx\|_{1}+\sum_{i=1}^{l}\left(w_{i}\right)_{+}\). Algorithm pADMM describes a proximal ADMM for the approximate solution of (P'). ``` Input: \(\sigma>0\), \(R\succ 0\), \(\gamma\in\left(0,\frac{1+\sqrt{5}}{2}\right)\), \((x_{0},w_{0},u_{0},y_{0})\in\mathbb{R}^{2n+3l+m}\). for \((k=0,1,2,\ldots)\) do \[u_{k+1} =\operatorname*{arg\,min}_{u}\left\{\widehat{\mathcal{L}}_{\sigma}\left(x_{k},w_{k},u,y_{k}\right)\right\}\equiv\Pi_{\mathcal{K}\times\mathbb{R}^{l}}\left(\mathbf{prox}_{\sigma^{-1}g}\left(\begin{bmatrix}x_{k}\\ w_{k}\end{bmatrix}+\sigma^{-1}(y_{k})_{(m+l+1:m+n+2l)}\right)\right).\] \[\begin{bmatrix}x_{k+1}\\ w_{k+1}\end{bmatrix} =\operatorname*{arg\,min}_{x,\ w}\left\{\widehat{\mathcal{L}}_{\sigma}\left(x,w,u_{k+1},y_{k}\right)+\frac{1}{2}\left\|\begin{bmatrix}x-x_{k}\\ w-w_{k}\end{bmatrix}\right\|_{R}^{2}\right\}.\] \[y_{k+1} =y_{k}-\gamma\sigma\left(M_{r}\begin{bmatrix}x_{k+1}\\ w_{k+1}\\ u_{k+1}\end{bmatrix}-\begin{bmatrix}-d\\ b\\ 0_{l+n}\end{bmatrix}\right).\] end for ``` **Algorithm pADMM** (_proximal ADMM_) Let us notice that under certain standard assumptions on (P'), Algorithm pADMM converges globally, potentially at a linear rate (see [20]). We can employ the regularization matrix \(R\) as a means of ensuring memory efficiency of Algorithm pADMM. For example, we can recover the _prox-linear ADMM_ [20, Section 1.1], where given some sufficiently large constant \(\hat{\sigma}>0\), one defines \[R\coloneqq\hat{\sigma}I_{n+l}-\sigma\begin{bmatrix}C^{\top}C+A^{\top}A+\sigma^{-1}\text{Off}(Q)&-C^{\top}\\ -C&0_{l,l}\end{bmatrix}\succ 0.\] Given this choice of \(R\), the second step of Algorithm pADMM consists of only matrix-vector multiplications with \(A,\ C\) and \(Q\), and thus no (non-diagonal) matrix inversion is required. If memory is not an issue, one could use a positive-definite diagonal regularizer \(R\), yielding a standard regularized ADMM. In our implementation, the user can specify which of the two strategies should be employed. In either case, the first step of Algorithm pADMM is trivial to solve, since we know that the proximity operator of \(g(\cdot)\) can be computed analytically as in Section 2.2. 
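For completeness, the closed-form proximity operators involved in this step (and in the Clarke derivatives of Section 2.2) can be written in a few lines. The NumPy sketch below is purely illustrative and mirrors the \(u\)-update of Algorithm pADMM on synthetic data, with \(\zeta=\sigma^{-1}\); it is not taken from the reference MATLAB implementation.

```python
import numpy as np

def prox_g1(u, zeta, D_diag):
    """Weighted soft-thresholding: prox of zeta * ||D u||_1 with D diagonal."""
    return np.sign(u) * np.maximum(np.abs(u) - zeta * D_diag, 0.0)

def prox_g2(u, zeta):
    """Prox of zeta * sum_i max(u_i, 0): shifted soft-thresholding on [0, zeta]."""
    return np.maximum(u - zeta, 0.0) + np.minimum(u, 0.0)

def proj_box(u, a_l, a_u):
    """Euclidean projection onto K = [a_l, a_u]."""
    return np.clip(u, a_l, a_u)

# u-update of Algorithm pADMM on toy data, with sigma the ADMM penalty
rng = np.random.default_rng(2)
n, l, sigma = 5, 4, 2.0
zeta = 1.0 / sigma                          # prox parameter sigma^{-1}
x_k, w_k = rng.standard_normal(n), rng.standard_normal(l)
y_tail = rng.standard_normal(n + l)         # multipliers paired with u in (P')
D_diag = np.full(n, 0.1)
a_l, a_u = -np.ones(n), 0.6 * np.ones(n)

v = np.concatenate([x_k, w_k]) + y_tail / sigma
u_next = np.concatenate([
    proj_box(prox_g1(v[:n], zeta, D_diag), a_l, a_u),  # x-part: prox of ||D.||_1, then projection onto K
    prox_g2(v[n:], zeta),                              # w-part: prox of sum max(., 0)
])
print(u_next)
```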
Finally, once an approximate solution \((\tilde{x},\tilde{w},\tilde{u},\tilde{y})\) is computed, we set the starting point of Algorithm PMM as \((x_{0},w_{0},y_{0},z_{0})=\left(\tilde{x},\tilde{w},(\tilde{y})_{(1:m+l)},\tilde{z}\right)\), where \[\tilde{z}=(\tilde{y})_{(m+l+1:m+l+n)}-\Pi_{\partial g_{1}(\tilde{u}_{1:n})}\left((\tilde{y})_{(m+l+1:m+l+n)}\right).\] Indeed, an optimal primal-dual solution of (P') satisfies \[(\tilde{y}^{*})_{(m+l+1:m+2l+n)}\in\partial g\left(\tilde{u}^{*}\right)+\partial\delta_{\mathcal{K}\times\mathbb{R}^{l}}\left(\tilde{u}^{*}\right),\] thus the characterization of \(\tilde{z}\) in Algorithm PMM follows from the Appendix, where one can also find the termination criteria of Algorithm pADMM (see (A.2)). ## 4 Applications and numerical results In this section we present various applications that can be modeled by problem (P), focusing on portfolio optimization, quantile regression, and binary classification. In particular, we first discuss (and numerically demonstrate) the effectiveness of the approach for the solution of single-period mean-risk portfolio optimization problems, where risk is measured via the _conditional value at risk_ or the _mean absolute semi-deviation_. Subsequently, we apply the proposed scheme for the solution of quantile regression problems, demonstrating its scalability. Finally, we apply the active-set scheme for the solution of binary classification problems via linear support vector machines on certain large-scale datasets. **Implementation details.** Before proceeding to the numerical results, we mention certain implementation details of the proposed algorithm. The implementation follows closely the developments in [39], and can be found on GitHub1. The code is written in MATLAB, and the experiments are run on a PC with a 2.2GHz Intel Core i7-8750H processor (hexa-core), 16GB of RAM, using the Windows 10 operating system. Footnote 1: [https://github.com/spoughkaiotis/Active_set_method_for_COP_piecewise_LP](https://github.com/spoughkaiotis/Active_set_method_for_COP_piecewise_LP) We run Algorithm pADMM (warm-start) for at most 100 iterations (or until a 3-digit accurate solution is found). The user can choose whether the warm-starting scheme should be matrix-free or not. In the presented experiments, the matrix-free scheme was only used for the largest quantile regression and classification problems, for which standard ADMM crashed due to excessive memory requirements. Then, starting with \(\beta_{0}=10\), \(\rho_{0}=50\), we run Algorithms PMM and SSN. Following [39], when solving the PMM sub-problems using Algorithm SSN we use a predictor-corrector-like heuristic in which the first iteration is accepted without line-search and then line-search is activated for subsequent iterations. Algorithm PMM is allowed to run for at most 200 iterations, while Algorithm SSN is stopped after 20 inner iterations. An iterate is accepted as optimal if the conditions given in (A.1) are satisfied for the tolerance specified by the user. Most other implementation details follow directly from the developments in Sections 2.1-2.2. We refer the reader to the implementation on GitHub for additional details. In the presented experiments we compare the proposed approach against the interior point solver given in [38], the implementation of which can be found on GitHub2, as well as the ADMM-based OSQP solver given in [49], the implementation of which can also be found on GitHub3. 
We note that most problems considered herein were especially challenging for OSQP (mostly due to requesting highly accurate solutions), and for that reason we allow it to run for 50,000 iterations (as opposed to its default iteration threshold of 4000). Footnote 2: [https://github.com/spoughkaiotis/IP_PMM](https://github.com/spoughkaiotis/IP_PMM) Footnote 3: [https://github.com/osqp/osqp-matlab](https://github.com/osqp/osqp-matlab) ### 4.1 Portfolio optimization We first consider the mean-risk portfolio selection problem (originally proposed in [36]), where we minimize some convex risk measure, while keeping the expected return of the portfolio above some desirable level. A variety of models for this problem have been intensively analyzed and solved in the literature (e.g. see [1, 31, 44]). The departure from the variance as a measure of risk often allows for great flexibility in the decision making of investors, enabling them to follow potential regulations as well as to better control the risk associated with an "optimal" portfolio. Optimality conditions and existence of solutions for several general deviation measures have been characterized in [45]. The comparison of portfolios obtained by minimizing different risk measures has also been considered (e.g. see [41, 42, 48]). The method presented in this paper is general and not related to a specific model choice. Indeed, we would like to showcase the efficiency of our approach for obtaining accurate solutions to portfolio optimization problems with various risk measures of practical interest. Thus, we focus on the solution of a standard portfolio selection model that has been used in the literature. All numerical results are obtained on real-world datasets. As a result, the problems are of medium scale. Nonetheless, even for such medium-scale problems, we will be able to demonstrate the efficiency of the proposed approach, when compared to the efficient interior point method employed in the literature [19] for similar problems. We also note that both second-order solvers (i.e. IPM and active-set) significantly outperform OSQP on these instances; however, it was included in the comparison for completeness. Some large-scale instances will be tackled in the following subsections, where the method will be applied to quantile regression and binary classification instances. Let \(x\in\mathbb{R}^{n}\) represent a portfolio of \(n\) financial instruments, such that \[x_{i}\in\left[a_{l_{i}},a_{u_{i}}\right],\quad\text{with }a_{l_{i}}\geq-1,\;a_{u_{i}}\leq 1,\text{ for all }i=1,\ldots,n,\quad\text{and }\sum_{i=1}^{n}x_{i}=1.\] This requirement indicates that short positions for each stock are restricted by some percentage (\(a_{l_{i}}\%\)) of the available wealth (assuming \(a_{l_{i}}<0\)), and no more than \(a_{u_{i}}\%\) of the total wealth can be invested in instrument \(i\). Let \(\mathbf{\xi}\in\mathbb{R}^{n}\) denote a random vector, the \(i\)-th entry of which represents the random return of the \(i\)-th instrument. Then, the random loss (i.e. the negative of the random return) of a given portfolio \(x\) is given by \(f(x,\mathbf{\xi})\coloneqq-x^{\top}\mathbf{\xi}\). In this paper we assume that \(\mathbf{\xi}\) follows some continuous distribution \(p(\mathbf{\xi})\), as well as that there is a one-to-one correspondence between percentage return and monetary value (as in [44, Section 3]). Additionally, given some expected benchmark return \(r\) (e.g. 
the _market index_), we only consider portfolios that yield an expected return above a certain threshold, i.e. \(\mathbb{E}[-f(x,\mathbf{\xi})]\geq r\). Finally, given the previously stated constraints, we would like to minimize some convex risk measure \(\varrho(\cdot)\) of interest. However, in order to make sure that the problem is well-posed, while the transaction costs are not excessive, we include an \(\ell_{1}\) term in the objective. This is a well-known modeling choice that yields sparse portfolios, and thus regulates the transaction costs in the single-period portfolio setting (e.g. see [1, 16, 31]). Additionally, with an appropriate tuning of the \(\ell_{1}\) regularization parameter \(\tau>0\), one could also control the amount of short positions (see [16]). It should be mentioned here that in the multi-period setting such an \(\ell_{1}\) term does not guarantee a reduction in the transaction costs (but mostly in the _holding costs_), and an additional _total variation_ term should also be added in the objective (see [15, 19]). This is omitted here, since we focus on the single-period case, but the model in (P) could easily incorporate such an additional term (see Remark 1). By putting everything together, the model reads as \[\min_{x} \ \varrho\left(f(x,\mathbf{\xi})\right)+\tau\|x\|_{1}, \tag{4.1}\] \[\text{s.t.} \ \sum_{i=1}^{n}x_{i}=1,\] \[\ \mathbb{E}\left[-f(x,\mathbf{\xi})\right]\geq r,\] \[\ x_{i}\in\left[a_{l_{i}},a_{u_{i}}\right],\qquad i=1,\ldots,n.\] There are several methods for solving such stochastic problems; let us mention two important variants. There is the _parametric approach_ (e.g. as in [1, 44]), where one assumes that the returns follow some known distribution which is subsequently sampled to yield finite-dimensional optimization problems, and the _sampling approach_ (e.g. as in [31]), where one obtains a finite number of samples (without assuming a specific distribution). Such samples are often obtained from historical observations, and this approach is also followed in this paper. It is well-known that historical data cannot fully predict the future (see [31]); however, it is a widely used practice. The reader is referred to [27] for an extensive discussion on probabilistic models for portfolio selection problems. Additional soft or hard constraints can be included when solving a portfolio selection problem. Such constraints can either be incorporated directly via the use of a model (e.g. see [7, Section 2]) as hard constraints or by including appropriate penalty terms in the objective (soft constraints). It is important to note that the model given in (P) is quite general and as a result has great expressive power, allowing one to incorporate various important risk measures (and their combinations), as well as modeling constraints of interest. **Real-world datasets.** In what follows, we solve two different instances of problem (4.1). In particular, we consider two potential risk measures: the conditional value at risk (e.g. see [44]), as well as the mean absolute semi-deviation (e.g. see [47, Section 6.2.2], noting that this is in fact equivalent to the mean absolute deviation originally proposed in [30]). In each of the above cases problem (4.1) has the form of (P) and thus Algorithm PMM can directly be applied. We showcase the effectiveness of the proposed approach on 6 real datasets taken from [9]. 
Each dataset contains time series for weekly asset returns and market indexes for different major stock markets, namely, DowJones, NASDAQ100, FTSE100, SP500, NASDAQComp, and FF49Industries. In the first 5 markets the authors in [9] provide the market indexes, while for the last dataset the uniform allocation strategy is considered as a benchmark. Additional information on the datasets is collected in Table 1. We note that stocks with fewer than 10 years of observations have been disregarded. #### 4.1.1 Conditional value at risk First, we consider portfolio optimization problems that seek a solution minimizing the conditional value at risk, a measure which is known to be _coherent_ (see [2] for a definition of coherent risk measures). In particular, using the notation introduced earlier, we consider the following optimization problem \[\min_{x\in\mathbb{R}^{n}}\left\{\text{CVaR}_{\alpha}\left(f(x,\mathbf{\xi})\right)+\tau\|x\|_{1}+\delta_{\mathcal{K}}(x)\right\},\qquad\text{s.t.}\ Ax=b, \tag{4.2}\] where \(f(x,\mathbf{\xi})\) is the random cost function, \(A\in\mathbb{R}^{m\times n}\) models the linear constraint matrix of problem (4.1) (where an auxiliary variable has been introduced to transform the inequality constraint involving the expectation into an equality), and \(\mathcal{K}\coloneqq[a_{l},a_{u}]\). In the above, \(1-\alpha\in(0,1)\) is the confidence level. It is well-known ([44]) that given a continuous random variable \(X\), the conditional value at risk can be computed as \[\mathrm{CVaR}_{\alpha}\left(X\right)\coloneqq\min_{t\in\mathbb{R}}\left\{t+\alpha^{-1}\mathbb{E}\left[\left(X-t\right)_{+}\right]\right\}.\] We can write problem (4.2) in the following equivalent form: \[\min_{(x,t)\in\mathbb{R}^{n}\times\mathbb{R}}\left\{t+\frac{1}{l\alpha}\sum_{i=1}^{l}\left(-\xi_{i}^{\top}x-t\right)_{+}+\|Dx\|_{1}+\delta_{\mathcal{K}}(x)\right\},\qquad\text{s.t.}\;Ax=b,\] where the expectation has been substituted by summation since we assume the availability of a dataset \(\{\xi_{1},\ldots,\xi_{l}\}\). Introducing an auxiliary variable \(w\in\mathbb{R}^{l}\), the previous can be re-written as \[\min_{(x,t,w)\in\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}^{l}}\left\{t+\sum_{i=1}^{l}\left(w_{i}\right)_{+}+\|Dx\|_{1}+\delta_{\mathcal{K}}(x)\right\},\] \[\text{s.t.}\qquad\qquad\frac{1}{l\alpha}\left(-\xi_{i}^{\top}x-t\right)-w_{i}=0,\quad i=1,\ldots,l,\] \[Ax=b.\] We solve problem (4.2) using the proposed active-set method (AS), the _interior point-proximal method of multipliers_ (IP-PMM) given in [38], and OSQP ([49]). We allow any short position, that is \(a_{l_{i}}=-1\), and restrict investing more than \(60\%\) of the available wealth in a single stock (i.e. \(a_{u_{i}}=0.6\)). We set \(\texttt{tol}=10^{-5}\) and \(\tau=10^{-2}\), and run the three methods for each of the datasets described in Table 1 for varying confidence level. We report the confidence parameter \(\alpha\), the number of PMM, SSN, IP-PMM and OSQP iterations, the CPU time needed by each of the three schemes, as well as the number of factorizations used within the active-set (PMM-SSN) scheme. Indeed, it often happens that the active set is not altered from one iteration to the next, and the factorization does not need to be re-computed. The results are collected in Table 2. In all the numerical results that follow, the lowest running time exhibited by a solver, assuming it successfully converged, is presented in bold. 
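For readers wishing to reproduce the setup, mapping the sampled CVaR model onto the template (P) amounts to a simple assembly of the problem data. The NumPy sketch below is purely illustrative: it uses synthetic returns rather than the datasets of Table 1, introduces a slack variable for the expected-return constraint, and is not part of the reference MATLAB implementation.

```python
import numpy as np

# synthetic weekly return samples xi_1, ..., xi_l (rows), in lieu of the datasets of Table 1
rng = np.random.default_rng(3)
l, n = 200, 10
xi = 0.002 + 0.02 * rng.standard_normal((l, n))

alpha, tau = 0.05, 1e-2
r_target = xi.mean()                      # toy benchmark return

# decision vector v = (x, t, s), with t the CVaR auxiliary variable and s >= 0 a slack
# turning the expected-return inequality into an equality
nv = n + 2
c = np.zeros(nv); c[n] = 1.0              # objective contribution: + t
D_diag = np.concatenate([tau * np.ones(n), [0.0, 0.0]])   # ell_1 penalty on x only

# rows of Cv + d feeding the sum of (.)_+ terms: (1/(l*alpha)) * (-xi_i' x - t)
C = np.zeros((l, nv))
C[:, :n] = -xi / (l * alpha)
C[:, n] = -1.0 / (l * alpha)
d = np.zeros(l)

# equality constraints A v = b: budget sum(x) = 1 and mean(xi)' x - s = r_target
A = np.zeros((2, nv)); b = np.array([1.0, r_target])
A[0, :n] = 1.0
A[1, :n] = xi.mean(axis=0); A[1, n + 1] = -1.0

# simple bounds defining K: x in [-1, 0.6], t free, slack s >= 0
a_l = np.concatenate([-np.ones(n), [-np.inf, 0.0]])
a_u = np.concatenate([0.6 * np.ones(n), [np.inf, np.inf]])
print(C.shape, A.shape)   # data ready to be passed to a solver for (P)
```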
From Table 2 we observe that for the smaller instances (DowJones, NASDAQ100, FTSE100, FF49Industries) both second-order methods perform quite well and are comparable in terms of CPU time. Nonetheless, even in this case, the proposed active-set solver requires significantly less memory. Indeed, this can be seen by the fact that the two methods achieve similar times but the active-set scheme is performing significantly more factorizations. On the other hand, for the larger instances (SP500, NASDAQComp) the proposed active-set scheme outperforms the interior point method significantly. This is mostly due to the efficiency gained from the lower memory requirements and cheaper factorizations of the active-set solver. Also, we observe that both methods are quite robust with respect to the confidence level and consistently outperform OSQP, which struggles to find a 5-digit accurate solution. We note that OSQP could potentially be competitive for looser tolerances (e.g. for finding a 3-digit accurate solution); however, the application under consideration dictates that an accurate solution is needed, since a small improvement in the portfolio output can translate into huge profits in practice. \begin{table} \begin{tabular}{l l l l} \hline \hline **Name** & **\# of assets** & **\# of data points** & **Timeline** \\ \hline DowJones & 28 & 1363 & Feb. 1990–Apr. 2016 \\ NASDAQ100 & 82 & 596 & Nov. 2004–Apr. 2016 \\ FTSE100 & 83 & 717 & Jul. 2002–Apr. 2016 \\ FF49Industries & 49 & 2325 & Jul. 1969–Jul. 2015 \\ SP500 & 442 & 595 & Nov. 2004–Apr. 2016 \\ NASDAQComp & 1203 & 685 & Feb. 2003–Apr. 2016 \\ \hline \hline \end{tabular} \end{table} Table 1: Portfolio optimization datasets. In order to observe the behaviour of the three solvers under a different \(\ell_{1}\) regularization value, we fix \(\mathtt{tol}=10^{-5}\), \(\tau=10^{-1}\), and run them for varying confidence level. The results are collected in Table 3. Similar observations can be drawn from Table 3, while noting that both second-order approaches remain efficient for different values of the regularization parameter \(\tau\). Notice, however, that one should be careful when choosing this regularization parameter, since otherwise the obtained portfolios could be meaningless. In light of this, we have chosen values for \(\tau\) that yield reasonable portfolios, with controlled short positions. Additionally, we should note that both second-order solvers were able to solve all problems to higher accuracy (e.g. up to 6 or 7 digits of accuracy); however, the differences in the obtained portfolios (in terms of positions and/or associated risk) were negligible and hence such accuracies were not considered here. #### 4.1.2 Mean absolute semi-deviation Next, we consider portfolio optimization problems that seek a solution minimizing the mean absolute semi-deviation, which is also coherent. We consider the following optimization problem \[\min_{x\in\mathbb{R}^{n}}\left\{\mathrm{MASD}\left(f(x,\mathbf{\xi})\right)+\tau\|x\|_{1}+\delta_{\mathcal{K}}(x)\right\},\qquad\text{s.t. }Ax=b, \tag{4.3}\] where, given a continuous random variable \(X\), the associated risk is defined as \[\mathrm{MASD}\left(X\right)\coloneqq\mathbb{E}\left[\left(X-\mathbb{E}[X]\right)_{+}\right]\equiv\frac{1}{2}\mathbb{E}\left|X-\mathbb{E}[X]\right|,\] where the equivalence follows from [47, Proposition 6.1]. 
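In sample form this risk measure is straightforward to evaluate, and the stated equivalence can be checked directly; the following minimal NumPy sketch (purely illustrative, with synthetic data) does so for an equally weighted portfolio.

```python
import numpy as np

def masd(losses):
    """Sample mean absolute semi-deviation of a vector of loss realizations."""
    dev = losses - losses.mean()
    return np.maximum(dev, 0.0).mean()   # equals 0.5 * np.abs(dev).mean() in-sample

rng = np.random.default_rng(4)
xi = 0.002 + 0.02 * rng.standard_normal((500, 10))   # synthetic return samples
x = np.full(10, 0.1)                                  # equally weighted portfolio
losses = -(xi @ x)
dev = losses - losses.mean()
print(masd(losses), 0.5 * np.abs(dev).mean())         # the two values agree
```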
Given a dataset \(\left\{\xi_{1},\ldots,\xi_{l}\right\}\), problem (4.3) can be written as \[\min_{(x,w)\in\mathbb{R}^{n}\times\mathbb{R}^{l}}\left\{\sum_{i=1}^{l}\left(w_{i}\right)_{+}+\|Dx\|_{1}+\delta_{\mathcal{K}}(x)\right\},\] \[\text{s.t.}\qquad\frac{1}{l}\left(-\xi_{i}^{\top}x+\boldsymbol{\mu}^{\top}x\right)-w_{i}=0,\quad i=1,\ldots,l,\] \[Ax=b,\] where \(\boldsymbol{\mu}\coloneqq\frac{1}{l}\sum_{i=1}^{l}\xi_{i}\) is the vector of mean returns. Note that this model is in the form of (P). \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{\(\mathbf{\alpha}\)} & \multicolumn{3}{c}{**Iterations**} & \multicolumn{3}{c}{**Time (s)**} \\ \cline{3-8} & & \multicolumn{1}{c}{PMM(SSN)[Fact.]} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQP} & \multicolumn{1}{c}{AS} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQP} \\ \hline \multirow{3}{*}{DowJones} & 0.05 & 35(169)[142] & 18 & 32,575 & **0.65** & 0.71 & 10.94 \\ & 0.10 & 36(166)[145] & 21 & 42,625 & **0.75** & 0.83 & 13.61 \\ & 0.15 & 36(144)[125] & 13 & 43,100 & 0.67 & **0.53** & 14.45 \\ \hline \multirow{3}{*}{NASDAQ100} & 0.05 & 31(156)[138] & 19 & 30,575 & 0.66 & **0.60** & 7.92 \\ & 0.10 & 33(168)[150] & 16 & 34,700 & 0.65 & **0.50** & 8.81 \\ & 0.15 & 33(163)[142] & 15 & 40,500 & 0.62 & **0.57** & 10.46 \\ \hline \multirow{3}{*}{FTSE100} & 0.05 & 32(179)[164] & 17 & 21,825 & **0.77** & 0.90 & 10.96 \\ & 0.10 & 34(171)[147] & 19 & 35,825 & **0.80** & 0.99 & 17.70 \\ & 0.15 & 35(179)[156] & 11 & 36,550 & 0.93 & **0.59** & 18.24 \\ \hline \multirow{3}{*}{FF49Industries} & 0.05 & 41(215)[186] & 27 & 50,000\({}^{\ddagger}\) & **1.77** & 4.65 & 59.16\({}^{\ddagger}\) \\ & 0.10 & 42(224)[183] & 22 & 50,000\({}^{\ddagger}\) & **2.37** & 3.62 & 57.92 \\ & 0.15 & 39(205)[157] & 11 & 50,000\({}^{\ddagger}\) & 2.66 & **2.18** & 51.25 \\ \hline \multirow{3}{*}{SP500} & 0.05 & 30(203)[193] & 20 & 21,075 & **5.93** & 12.82 & 65.70 \\ & 0.10 & 34(199)[187] & 11 & 32,375 & **5.88** & 6.98 & 99.39 \\ & 0.15 & 34(173)[160] & 17 & 39,225 & **5.78** & 10.17 & 119.95 \\ \hline \multirow{3}{*}{NASDAQComp} & 0.05 & 28(198)[192] & 10 & 16,100 & **22.07** & 35.52 & 143.82 \\ & 0.10 & 31(171)[169] & 14 & 31,275 & **23.31** & 48.99 & 291.97 \\ \cline{1-1} & 0.15 & 34(173)[167] & 19 & 44,775 & **21.67** & 67.14 & 403.75 \\ \hline \hline \end{tabular} * \({}^{\ddagger}\) indicates that the solver reached the maximum number of iterations. \end{table} Table 2: CVaR portfolio selection: varying confidence level (\(\mathtt{tol}=10^{-5}\), \(\tau=10^{-2}\)). We fix \(\texttt{tol}=10^{-5}\) and run the three solvers on the 6 datasets for two different sensible values of the regularization parameter \(\tau\). The results are collected in Table 4. From Table 4 we observe that both second-order schemes are robust and comparable for all instances. In this case, the larger instances (SP500, NASDAQComp) were solved in comparable time by both solvers. Nevertheless, it is important to note that the active-set scheme is the better choice, since it has significantly lower memory requirements. Again, this can be readily observed from the fact that its factorizations are significantly cheaper compared to those of IP-PMM (since the active-set scheme performs significantly more factorizations and still achieves comparable performance). Indeed, as will become clear when solving large-scale quantile regression and binary classification instances, the proposed active-set solver scales better than IP-PMM or OSQP. Additionally, we also observe that the method is robust (i.e. converges reliably to an accurate solution).
Finally, as in the case of the CVaR instances, OSQP struggled to find accurate solutions within a reasonable number of iterations, making it a less efficient choice for such problems. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{\(\mathbf{\tau}\)} & \multicolumn{3}{c}{**Iterations**} & \multicolumn{3}{c}{**Time (s)**} \\ \cline{3-8} & & \multicolumn{1}{c}{PMM(SSN)[Fact.]} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQP} & \multicolumn{1}{c}{AS} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQP} \\ \hline \multirow{2}{*}{DowJones} & 0.01 & 51(142)[139] & 10 & 32,350 & 1.05 & **0.56** & 10.47 \\ & 0.05 & 52(142)[139] & 13 & 14,050 & 0.72 & **0.54** & 4.60 \\ \hline \multirow{2}{*}{NASDAQ100} & 0.01 & 47(167)[160] & 11 & 18,800 & 0.67 & **0.26** & 5.02 \\ & 0.05 & 48(176)[161] & 10 & 40,550 & 0.67 & **0.27** & 10.72 \\ \hline \multirow{2}{*}{FTSE100} & 0.01 & 46(153)[150] & 10 & 18,325 & 0.79 & **0.27** & 9.56 \\ & 0.05 & 46(153)[150] & 14 & 47,125 & 0.74 & **0.35** & 23.29 \\ \hline \multirow{2}{*}{FF49Industries} & 0.01 & 53(141)[136] & 11 & 50,000\({}^{\ddagger}\) & **1.49** & 1.72 & 57.59\({}^{\ddagger}\) \\ & 0.05 & 52(137)[132] & 9 & 12,050 & **1.35** & 1.51 & 14.24 \\ \hline \multirow{2}{*}{SP500} & 0.01 & 42(165)[163] & 17 & 28,375 & 5.33 & **2.91** & 87.10 \\ & 0.05 & 41(157)[153] & 17 & 41,275 & 4.39 & **3.07** & 128.09 \\ \hline \multirow{2}{*}{NASDAQComp} & 0.01 & 44(143)[133] & 17 & 34,175 & **9.75** & 9.91 & 310.12 \\ & 0.05 & 44(143)[132] & 18 & 41,000 & 10.22 & **9.72** & 367.32 \\ \hline \hline \end{tabular} * \({}^{\ddagger}\) indicates that the solver reached the maximum number of iterations. \end{table} Table 4: MASD portfolio selection: varying regularization (\(\texttt{tol}=10^{-5}\)).
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{\(\mathbf{\alpha}\)} & \multicolumn{4}{c}{**Iterations**} & \multicolumn{3}{c}{**Time (s)**} \\ \cline{3-8} & & \multicolumn{2}{c}{PMM(SSN)[Fact.]} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQP} & \multicolumn{1}{c}{AS} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQP} \\ \hline \multirow{3}{*}{DowJones} & 0.05 & 35(168)[144] & 21 & 24,975 & **0.57** & 0.85 & 8.31 \\ & 0.10 & 36(154)[133] & 20 & 21,925 & **0.64** & 0.79 & 7.32 \\ & 0.15 & 36(143)[128] & 17 & 29,625 & **0.68** & 0.69 & 9.95 \\ \hline \multirow{3}{*}{NASDAQ100} & 0.05 & 34(157)[124] & 19 & 17,225 & **0.46** & 0.57 & 4.50 \\ & 0.10 & 34(155)[133] & 16 & 18,100 & 0.59 & **0.49** & 4.80 \\ & 0.15 & 35(157)[132] & 16 & 20,775 & 0.56 & **0.48** & 5.49 \\ \hline \multirow{3}{*}{FTSE100} & 0.05 & 34(166)[137] & 21 & 18,900 & **0.65** & 1.17 & 10.10 \\ & 0.10 & 34(174)[157] & 16 & 24,750 & 0.92 & **0.86** & 13.15 \\ & 0.15 & 36(166)[147] & 17 & 28,850 & 0.95 & **0.92** & 15.53 \\ \hline \multirow{3}{*}{FF49Industries} & 0.05 & 43(215)[171] & 19 & 48,000 & **1.63** & 3.00 & 59.33 \\ & 0.10 & 41(184)[146] & 22 & 40,475 & **1.57** & 3.53 & 48.48 \\ & 0.15 & 41(160)[136] & 19 & 35,525 & **1.45** & 3.06 & 40.91 \\ \hline \multirow{3}{*}{SP500} & 0.05 & 32(163)[144] & 20 & 21,425 & **4.68** & 12.26 & 66.29 \\ & 0.10 & 33(165)[143] & 13 & 22,100 & **5.86** & 12.43 & 73.56 \\ \cline{1-1} & 0.15 & 35(169)[148] & 19 & 32,225 & **5.61** & 11.51 & 98.25 \\ \hline \multirow{3}{*}{NASDAQComp} & 0.05 & 32(190)[182] & 26 & 17,200 & **15.55** & 90.78 & 157.83 \\ \cline{1-1} & 0.10 & 34(180)[172] & 15 & 13,925 & **22.75** & 51.81 & 127.26 \\ \cline{1-1} & 0.15 & 34(170)[167] & 18 & 14,675 & **24.69** & 62.40 & 137.42 \\ \hline \hline \end{tabular} * \({}^{\ddagger}\) indicates that the solver reached the maximum number of iterations. \end{table} Table 3: CVaR portfolio selection: varying confidence level (\(\texttt{tol}=10^{-5}\), \(\tau=10^{-1}\)). #### 4.1.3 Extensions and alternative risk measures Let us notice that the presented methodology can be easily extended to the multi-period case [7, 33]. Then, one could include an additional fused-lasso term in the objective function in order to ensure low transaction costs. It is important to note that in this case the \(\ell_{1}\) term added in the objective has the effect of reducing holding costs as well as short positions (e.g. see [15, 16, 19]). As noted in Remark 1, the additional fused-lasso term can be easily incorporated in the objective of (P). Multi-period portfolio selection problems are not considered here, however, one can expect a very similar behaviour to that observed in the single-period case. Finally, we could easily deal with alternative risk measures, such as the variance (e.g. [36]), combination of CVaR and MASD (e.g. see [6]), or approximations of other risk measures via multiple CVaR measures (see [26]). These were not included here for brevity of presentation. ### Penalized quantile regression Next we consider linear regression models of the following form \[y_{i}=\beta_{0}+\xi_{i}^{\top}\beta+\epsilon_{i},\qquad i\in\{1,\ldots,l\}\] where \(\xi_{i}\) is a \(d\)-dimensional vector of covariates, \((\beta_{0},\beta)\) are the regression coefficients and \(\epsilon_{i}\) is some random error. 
A very popular problem in statistics is the estimation of the optimal coefficients, in the sense of minimizing a penalized empirical loss of the following form: \[\min_{(\beta_{0},\beta)\ \in\ \mathbb{R}\times\mathbb{R}^{d}}\left\{\frac{1}{l}\sum_{i=1}^{l}\ell\left(y_{i}-\beta_{0}-\xi_{i}^{\top}\beta\right)+\lambda p(\beta)\right\}, \tag{4.4}\] where \(\ell(\cdot)\) is some loss function and \(p(\cdot)\) is a penalty function with an associated regularization parameter \(\lambda\geq 0\). Following [54], we consider the elastic-net penalty, \[p(\beta)\equiv p_{\tau}(\beta)\coloneqq\tau\|\beta\|_{1}+\frac{1-\tau}{2}\|\beta\|_{2}^{2},\qquad 0\leq\tau\leq 1.\] For the loss function, we employ the quantile loss \[\ell(w)\equiv\rho_{\alpha}(w)\coloneqq(1-\alpha)\,w_{-}+\alpha w_{+}=\frac{1}{2}\left(|w|+(2\alpha-1)w\right),\qquad 0<\alpha<1, \tag{4.5}\] where \(w\in\mathbb{R}\). Notice that the case \(\alpha=\frac{1}{2}\) yields the absolute loss. Letting \(x=\begin{bmatrix}\beta_{0}&\beta^{\top}\end{bmatrix}^{\top}\), and using Remark 1, we can re-write problem (4.4) in the form of (P), as \[\min_{(x,w)\ \in\ \mathbb{R}^{1+d}\times\mathbb{R}^{l}} \left\{(\alpha-1)\mathds{1}_{l}^{\top}w+\frac{1}{2}x^{\top}Qx+\sum_{i=1}^{l}(w_{i})_{+}+\|Dx\|_{1}\right\},\] \[\text{s.t.} \frac{1}{l}\left(-\begin{bmatrix}1&\xi_{i}^{\top}\end{bmatrix}x+y_{i}\right)-w_{i}=0,\qquad i=1,\ldots,l,\] where \[Q=\begin{bmatrix}0&0_{1,d}\\ 0_{d,1}&\lambda(1-\tau)I_{d}\end{bmatrix},\qquad D=\begin{bmatrix}0&0_{1,d}\\ 0_{d,1}&\lambda\tau I_{d}\end{bmatrix}.\] Real-world datasets: In what follows, we solve several instances of problem (4.4). We showcase the effectiveness of the proposed approach on 5 regression problems taken from the LIBSVM library (see [11]). Additional information on the datasets is collected in Table 5. We fix \(\texttt{tol}=10^{-4}\), \(\lambda=10^{-2}\), and \(\tau=0.5\), and run all three methods (active-set (AS), IP-PMM and OSQP) on the 5 instances for varying quantile level \(\alpha\). The results are collected in Table 6. From Table 6 we observe that the three approaches are comparable for the smaller instances (space_ga, abalone, and cpusmall). The active-set scheme significantly outperforms IP-PMM and OSQP on the cadata problem, mainly due to its better numerical stability. Indeed, this problem is highly ill-conditioned, and this is reflected in the increased number of IP-PMM iterations needed to obtain a 4-digit accurate solution. Finally, we observe that for the large-scale instance (E2006-tfidf), IP-PMM and OSQP crashed due to memory requirements. In this case, the active-set scheme was warm-started by a matrix-free ADMM (as discussed in Section 3), since the standard ADMM also crashed due to excessive memory requirements. The active-set scheme, however, manages to solve these large-scale instances very efficiently, without running into any memory issues (consistently). The OSQP solver is competitive for certain instances, albeit slightly slower than both second-order solvers, but fails to solve the cpusmall instances due to an incorrect classification of these instances as infeasible. Additionally, we observe that it is less consistent than the second-order approaches, since its iterations vary greatly with the problem parameters (see instances space_ga and abalone). Next, we fix \(\texttt{tol}=10^{-4}\) and \(\alpha=0.8\), and run the three methods for varying regularization parameters \(\tau\) and \(\lambda\). The results are collected in Table 7.
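Before turning to the results, the building blocks of (4.4)-(4.5) can be illustrated with a minimal NumPy sketch (synthetic data only; the function and variable names are ours and this is not the proposed solver). It checks the closed form of the quantile loss and evaluates the elastic-net-penalized objective for a candidate \((\beta_{0},\beta)\):

```python
import numpy as np

def quantile_loss(w, alpha):
    """rho_alpha(w) = (1 - alpha) * w_- + alpha * w_+  (elementwise)."""
    return (1.0 - alpha) * np.maximum(-w, 0.0) + alpha * np.maximum(w, 0.0)

def objective(beta0, beta, y, Xi, alpha, lam, tau):
    """Penalized quantile regression objective (4.4) with the elastic-net penalty."""
    residual = y - beta0 - Xi @ beta
    loss = quantile_loss(residual, alpha).mean()
    penalty = lam * (tau * np.abs(beta).sum() + 0.5 * (1.0 - tau) * (beta @ beta))
    return loss + penalty

# Tiny synthetic instance, for illustration only.
rng = np.random.default_rng(2)
Xi = rng.standard_normal((200, 5))
y = 1.0 + Xi @ np.array([0.5, -0.3, 0.0, 0.0, 0.2]) + 0.1 * rng.standard_normal(200)

# The closed form 0.5 * (|w| + (2*alpha - 1) * w) in (4.5) agrees with the definition.
w = rng.standard_normal(1000)
alpha = 0.8
assert np.allclose(quantile_loss(w, alpha), 0.5 * (np.abs(w) + (2 * alpha - 1) * w))

print(objective(0.0, np.zeros(5), y, Xi, alpha=0.8, lam=1e-2, tau=0.5))
```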
We observe that both second-order methods are quite robust for a wide range of parameter values. OSQP is competitive for certain instances, however, its behaviour is greatly influenced by the problem parameters. Nonetheless, it was able to outperform the second-order solvers on two out of the four cadata instances. Finally, the active-set scheme consistently solved the large-scale instance (E2006-tfidf) for a wide range of parameter values, without running into any memory issues. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Problem**} & \multirow{2}{*}{\(\boldsymbol{\alpha}\)} & \multicolumn{3}{c}{**Iterations**} & \multicolumn{3}{c}{**Time (s)**} \\ \cline{3-8} & & \multicolumn{2}{c}{PMM(SSN)[Fact.]} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQ} & \multicolumn{1}{c}{AS} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQP} \\ \hline \multirow{4}{*}{space\_ga} & 0.50 & 15(43)[30] & 19 & 2350 & 0.38 & **0.35** & 0.51 \\ & 0.65 & 15(50)[35] & 20 & 16,300 & **0.37** & 0.39 & 3.36 \\ & 0.80 & 15(62)[48] & 19 & 16,975 & 0.46 & **0.35** & 3.46 \\ & 0.95 & 15(78)[77] & 20 & 7400 & 0.48 & **0.36** & 1.51 \\ \hline \multirow{4}{*}{abalone} & 0.50 & 14(68)[64] & 35 & 6350 & **1.02** & 1.29 & 2.15 \\ & 0.65 & 14(68)[63] & 35 & 6300 & **0.89** & 1.36 & 2.17 \\ & 0.80 & 14(79)[74] & 27 & 12,425 & **0.89** & 1.01 & 4.13 \\ & 0.95 & 14(87)[79] & 18 & 2875 & 0.82 & **0.69** & 0.97 \\ \hline \multirow{4}{*}{cpusmall} & 0.50 & 15(64)[64] & 29 & \(\overline{\bar{\bar{\land}}}^{1}\) & **2.01** & 2.57 & \(\overline{\bar{\bar{\land}}}\) \\ & 0.65 & 16(74)[74] & 30 & \(\overline{\bar{\land}}\) & **2.33** & 2.81 & \(\overline{\bar{\bar{\land}}}\) \\ & 0.80 & 16(86)[85] & 26 & \(\overline{\bar{\land}}\) & 2.64 & **2.37** & \(\overline{\bar{\bar{\land}}}\) \\ & 0.95 & 15(110)[110] & 22 & \(\overline{\bar{\land}}\) & 2.85 & **1.96** & \(\overline{\bar{\bar{\land}}}\) \\ \hline \multirow{4}{*}{cadata} & 0.50 & 3(66)[65] & 49 & 7125 & **2.96** & 14.63 & 16.10 \\ & 0.65 & 4(85)[83] & 42 & 9050 & **3.65** & 12.43 & 20.27 \\ & 0.80 & 3(77)[76] & 45 & 7275 & **3.45** & 13.13 & 16.45 \\ & 0.95 & 3(225)[224] & 80 & 7450 & **13.14** & 23.32 & 17.08 \\ \hline \multirow{4}{*}{E2006-tfidf} & 0.50 & 14(27)[20] & \(\dagger^{2}\) & \(\dagger\) & **84.82** & \(\dagger\) & \(\dagger\) \\ & 0.65 & 15(34)[26] & \(\dagger\) & \(\dagger\) & **95.08** & \(\dagger\) & \(\dagger\) \\ \cline{1-1} & 0.80 & 16(46)[35] & \(\dagger\) & \(\dagger\) & **112.24** & \(\dagger\) & \(\dagger\) \\ \cline{1-1} & 0.95 & 17(79)[71] & \(\dagger\) & \(\dagger\) & **165.78** & \(\dagger\) & \(\dagger\) \\ \hline \hline \end{tabular} * \({}^{1}\)\(\overline{\bar{\land}}\) indicates that the solver incorrectly identified the problem as infeasible. * \({}^{2}\)\(\dagger\) indicates that the solver ran out of memory. \end{table} Table 6: Quantile regression analysis. ### Binary classification via linear support vector machines Finally, we are interested in training a binary linear classifier using regularized soft-margin linear support vector machines (SVMs), [51]. 
More specifically, given a training dataset \(\{(y_{i},\xi_{i})\}_{i=1}^{l}\), where \(y_{i}\in\{-1,1\}\) are the _labels_ and \(\xi_{i}\in\mathbb{R}^{d}\) are the _feature vectors_ (with \(d\) the number of features), we would like to solve the following optimization problem \[\min_{(\beta_{0},\beta)\in\mathbb{R}\times\mathbb{R}^{d}}\left\{\frac{1}{l}\sum_{i=1}^{l}\left(1-y_{i}\left(\xi_{i}^{\top}\beta-\beta_{0}\right)\right)_{+}+\lambda\left(\tau_{1}\|\beta\|_{1}+\frac{\tau_{2}}{2}\|\beta\|_{2}^{2}\right)\right\}, \tag{4.6}\] where \(\lambda>0\) is a regularization parameter, and \(\tau_{1},\ \tau_{2}>0\) are the weights for the \(\ell_{1}\) and \(\ell_{2}\) regularizers, respectively. The standard Euclidean regularization is traditionally used as a trade-off between the margin of the classifier (the larger the better) and the correct classification of \(\xi_{i}\), for all \(i\in\{1,\ldots,l\}\) (e.g. [17]). However, this often leads to a dense estimate for \(\beta\), which has led researchers in the machine learning community to consider the \(\ell_{1}\) regularizer instead, in order to encourage sparsity (e.g. [8]). It is well known that both regularizers can be combined to obtain the effect of each individual one, using the elastic-net penalty (see, for example, [52]), assuming that \(\tau_{1}\) and \(\tau_{2}\) are appropriately tuned. Real-world datasets: In what follows we consider elastic-net SVM instances of the form of (4.6). We showcase the effectiveness of the proposed active-set scheme on 3 large-scale binary classification datasets taken from the LIBSVM library ([11]). The problem names, as well as the numbers of features and training points, are collected in Table 8. In the experiments to follow, we only consider large-scale problems that neither IP-PMM nor OSQP can solve, due to excessive memory requirements. Nonetheless, we should note that, by following the developments in [19], IP-PMM can be specialized to problems of this form in order to be able to tackle large-scale instances. However, the same can be said about the proposed active-set method. The schemes tested in this work employ factorization for the solution of their associated linear systems. The introduction of preconditioning, e.g. as in [39], could improve their efficiency in certain cases, but is outside the scope of this article.
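For concreteness, the objective in (4.6) is straightforward to evaluate directly; the following minimal NumPy sketch (synthetic data, not one of the LIBSVM instances, and not the proposed active-set solver) spells out the average hinge loss plus the elastic-net penalty:

```python
import numpy as np

def svm_objective(beta0, beta, y, Xi, lam, tau1, tau2):
    """Regularized soft-margin linear SVM objective (4.6):
    average hinge loss plus an elastic-net penalty on beta."""
    margins = 1.0 - y * (Xi @ beta - beta0)
    hinge = np.maximum(margins, 0.0).mean()
    penalty = lam * (tau1 * np.abs(beta).sum() + 0.5 * tau2 * (beta @ beta))
    return hinge + penalty

# Tiny synthetic binary classification instance, for illustration only.
rng = np.random.default_rng(3)
Xi = rng.standard_normal((500, 20))
truth = np.zeros(20)
truth[:3] = [1.5, -2.0, 1.0]                                   # sparse "ground truth"
y = np.where(Xi @ truth + 0.1 * rng.standard_normal(500) >= 0, 1.0, -1.0)

print(svm_objective(0.0, np.zeros(20), y, Xi, lam=1e-2, tau1=0.8, tau2=0.2))
```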
In the following experiments, \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline \multirow{2}{*}{**Problem**} & \multicolumn{6}{c}{**Iterations**} & \multicolumn{3}{c}{**Time (s)**} \\ \cline{3-8} & & \multicolumn{2}{c}{PMM(SSN)[Fact.]} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQP} & \multicolumn{1}{c}{AS} & \multicolumn{1}{c}{IP–PMM} & \multicolumn{1}{c}{OSQP} \\ \hline \multirow{3}{*}{space\_ga} & \((0.2,5\cdot 10^{-2})\) & 15(58)[49] & 16 & 37,125 & 0.51 & **0.30** & 8.00 \\ & \((0.4,1\cdot 10^{-2})\) & 15(61)[48] & 26 & 12,050 & **0.46** & 0.49 & 2.48 \\ & \((0.6,5\cdot 10^{-3})\) & 15(67)[46] & 25 & 13,025 & **0.44** & 0.46 & 2.68 \\ & \((0.8,1\cdot 10^{-3})\) & 15(82)[62] & 27 & 20,575 & 0.61 & **0.49** & 4.21 \\ \hline \multirow{3}{*}{abalone} & \((0.2,5\cdot 10^{-2})\) & 14(70)[65] & 25 & 16,225 & **0.88** & 1.32 & 5.49 \\ & \((0.4,1\cdot 10^{-2})\) & 14(71)[67] & 34 & 15,575 & **0.86** & 1.48 & 5.16 \\ & \((0.6,5\cdot 10^{-3})\) & 15(80)[76] & 39 & 18,675 & **0.92** & 1.50 & 6.26 \\ & \((0.8,1\cdot 10^{-3})\) & 15(116)[109] & 30 & 9975 & 1.44 & **1.16** & 3.34 \\ \hline \multirow{3}{*}{cpusmall} & \((0.2,5\cdot 10^{-2})\) & 15(83)[81] & 20 & \(\overset{\Xi}{\bar{\Lambda}}\) & **1.98** & 2.14 & \(\overset{\Xi}{\bar{\Lambda}}\) \\ & \((0.4,1\cdot 10^{-2})\) & 15(81)[78] & 26 & \(\overset{\Xi}{\bar{\Lambda}}\) & **2.10** & 2.39 & \(\overset{\Xi}{\bar{\Lambda}}\) \\ & \((0.6,5\cdot 10^{-3})\) & 16(101)[94] & 21 & \(\overset{\Xi}{\bar{\Lambda}}\) & 2.79 & **1.94** & \(\overset{\Xi}{\bar{\Lambda}}\) \\ & \((0.8,1\cdot 10^{-3})\) & 15(106)[103] & 20 & \(\overset{\Xi}{\bar{\Lambda}}\) & 3.16 & **1.85** & \(\overset{\Xi}{\bar{\Lambda}}\) \\ \hline \multirow{3}{*}{cadata} & \((0.2,5\cdot 10^{-2})\) & 9(147)[143] & 62 & 5875 & 15.42 & 18.05 & **13.18** \\ & \((0.4,1\cdot 10^{-2})\) & 4(63)[62] & 50 & 7350 & **2.76** & 14.68 & 17.41 \\ \cline{1-1} & \((0.6,5\cdot 10^{-3})\) & 5(121)[118] & 45 & 9775 & **5.49** & 13.32 & 21.70 \\ \cline{1-1} & \((0.8,1\cdot 10^{-3})\) & 7(276)[276] & 33 & 875 & 14.01 & 9.91 & **2.08** \\ \hline \multirow{3}{*}{E2006-tfdfdf} & \((0.2,5\cdot 10^{-2})\) & 16(47)[38] & \(\dagger^{2}\) & \(\dagger\) & **115.87** & \(\dagger\) & \(\dagger\) \\ \cline{1-1} & \((0.4,1\cdot 10^{-3})\) & 16(43)[36] & \(\dagger\) & \(\dagger\) & **118.89** & \(\dagger\) & \(\dagger\) \\ \cline{1-1} & \((0.6,5\cdot 10^{-3})\) & 16(43)[34] & \(\dagger\) & \(\dagger\) & **115.90** & \(\dagger\) & \(\dagger\) \\ \cline{1-1} & \((0.8,1\cdot 10^{-3})\) & 17(48)[40] & \(\dagger\) & \(\dagger\) & **128.38** & \(\dagger\) & \(\dagger\) \\ \hline \hline \end{tabular} * \({}^{1}\overset{\Xi}{\bar{\Lambda}}\) indicates that the solver incorrectly identified the problem as infeasible. * \({}^{2}\dagger\) indicates that the solver ran out of memory. \end{table} Table 7: Quantile regression: varying regularization (\(\texttt{tol}=10^{-4}\), \(\alpha=0.8\)). we employ the (matrix-free) prox-linear ADMM (as described in Section 3), to avoid any memory issues in the warm-starting phase of the proposed algorithm. We fix \(\texttt{tol}=10^{-5}\), \(\lambda=10^{-2}\), and run the active-set solver for all datasets given in Table 8 for varying values of the regularization parameters \(\tau_{1}\), and \(\tau_{2}\). The results are collected in Table 9. We observe that the solver was able to consistently solve these instances, without running into memory issues. 
In this case, the behaviour of the active-set solver was affected by the parameters \(\tau_{1}\) and \(\tau_{2}\) (indeed, see the number of SSN iterations for different regularization values), however, we consistently obtain convergence in a very reasonable amount of time. Overall, we observe that the proposed algorithm is very general and can be applied in a plethora of very important applications arising in practice. We were able to showcase that the active-set nature of the method allows one to solve large-scale instances on a personal computer, without the need of employing iterative linear algebra (which could also complement the solver, as in [39]). The proposed method strikes a good balance between first-order methods, which are fast but unreliable, and second-order interior point methods, which are extremely robust but can struggle with the problem size, ill-conditioning and memory requirements. We have demonstrated that \(\ell_{1}\)-regularized convex quadratic problems with piecewise-linear terms can be solved very efficiently using the proposed active-set scheme, and we conjecture that the algorithm can be readily extended to deal with general nonlinear convex objectives, or discretizations of stochastic two-stage problems. ## 5 Conclusions In this paper we derived an efficient active-set method for the solution of convex quadratic optimization problems with piecewise-linear terms in the objective. The method, which complements our developments in the accompanying paper ["_An active-set method for sparse approximations. Part I: Separable \(\ell_{1}\) terms_", _S. Pougkakiolis, J. Gondzio, D. S. Kalogerias_], arises by suitably combining a proximal method of multipliers with a semismooth Newton scheme, and admits an active-set interpretation. By taking advantage of the piecewise-linear terms in the objective, the method has very reasonable memory requirements since it utilizes only a small active-set at every inner-outer iteration. We warm-start the algorithm using an appropriate alternating direction \begin{table} \begin{tabular}{l l l} \hline \hline **Name** & **\# of training points** & **\# of features** \\ \hline rcv1 & 20,242 & 47,236 \\ real-sim & 72,309 & 20,958 \\ news20 & 19,996 & 1,355,191 \\ \hline \hline \end{tabular} \end{table} Table 8: Binary classification datasets. \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{**Problem**} & \multirow{2}{*}{\(\mathbf{\tau_{1}}\)} & \multirow{2}{*}{\(\mathbf{\tau_{2}}\)} & **Iterations** & **Time (s)** \\ \cline{3-4} & & & \multicolumn{2}{c}{PMM(SSN)[Fact.]} & \multicolumn{1}{c}{AS} \\ \hline & 0.2 & 0.2 & 48(139)[105] & 43.36 \\ rcv1 & 0.8 & 0.2 & 47(100)[58] & 23.28 \\ & 0.2 & 0.8 & 47(165)[133] & 54.81 \\ & 5 & 5 & 39(85)[45] & 20.57 \\ \hline & 0.2 & 0.2 & 50(130)[100] & 252.25 \\ real-sim & 0.8 & 0.2 & 47(85)[48] & 132.20 \\ & 0.2 & 0.8 & 48(101)[68] & 201.56 \\ & 5 & 5 & 47(85)[48] & 128.75 \\ \hline & 0.2 & 0.2 & 50(142)[102] & 213.12 \\ news20 & 0.8 & 0.2 & 41(85)[43] & 133.94 \\ & 0.2 & 0.8 & 70(212)[161] & 74.42 \\ & 5 & 5 & 41(85)[43] & 130.33 \\ \hline \hline \end{tabular} \end{table} Table 9: Binary classification via elastic-net SVMs: varying regularization (\(\texttt{tol}=10^{-5}\), \(\lambda=10^{-2}\)). method of multipliers, and ensure faster convergence and reduced memory requirements. 
We showcase the efficiency and robustness of the proposed scheme on a variety of optimization problems arising in risk-averse portfolio optimization, quantile regression, and binary classification via linear support vector machines. A numerical comparison against a robust interior point method and a state-of-the-art alternating direction method of multipliers demonstrates the viability of the approach as well as its limited memory requirements. In particular, we observe a significantly better behaviour, compared to the two other solvers, when dealing with large-scale instances. Overall, the approach remains efficient for a wide range of problems and strikes a good balance between cheap but unreliable first-order methods and expensive but highly reliable interior point methods. ## Appendix A Appendix: Termination criteria The optimality conditions of (P) can be written as \[\begin{split}\mathbf{prox}_{g_{1}}\left(x-c-Qx+\begin{bmatrix}C ^{\top}&A^{\top}\end{bmatrix}y-z\right)=\ x,&\mathbf{prox}_{g_{2}}\left(w-y_ {1:l}\right)=\ w,\\ \begin{bmatrix}Cx+d-w\\ Ax-b\end{bmatrix}=\ 0_{l+m},&\Pi_{\mathcal{K}}(x+z)=\ x,\end{split}\] and the termination criteria for Algorithm PMM (given a tolerance \(\epsilon>0\)) can be summarized as \[\begin{split}\frac{\left\|x-\mathbf{prox}_{g_{1}}\left(x-c-Qx+ \begin{bmatrix}C^{\top}&A^{\top}\end{bmatrix}y-z\right)\right\|}{1+\|c\|_{ \infty}}\leq\epsilon,&\left\|w-\mathbf{prox}_{g_{2}}\left(w-y_{1:l}\right) \right\|\leq\epsilon,\\ \frac{\left\|\begin{bmatrix}Cx+d-w\\ Ax-b\end{bmatrix}\right\|}{1+\|b\|_{\infty}+\|d\|_{\infty}}\leq\epsilon,& \frac{\|x-\Pi_{\mathcal{K}}(x+z)\|}{1+\|x\|_{\infty}+\|z\|_{\infty}}\leq \epsilon.\end{split}\] (A.1) From the reformulation of (P) given in (P'), the termination criteria of Algorithm pADMM are as follows (upon noting that the variables of the algorithm are \((x,w,u,y)\)) \[\begin{split}\frac{\left\|c+Qx-\begin{bmatrix}C^{\top}&A^{\top}-I_{n}&0_{n,l}\end{bmatrix}y\right\|}{1+\|c\|}\leq\epsilon,&\left\|\begin{bmatrix}I_{l}&0_{l,m+n}&I_{l} \end{bmatrix}y\right\|\leq\epsilon,\\ \frac{\left\|\begin{matrix}M_{r}\begin{bmatrix}x\\ w\\ u\end{bmatrix}-\begin{bmatrix}-d\\ b\\ 0_{l+n}\end{bmatrix}\right\|}{\left\|\begin{bmatrix}-d\\ b\end{bmatrix}\right\|+1}\leq\epsilon,&\frac{\left\|u-\Pi_{\mathcal{K}\times \mathbb{R}^{l}}\left(\mathbf{prox}_{g}\left(u+\tilde{y}\right)\right)\right\|} {1+\|u\|+\|\tilde{y}\|}\leq\epsilon,\end{split}\] (A.2) where \(\tilde{y}\coloneqq y_{(l+m+1:2l+m+n)}\).
2306.00101
$\widetilde{\mid}\hspace{1mm}$-divisibility of ultrafilters II: The big picture
A divisibility relation on ultrafilters is defined as follows: ${\cal F}\hspace{1mm}\widetilde{\mid}\hspace{1mm}{\cal G}$ if and only if every set in $\cal F$ upward closed for divisibility also belongs to $\cal G$. After describing the first $\omega$ levels of this quasiorder, in this paper we generalize the process of determining the basic divisors of an ultrafilter. First we describe these basic divisors, obtained as (equivalence classes of) powers of prime ultrafilters. Using methods of nonstandard analysis we determine the pattern of an ultrafilter: the collection of its basic divisors as well as the multiplicity of each of them. All such patterns have a certain closure property in an appropriate topology. We isolate the family of sets belonging to every ultrafilter with a given pattern. We show that every pattern with the closure property is realized by an ultrafilter. Finally, we apply patterns to obtain an equivalent condition for an ultrafilter to be self-divisible.
Boris Šobot
2023-05-31T18:23:27Z
http://arxiv.org/abs/2306.00101v4
# \(\widetilde{\mid}\)-divisibility of ultrafilters II: The big picture ###### Abstract A divisibility relation on ultrafilters is defined as follows: \(\mathcal{F}\,\widetilde{\mid}\,\mathcal{G}\) if and only if every set in \(\mathcal{F}\) upward closed for divisibility also belongs to \(\mathcal{G}\). After describing the first \(\omega\) levels of this quasiorder, in this paper we generalize the process of determining the basic divisors of an ultrafilter. First we describe these basic divisors, obtained as (equivalence classes of) powers of prime ultrafilters. Using methods of nonstandard analysis we determine the pattern of an ultrafilter: the collection of its basic divisors as well as the multiplicity of each of them. All such patterns have a certain closure property in an appropriate topology. We isolate the family of sets belonging to every ultrafilter with a given pattern. Finally, we show that every pattern with the closure property is realized by an ultrafilter. Keywords: divisibility, Stone-Cech compactification, ultrafilter MSC2020 classification: 03H15, 11U10, 54D35, 54D80 ## 1 Introduction Let \(\mathbb{N}\) denote the set of natural numbers (without zero), and \(\omega=\mathbb{N}\cup\{0\}\). We will also use the extended set \(\mathbb{N}_{\infty}=\omega\cup\{\infty\}\), where \(\infty=\mathfrak{c}^{+}\) (the reasons for this will become clear later). \(\beta\mathbb{N}\) is the set of all ultrafilters on \(\mathbb{N}\) and, for each \(n\in\mathbb{N}\), the principal ultrafilter \(\{A\subseteq\mathbb{N}:n\in A\}\) is identified with \(n\). Considering the topology with base sets \(\overline{A}=\{\mathcal{F}\in\beta\mathbb{N}:A\in\mathcal{F}\}\), we think of \(\beta\mathbb{N}\) as an extension of the discrete space \(\mathbb{N}\), called the Stone-Cech compactification of \(\mathbb{N}\). In general, for \(S\subseteq\beta\mathbb{N}\), \(\mathrm{cl}S\) will denote the closure of \(S\) in this topology; in particular, for \(A\subseteq\mathbb{N}\), \(\mathrm{cl}A=\overline{A}\). One of the main features of \(\beta\mathbb{N}\) is that each function \(f:\mathbb{N}\to\mathbb{N}\) can be uniquely extended to a continuous \(\widetilde{f}:\beta\mathbb{N}\to\beta\mathbb{N}\). Using this, binary operations can also be extended, so by applying this to the multiplication on \(\mathbb{N}\) (and denoting the extension also by \(\cdot\)) a right-topological semigroup \((\beta\mathbb{N},\cdot)\) is obtained, where \[\mathcal{F}\cdot\mathcal{G}=\{A\subseteq\mathbb{N}:\{n\in\mathbb{N}:A/n\in\mathcal{G}\}\in\mathcal{F}\},\] and \(A/n=\{a/n:a\in A\wedge n\mid a\}\). Many properties of this and other semigroups obtained in this way are described in the book [3]. In [7] several ways to extend the divisibility relation to \(\beta\mathbb{N}\) were proposed. One of them proved to have many nice properties: if \(A\mathord{\uparrow}=\{n\in\mathbb{N}:(\exists a\in A)a\mid n\}\) for \(A\subseteq\mathbb{N}\) and \(\mathcal{U}=\{A\in P(\mathbb{N})\setminus\{\emptyset\}:A=A\mathord{\uparrow}\}\), let \(\mathcal{F}\,\widetilde{\mid}\,\mathcal{G}\) if \(A\mathord{\uparrow}\in\mathcal{G}\) holds for every \(A\in\mathcal{F}\).
It turned out that a general (so-called canonical) way of extending relations described in [5] gives the same relation, and that another equivalent condition is more convenient in practice: \[\mathcal{F}\,\widetilde{\mid}\,\mathcal{G}\Leftrightarrow\mathcal{F}\cap\mathcal{U}\subseteq\mathcal{G}.\] **Proposition 1.1**: _There is \(\mathcal{P}\in\overline{P}\setminus P\) such that, for every \(n\in\mathbb{N}\), there are at least two ultrafilters \(\mathcal{F},\mathcal{G}\in\overline{L_{n}}\) distinct from \(\mathcal{P}^{n}\) having \(\mathcal{P}\) as their only basic divisor._ However, in \(\beta\mathbb{N}\setminus L\) things look differently: the relation \(\widetilde{\mid}\) (...) Atoms and sets belonging to some \({}^{*}\!A\) for \(A\in V(X)\) are called _internal_. Hence, one must be careful: quantifiers of the form (\(\forall x\in{}^{*}\!A\)) or (\(\exists x\in{}^{*}\!A\)) refer only to internal sets. By _The Internal Definition Principle_, sets defined from internal sets are also internal; for a precise formulation see [2], Section 13.15.
For every hyperfinite set \(S\) (that is, for \(S\in{}^{*}\!([\mathbb{N}]^{<\aleph_{0}})\), where \([\mathbb{N}]^{<\aleph_{0}}\) is the family of all finite subsets of \(\mathbb{N}\)) there is a unique \(t\in{}^{*}\!\mathbb{N}\) for which an internal bijection \(f:\{1,2,\ldots,t\}\to S\) exists; this \(t\) is called the internal cardinality of \(S\). Nonstandard extensions are not unique, and they differ in the richness of the objects they contain. A nonstandard extension is \({\mathfrak{c}}^{+}\)-_saturated_ if, for every family \(F\) of internal sets with the finite intersection property such that \(|F|\leq{\mathfrak{c}}\), there is an element in \(\bigcap F\). **We will assume all the time that we are working with a fixed \({\mathfrak{c}}^{+}\)-saturated extension in which \(|{}^{*}\!\mathbb{N}|={\mathfrak{c}}^{+}\).** This requires an additional set-theoretic assumption; for example, \(2^{\mathfrak{c}}={\mathfrak{c}}^{+}\) will suffice. For the existence of such an extension see Theorem 11.4.5 of [1]. With this assumption, for every \({\cal F}\in\beta\mathbb{N}\) its monad \(\mu({\cal F}):=\{x\in{}^{*}\!\mathbb{N}:v(x)={\cal F}\}\) is nonempty, in fact of cardinality \({\mathfrak{c}}^{+}\). It also implies a connection between the divisibility relations \(\widetilde{\mid}\) and \({}^{*}\!\mid\), as shown in the next result (part of Theorem 3.4 from [11]). **Proposition 1.2**: _The following conditions are equivalent for every two ultrafilters \({\cal F},{\cal G}\in\beta\mathbb{N}\):_ _(i) \({\cal F}\,\widetilde{\mid}\,{\cal G}\);_ _(ii) there are \(x\in\mu({\cal F})\) and \(y\in\mu({\cal G})\) such that \(x\ {}^{*}\!\mid y\);_ _(iii) for every \(x\in\mu({\cal F})\) there is \(y\in\mu({\cal G})\) such that \(x\ {}^{*}\!\mid y\);_ _(iv) for every \(y\in\mu({\cal G})\) there is \(x\in\mu({\cal F})\) such that \(x\ {}^{*}\!\mid y\)._ The nonstandard approach sheds more light on some phenomena occurring in \((\beta\mathbb{N}/{=}_{\sim},\ \widetilde{\mid}\,)\). First, prime nonstandard numbers are exactly those belonging to monads of prime ultrafilters. If \(p\in\mu({\cal P})\) for some prime \({\cal P}\), then \(v(p^{2})={\cal P}^{2}\), but if \(p,q\in\mu({\cal P})\) are distinct, then \(v(pq)\) is one of the ultrafilters divisible by \({\cal P}\) twice. A similar situation occurs at higher levels of \(L\). Let \(A\in{\cal U}\) be arbitrary, let \(B\) be the set of \(\mid\)-minimal elements of \(A\) and \(B_{n}=B\cap L_{n}\). An ultrafilter that contains some \(B_{n}\) belongs to \(\overline{L_{n}}\); these are exactly the ultrafilters studied in [9]. Note that \(A=\bigcup_{n\in\omega}B_{n}\!\uparrow\). Let \(P_{A}=\{p\in P:(\exists n\in\mathbb{N})p^{n}\in A\}\) and define the function \(h_{A}:P_{A}\to\mathbb{N}\) as follows: \[h_{A}(p)=\min\{n\in\mathbb{N}:p^{n}\in A\}. \tag{1}\] By Transfer, for all \(p\in{}^{*}\!P_{A}\) and all \(x\in{}^{*}\!\mathbb{N}\), \[p^{x}\in{}^{*}\!A\mbox{ if and only if }x\geq{}^{*}\!h_{A}(p). \tag{2}\] For \(p\in{}^{*}\!P\setminus{}^{*}\!P_{A}\), no power of \(p\) is in \({}^{*}\!A\). Thus, in order for \(v(p^{x})\,\widetilde{\mid}\,{\cal F}\) to hold, \({\cal F}\) needs to contain all \(A\in{\cal U}\) for which \({}^{*}\!h_{A}(p)\leq x\).
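As a toy illustration of (1) and (2): if \(A=\{m\in\mathbb{N}:4\mid m\ \mbox{or}\ 9\mid m\}\), then \(A=A\mathord{\uparrow}\in{\cal U}\), \(P_{A}=\{2,3\}\) and \(h_{A}(2)=h_{A}(3)=2\), while no power of any prime \(p\notin\{2,3\}\) belongs to \(A\); by Transfer, no power of any \(p\in{}^{*}\!P\setminus\{2,3\}\) belongs to \({}^{*}\!A\) either.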
In the reverse direction, for any \(A\subseteq P\) and any \(h:A\to\mathbb{N}\) we can define \[A^{h}=\{m\in\mathbb{N}:(\exists p\in A)p^{h(p)}\mid m\}\in{\cal U}. \tag{3}\] If \(\{{\cal F}_{i}:i\in I\}\) is a set of ultrafilters and \({\cal W}\) an ultrafilter on \(I\), \({\cal G}=\lim_{i\to{\cal W}}{\cal F}_{i}\) is the ultrafilter defined by: \(A\in{\cal G}\) if and only if \(\{i\in I:A\in{\cal F}_{i}\}\in{\cal W}\). The following was proven in [11], Lemma 4.1. **Proposition 1.3**: _(a) Every chain \(\langle[{\cal F}_{i}]_{\sim}:i\in I\rangle\) in \((\beta{\mathbb{N}}/{=_{\sim}},\,\widetilde{\mid}\,)\) has the least upper bound \([{\cal G}_{U}]_{\sim}\) (obtained as \({\cal G}_{U}=\lim_{i\to{\cal W}}{\cal F}_{i}\) for any \({\cal W}\) containing all final segments of \(I\)) and the greatest lower bound \([{\cal G}_{L}]_{\sim}\)._ _(b) \(\bigcup_{i\in I}({\cal F}_{i}\cap{\cal U})={\cal G}_{U}\cap{\cal U}\) and \(\bigcap_{i\in I}({\cal F}_{i}\cap{\cal U})={\cal G}_{L}\cap{\cal U}\)._ Since the \(=_{\sim}\)-class of the l.u.b. \({\cal G}_{U}\) does not depend on the choice of \({\cal W}\), in the case when \(I=\gamma\) is an ordinal we will denote by \(\lim_{\delta\to\gamma}{\cal F}_{\delta}\) the class \([{\cal G}_{U}]_{\sim}\). In any nonstandard extension the following generalization of the Fundamental theorem of arithmetic holds (a proof was provided in [10], Theorem 2.5). Let \(\langle p_{n}:n\in{\mathbb{N}}\rangle\) be the increasing enumeration of \(P\), so its nonstandard extension \(\langle p_{n}:n\in{{}^{*}\!{\mathbb{N}}}\rangle\) is the increasing enumeration of \({}^{*}\!P\). Recall that, for \(p\in P\), \(n,k\in{\mathbb{N}}\), \(p^{k}\parallel n\) means that \(k\) is the largest natural number \(l\) such that \(p^{l}\mid n\); we say that \(p^{k}\) is an exact divisor of \(n\). Likewise, for \(p\in{{}^{*}\!P}\) and \(k\in{{}^{*}\!{\mathbb{N}}}\), \(p^{k}\,{{}^{*}\!\parallel}\,x\) means that \(k\) is the largest \(l\in{{}^{*}\!{\mathbb{N}}}\) such that \(p^{l}\,{{}^{*}\!\mid}\,x\). If \(p^{k}\,{{}^{*}\!\parallel}\,x\), we also write \(k=\exp_{p}x\). **Proposition 1.4**: _(a) For every \(z\in{{}^{*}\!{\mathbb{N}}}\) and every internal sequence \(\langle h(n):n\leq z\rangle\) there is a unique \(x\in{{}^{*}\!{\mathbb{N}}}\) such that \(p_{n}^{h(n)}\,{{}^{*}\!\parallel}\,x\) for \(n\leq z\) and \(p_{n}\,{{}^{*}\!\nmid}\,x\) for \(n>z\); we denote this element by \(\prod_{n\leq z}p_{n}^{h(n)}\)._ _(b) Every \(x\in{{}^{*}\!{\mathbb{N}}}\) can be uniquely represented as \(\prod_{n\leq z}p_{n}^{h(n)}\) for some \(z\in{{}^{*}\!{\mathbb{N}}}\) and some internal sequence \(\langle h(n):n\leq z\rangle\) such that \(h(z)>0\)._ The product \({\cal F}\cdot{\cal G}\) is a minimal ultrafilter divisible by both \({\cal F}\) and \({\cal G}\) but (similarly to Proposition 1.1) it may be only one of many such ultrafilters. Thus, products \(xy\) for \(x\in\mu({\cal F})\) and \(y\in\mu({\cal G})\) do not always belong to \({\cal F}\cdot{\cal G}\). They do whenever \((x,y)\) is a _tensor pair_, meaning that it belongs to \(\mu({\cal F}\otimes{\cal G})\), where \({\cal F}\otimes{\cal G}=\{S\subseteq{\mathbb{N}}\times{\mathbb{N}}:\{x\in{\mathbb{N}}:\{y\in{\mathbb{N}}:(x,y)\in S\}\in{\cal G}\}\in{\cal F}\}\) is the tensor product of ultrafilters \({\cal F}\) and \({\cal G}\). Theorem 3.4 from [6] gives an equivalent condition that we will use (and several more can be found in Theorem 11.5.7 of [1]).
**Proposition 1.5**: \((x,y)\) _is a tensor pair if and only if, for every \(f:{\mathbb{N}}\to{\mathbb{N}}\), either \({{}^{*}\!f(y)\in{\mathbb{N}}}\) or \({{}^{*}\!f(y)>x}\)._ The following lemma is a generalization of Theorem 11.5.12 from [1]. **Lemma 1.6**: _Let \({\cal F},{\cal G}\in\beta{\mathbb{N}}\) and \({\cal H}\in\beta({\mathbb{N}}\times{\mathbb{N}})\). If \(x_{0}\in\mu({\cal F})\) and \(y_{0}\in\mu({\cal G})\) are such that \((x_{0},y_{0})\in\mu({\cal H})\), then for every \(x\in\mu({\cal F})\) there is \(y\in\mu({\cal G})\) such that \((x,y)\in\mu({\cal H})\)._ _In particular, for every \(x\in{{}^{*}\!{\mathbb{N}}}\setminus{\mathbb{N}}\) and every \({\cal G}\in\beta{\mathbb{N}}\setminus{\mathbb{N}}\), there is \(y\in\mu({\cal G})\) such that \((x,y)\) is a tensor pair; analogously there is \(y^{\prime}\in\mu({\cal G})\) such that \((y^{\prime},x)\) is a tensor pair._ **Proof.** Let \(B_{A,X}=\{y\in{}^{*}\!X:(x,y)\in{}^{*}\!A\}\) for \(A\subseteq\mathbb{N}\times\mathbb{N}\) and \(X\subseteq\mathbb{N}\). Let \(F=\{B_{A,X}:A\in\mathcal{H}\wedge X\in\mathcal{G}\}\). Since \(F\) is closed for finite intersections, to prove that \(F\) has the finite intersection property it suffices to show that \(B_{A,X}\neq\emptyset\) for all \(A\in\mathcal{H}\), \(X\in\mathcal{G}\). If we denote by \(\pi_{1}\) the first projection, \(y_{0}\) witnesses that \(x_{0}\in{}^{*}\!\pi_{1}({}^{*}\!A\cap({}^{*}\!\mathbb{N}\times{}^{*}\!X))={}^{* }\!(\pi_{1}(A\cap(\mathbb{N}\times X)))\), so \(x\in{}^{*}\!\pi_{1}({}^{*}\!A\cap({}^{*}\!\mathbb{N}\times{}^{*}\!X))\) as well. This means that there is \(y\in{}^{*}\!X\) such that \((x,y)\in{}^{*}\!A\). Now \(\mathfrak{c}^{+}\)-saturation implies that there is \(y\in\bigcap_{A\in\mathcal{H},X\in\mathcal{G}}B_{A,X}\), which means that \(y\in\mu(\mathcal{G})\) and \((x,y)\in\mu(\mathcal{H})\). \(\Box\) ## 2 Basic ultrafilters We begin the description of the divisibility order by describing powers of prime ultrafilters. If \(p\in{}^{*}\!P\) is a fixed nonstandard prime and \(\mathcal{P}=v(\!p\!)\), \(\langle p^{x}:x\in{}^{*}\!\mathbb{N}\rangle\) is a \({}^{*}\!\mid\)-increasing chain in \({}^{*}\!\mathbb{N}\), and so \(\langle v(p^{x}):x\in{}^{*}\!\mathbb{N}\rangle\) is a \({}^{\!\mid}\)-nondecreasing chain in \(\beta\mathbb{N}\). For \(n\in\mathbb{N}\) all the ultrafilters \(\mathcal{P}^{n}=v(p^{n})\) are \(=_{\sim}\)-nonequivalent. Example 4.5 from [10] shows that for \(p\in P\) the situation is simple. **Proposition 2.1**: _If \(p\in P\), all the ultrafilters \(v(p^{x})\) for \(x\in{}^{*}\!\mathbb{N}\setminus\mathbb{N}\) are \(=_{\sim}\)-equivalent._ However, for \(p\in{}^{*}\!P\setminus P\) this need not be true, as the next example shows. **Example 2.2**: _Let \(P=\{p_{n}:n\in\mathbb{N}\}\) be an enumeration of \(P\), and let \(A=\bigcup_{n\in\mathbb{N}}\{p_{n}{}^{n}\}\!\uparrow\). (In terms of (3): if \(h:P\to\mathbb{N}\) is given by \(h(p_{n})=n\), then \(A=A^{h}\).) 
Then:_ _(a) \((\forall n\in\mathbb{N})(\exists p\in P)(p^{n}\notin A\wedge p^{n+1}\in A)\) holds so, by Transfer,_ \[(\forall x\in{}^{*}\!\mathbb{N})(\exists p\in{}^{*}\!P)(p^{x}\notin{}^{*}\!A \wedge p^{x+1}\in{}^{*}\!A).\] _Since \(A\in\mathcal{U}\), this means that such nonstandard numbers \(p^{x}\) and \(p^{x+1}\) are in the monads of \(=_{\sim}\)-nonequivalent ultrafilters._ _(b) \((\forall p,q\in P)(p\neq q\Rightarrow(\exists n\in\mathbb{N})(p^{n}\in A \mathop{\underline{\vee}}q^{n}\in A))\) is also true (where \(\mathop{\underline{\vee}}\) is the exclusive disjunction), and therefore_ \[(\forall p,q\in{}^{*}\!P)(p\neq q\Rightarrow(\exists x\in{}^{*}\!\mathbb{N})( p^{x}\in{}^{*}\!A\mathop{\underline{\vee}}q^{x}\in{}^{*}\!A)).\] _Thus, any two prime nonstandard numbers have powers \(p^{x}\) and \(q^{x}\) such that \(v(p^{x})\neq_{\sim}v(q^{x})\)._ **Definition 2.3**: _Let \(\mathcal{P}\in\overline{P}\setminus P\). The relation \(\approx_{\mathcal{P}}\) on \(\mu(\mathcal{P})\times{}^{*}\!\mathbb{N}\) is defined as follows:_ \[(p,x)\approx_{\mathcal{P}}(q,y)\mbox{ if and only if }v(p^{x})=_{\sim}v(q^{y}).\] \(\approx_{\mathcal{P}}\) _is, of course, an equivalence relation; let_ \[\mathcal{E}_{\mathcal{P}}=\{[(p,x)]_{\approx_{\mathcal{P}}}:(p,x)\in\mu( \mathcal{P})\times{}^{*}\!\mathbb{N}\}\] _be the set of its equivalence classes. For any \(p\in\mu(\mathcal{P})\) and \(u\in\mathcal{E}_{\mathcal{P}}\), the "vertical sections" are sets \(u_{p}:=\{x:(p,x)\in u\}\)._ Example 2.2(b) shows that \(v(p^{x})=_{\sim}v(q^{x})\) need not hold for \(p,q\in\mu({\cal P})\), so the sets \(u_{p}\) are not independent of the choice of \(p\). **Definition 2.4**: _Families of ultrafilters of the form \({\cal P}^{u}:=\{v(p^{x}):(p,x)\in u\}\) for some \({\cal P}\in\overline{P}\) and \(u\in{\cal E}_{\cal P}\) will be called basic. Also, let \(\mu({\cal P}^{u})=\bigcup_{{\cal F}\in{\cal P}^{u}}\mu({\cal F})\)._ _By \({\cal B}\) we denote the set of all basic classes._ We will think of the \(=_{\sim}\)-classes \({\cal P}^{u}\) as powers of \({\cal P}\). **Lemma 2.5**: _Let \({\cal P}\in\overline{P}\setminus P\) and \(u\in{\cal E}_{\cal P}\)._ _(a) All elements of \(\mu({\cal P}^{u})\) are of the form \(p^{x}\) for some \((p,x)\in u\)._ _(b) For every \(p\in\mu({\cal P})\) the set \(u_{p}\) is nonempty and convex: if \(x,y\in u_{p}\) and \(x<z<y\), then \(z\in u_{p}\) as well._ _(c) Each \(u_{p}\) is either a singleton or a union of galaxies._ **Proof.** (a) If \({\cal F}\in{\cal P}^{u}\), there is \((q,y)\in u\) such that \(q^{y}\in\mu({\cal F})\). \(q^{y}\in{}^{*}\!P^{exp}\), so \(P^{exp}\in{\cal F}\). Thus, every element of \(\mu({\cal F})\) belongs to \({}^{*}\!P^{exp}\), and so it is of the form \(p^{x}\). Clearly \((p,x)\approx_{\cal P}(q,y)\), so \((p,x)\in u\). (b) The convexity is obvious, so to prove \(u_{p}\neq\emptyset\) let \((q,y)\in u\) and \({\cal F}=v(q^{y})\). Using Proposition 1.2, \(q\ ^{*}|\ q^{y}\) implies \({\cal P}\;\widetilde{\;]}\;{\cal F}\), which in turn implies that there is an element of \(\mu({\cal F})\) divisible by \(p\), which by (a) must be of the form \(p^{x}\) for some \(x\in{}^{*}\!\mathbb{N}\). Hence \((p,x)\in u\). (c) Assume that, for some \(x\in{}^{*}\!\mathbb{N}\setminus\mathbb{N}\), \(v(p^{x})\neq_{\sim}v(p^{x+1})\); let us prove that \(v(p^{y})\neq_{\sim}v(p^{y+1})\) holds for every element \(y\) of the galaxy of \(x\). By the assumption there is \(A\in{\cal U}\cap v(p^{x+1})\setminus v(p^{x})\), in other words \({}^{*}\!h_{A}(p)=x+1\) (see (2)). 
Let \(g:\mathbb{N}\setminus\{1\}\to\mathbb{N}\) be the function defined by \(g(p_{1}^{a_{1}}\ldots p_{k}^{a_{k}})=p_{1}^{a_{1}+1}\ldots p_{k}^{a_{k}+1}\) (for distinct \(p_{1},\ldots,p_{k}\in P\) and \(a_{i}>0\)). Then \(B:=g[A]\!\uparrow\in{\cal U}\). \({}^{*}\!g(p^{x+1})=p^{x+2}\in{}^{*}\!B\) and \(p^{x+1}\notin{}^{*}\!B\), so \({}^{*}\!h_{B}(p)=x+2\). This implies that \(v(p^{x+1})\neq_{\sim}v(p^{x+2})\) and inductivelly \(v(p^{x+n})\neq_{\sim}v(p^{x+n+1})\). Using \(f:\mathbb{N}\setminus\{1\}\to\mathbb{N}\) defined by \(f(p_{1}^{a_{1}}\ldots p_{k}^{a_{k}})=p_{1}^{a_{1}-1}\ldots p_{k}^{a_{k}-1}\), in a similar way we conclude \(v(p^{x-n})\neq_{\sim}v(p^{x-n+1})\) for \(n\in\mathbb{N}\). \(\Box\) **Definition 2.6**: _On \({\cal E}_{\cal P}\) we define the relation:_ \[u\prec_{\cal P}v\quad\mbox{ if and only if }\quad u\neq v\mbox{ and for some }p\in\mu({\cal P})\mbox{ and some }x,y\in{}^{*}\!\mathbb{N}\] \[\mbox{ holds }(p,x)\in u,(p,y)\in v\mbox{ and }x<y.\] _We write \(u\preceq_{\cal P}v\) if \(u\prec_{\cal P}v\) or \(u=v\)._ Lemma 2.5 and Proposition 1.2 imply that, if \(u\prec_{\cal P}v\), then in fact for all \(p\in\mu({\cal P})\) and all \(x,y\in{}^{*}\!\mathbb{N}\) such that \((p,x)\in u\) and \((p,y)\in v\), \(x<y\) holds. **Lemma 2.7**: _For every \({\cal P}\in\overline{P}\setminus P\):_ _(a) \(\prec_{\cal P}\) is a strict linear order._ _(b) Every increasing sequence in \(({\cal E}_{\cal P},\prec_{\cal P})\) has a supremum and every decreasing sequence has an infimum._ _(c) The order \(({\cal E}_{\cal P},\prec_{\cal P})\) contains a copy of \((\mathbb{R},<)\)._ **Proof.** (a) is obvious, using the remark preceding this lemma. (b) We can assume, without loss of generality, that the given sequence is well-ordered (otherwise we can first thin it out into a cofinal well-ordered subsequence). So let \(\langle u_{\xi}:\xi<\gamma\rangle\) be a \(\prec_{\mathcal{P}}\)-increasing sequence in \(\mathcal{E}_{\mathcal{P}}\). Pick some \(p\in\mu(\mathcal{P})\) and \(x_{\xi}\) for \(\xi<\gamma\) such that \((p,x_{\xi})\in u_{\xi}\). Then \(\langle v(p^{x_{\xi}}):\xi<\gamma\rangle\) is a \(\widetilde{\ \ }\)-increasing sequence of ultrafilters so by Proposition 1.3 it has a least upper bound \(\mathcal{G}\). Since each \(v(p^{x_{\xi+1}})\setminus v(p^{x_{\xi}})\) contains a set in \(\mathcal{U}\) and \(|\mathcal{U}|=\mathfrak{c}\), the sequence is of length less than \(\mathfrak{c}^{+}\). But \(\mathfrak{c}^{+}\)-saturation easily proves that the cofinality of \({}^{*}\mathbb{N}\) is at least \(\mathfrak{c}^{+}\), so there is an upper bound \(z\) for \(\{x_{\xi}:\xi<\gamma\}\). Using Proposition 1.2 we find \(y\leq z\) such that \(p^{y}\in\mu(\mathcal{G})\), so \([(p,y)]_{\approx_{\mathcal{P}}}\) must be the supremum of the given sequence. Analogously we prove that \((\mathcal{E}_{\mathcal{P}},\prec_{\mathcal{P}})\) is closed for infimums of decreasing sequences. (c) is exactly what was proven in [11], Theorem 4.6. even though the formulation there was somewhat weaker. \(\Box\) Note that both cases from Lemma 2.5(c) are possible: the first case occurs by Example 2.2(a), and above every such galaxy ("cut" into singleton classes) comes, by Lemma 2.7(b), the second case: a union of galaxies. As another corollary of Lemma 2.7, we conclude that each \(\mathcal{E}_{\mathcal{P}}\) has the greatest element \(u_{max}\); let us write \(\mathcal{P}^{max}\) for \(\mathcal{P}^{u_{max}}\). 
Also, for \(\mathcal{P}\in\overline{P}\) we will denote \(\mathcal{P}^{\omega}=\mathcal{P}^{u}\), where \(u=\sup\omega\) in \(\mathcal{E}_{\mathcal{P}}\). As we already mentioned, for \(p\in P\) holds \(p^{\omega}=p^{max}\), so the order-type of \((\mathcal{E}_{p},\prec_{p})\) is \(\omega+1\). When convenient, we will identify the first \(\omega+1\) many elements of \(\mathcal{E}_{\mathcal{P}}\) with the set of ordinals \(\omega+1\). **Example 2.8**: _Let \(A\in\mathcal{U}\), \(\mathcal{P}\in\overline{P}\), \(x\in{}^{*}\!\!\mathbb{N}\setminus\mathbb{N}\) and \(p\in\mu(\mathcal{P})\). If \((x,p)\) is a tensor pair, then by Proposition 1.5 either \({}^{*}\!h_{A}(p)\in\mathbb{N}\) or \({}^{*}\!h_{A}(p)>x\). By (2), \(p^{x}\in{}^{*}\!A\) can hold only if \({}^{*}\!h_{A}(p)=m\in\mathbb{N}\), which means that already \(p^{m}\in{}^{*}\!A\), so \(A\in\mathcal{P}^{m}\). Hence all \(p^{x}\) such that \((x,p)\) is a tensor pair belong to \(\mu(\mathcal{P}^{\omega})\)._ If \(A\subseteq\mathbb{N}\) and \(\mathcal{P}^{u}\in\mathcal{B}\), let us abuse the notation and write \(A\in\mathcal{P}^{u}\) (or \(\mathcal{P}^{u}\in\overline{A}\)) if \(A\in\mathcal{F}\) for all \(\mathcal{F}\in\mathcal{P}^{u}\). For example, \(P^{exp}\in\mathcal{P}^{u}\) for all such \(\mathcal{P}^{u}\). Note that, when \(A\in\mathcal{C}\), for \(A\in\mathcal{P}^{u}\) it suffices that \(A\in\mathcal{F}\) for some \(\mathcal{F}\in\mathcal{P}^{u}\). Hence we denote \(\mathcal{P}^{u}\cap\mathcal{U}=\{A\in\mathcal{U}:A\in\mathcal{P}^{u}\}\) and \(\mathcal{P}^{u}\cap\mathcal{C}=\{A\in\mathcal{C}:A\in\mathcal{P}^{u}\}\). Let us call the topology on \(\mathcal{B}\) generated by base sets \(\overline{A}\) for \(A\in\mathcal{U}\) the \(\mathcal{U}\)-topology. The closure of \(S\subseteq\mathcal{B}\) in this topology will be denoted \(\operatorname{cl}_{\mathcal{U}}S\). **Lemma 2.9**: _The sets \(\overline{A^{h}}\) for \(h:A\to\mathbb{N}\) such that \(A\subseteq P\), form a base of the \(\mathcal{U}\)-topology on \(\mathcal{B}\)._ **Proof.** Actually, we will prove that for every \(B\in\mathcal{U}\), there is a set of the form \(A^{h}\) such that, for \(\mathcal{Q}^{u}\in\mathcal{B}\), \(B\in\mathcal{Q}^{u}\) if and only if \(A^{h}\in\mathcal{Q}^{u}\). Simply define \(h=h_{B}\) as in (1). Clearly \(A^{h_{B}}\subseteq B\), which gives us one of the desired implications. On the other hand, for any \(\mathcal{Q}^{u}\in\overline{B}\), \(B\cap P^{exp}\in\mathcal{Q}^{u}\) so \(A^{h_{B}}=(B\cap P^{exp}){\uparrow}\in\mathcal{Q}^{u}\) as well. \(\Box\) **Example 2.10**: _To every \(A\subseteq P\) corresponds a set \(A\!\uparrow\!\in\mathcal{U}\) so that, for every \(\mathcal{Q}\in\overline{P}\), \(\mathcal{Q}\in\overline{A}\) if and only if \(\mathcal{Q}\in\overline{A\!\uparrow}\). Thus the \(\mathcal{U}\)-topology on \(\overline{P}\) coincides with the standard one (generated by all \(\overline{A}\) for \(A\subseteq P\)). 
If \(S\subseteq\overline{P}\) then \(\mathcal{Q}\in\mathrm{cl}_{\mathcal{U}}S\) implies that, for every \(k\in\mathbb{N}\), \(\mathcal{Q}^{k}\in\mathrm{cl}_{\mathcal{U}}\{\mathcal{P}^{k}:\mathcal{P}\in S\}\) as well._ _In fact, if \(\mathcal{G}=\mathcal{Q}^{k}\in\overline{P^{k}}\), then essentially all sets \(S^{\prime}\) such that \(\mathcal{G}\in\mathrm{cl}_{\mathcal{U}}S^{\prime}\) are obtained in this way: for any \(A\in\mathcal{G}\cap\mathcal{U}\), \(A\cap(P^{k})\in\mathcal{G}\) as well, so there is an ultrafilter \(\mathcal{P}^{k}\in S^{\prime}\cap\overline{P^{k}}\) such that \(A\in\mathcal{P}^{k}\), meaning that \(\mathcal{G}\in\mathrm{cl}_{\mathcal{U}}(S^{\prime}\cap\overline{P^{k}})\). Hence in such occasions we can immediately restrict ourselves to the case \(S^{\prime}=\{\mathcal{P}^{k}:\mathcal{P}\in S\}\) for some \(S\subseteq\overline{P}\), whence \(\mathcal{G}=\mathcal{Q}^{k}\in\mathrm{cl}_{\mathcal{U}}S^{\prime}\) if and only if \(\mathcal{Q}\in\mathrm{cl}_{\mathcal{U}}S\)._ ## 3 Patterns As we saw in the previous section, when generalizing from \(L\) to the whole \(\beta\mathbb{N}\) the set of basic divisors needs to be expanded by classes \(\mathcal{P}^{u}\) for \(u\in\mathcal{E}_{\mathcal{P}}\setminus\omega\). Another difference is that an ultrafilter can be divisible by a basic class infinitely many times. It will turn out, as we will now see, that there is only one possibility for such infinite multiplicity of a basic divisor. For \(x\in{}^{*}\mathbb{N}\) and \(u,v\in\mathcal{E}_{\mathcal{P}}\) such that \(u\preceq_{\mathcal{P}}v\), denote \(D^{[u,v]}_{x}:=\{(p,k):u\preceq_{\mathcal{P}}[(p,k)]_{\approx_{\mathcal{P}}} \preceq_{\mathcal{P}}v\wedge p^{k}\ {}^{*}\|\ {x}\}\). **Lemma 3.1**: _Let \(\mathcal{P}\in\overline{P}\), \(u,v\in\mathcal{E}_{\mathcal{P}}\) and \(\mathcal{F}\in\beta\mathbb{N}\). If there is \(x_{0}\in\mu(\mathcal{F})\) such that \(D^{[u,v]}_{x_{0}}\) is infinite then, for every \(z\in{}^{*}\mathbb{N}\setminus\mathbb{N}\), there is \(x\in\mu(\mathcal{F})\) such that \(D^{[u,v]}_{x}\) has a hyperfinite subset of internal cardinality \(z\), and thus \(|D^{[u,v]}_{x}|=\mathfrak{c}^{+}\)._ **Proof.** The formula \[\theta_{1}(y,C)\equiv(\forall p\in P)(p\mid y\Rightarrow p^{\exp_{p}y}\in C)\] claims that all powers of primes which are exact factors of \(y\) belong to \(C\). \[\theta_{2}(y,t)\equiv(\exists f:\{1,2,\ldots,t\}\to P)(f\mbox{ is one-to- one }\wedge(\forall i\leq t)f(i)\mid y)\] claims that \(y\) has "at least \(t\)" prime divisors. Also let \[\theta_{3}(x,y)\equiv(\forall p\in P)(p\mid y\Rightarrow\exp_{p}x=\exp_{p}y).\] Choose any \(z\in{}^{*}\mathbb{N}\setminus\mathbb{N}\) and let \(B_{A,C}=\{(x,y)\in{}^{*}\!A\times{}^{*}\!\mathbb{N}:{}^{*}\!\theta_{1}(y,{}^ {*}\!C)\wedge{}^{*}\!\theta_{2}(y,z)\wedge{}^{*}\!\theta_{3}(x,y)\}\). \(B_{A,C}\) is internal, since it is defined from \({}^{*}\!A\), \({}^{*}\!C\) and \(z\), all of which are internal. Finally, let \(F=\{B_{A,C}:A\in\mathcal{F}\cap\mathcal{C}\wedge C\in\mathcal{P}^{u}\cap \mathcal{P}^{v}\cap\mathcal{C}\}\). (Recall that \(\mathcal{C}\) is the family of all convex sets.) Let \(A\in\mathcal{F}\cap\mathcal{C}\) and \(C\in\mathcal{P}^{u}\cap\mathcal{P}^{v}\cap\mathcal{C}\). 
Consider the formula \[\psi\equiv(\forall t\in\mathbb{N})(\exists x\in A)(\exists y\in\mathbb{N})( \theta_{1}(y,C)\wedge\theta_{2}(y,t)\wedge\theta_{3}(x,y)).\] For any \(t\in\mathbb{N}\) and the formula \[\varphi\equiv(\exists x\in A)(\exists q_{1},\ldots,q_{t}\in P^{exp})((\forall i \neq j)q_{i}\neq q_{j}\wedge(\forall i)(q_{i}\ \|\ x\wedge q_{i}\in C))\] its star-counterpart \({}^{*}\varphi\) holds, as witnessed by \(x_{0}\) and any \(t\) of its exact divisors \(p_{i}^{k_{i}}\) such that \(u\preceq_{\cal P}[(p_{i},k_{i})]_{\approx_{\cal P}}\preceq_{\cal P}v\). Therefore \(\varphi\) is also true, and so is \(\psi\). \({}^{*}\!\psi\) (for \(t=z\)) establishes that \(B_{A,C}\neq\emptyset\). Since \(F\) is closed for finite intersections, it has the finite intersection property, so by \({\mathfrak{c}}^{+}\)-saturation there are \(x\in\mu({\cal F})\) and an internal one-to-one function \(f:\{1,2,\ldots,z\}\to{}^{*}\!P\) so that, for every \(i\leq z\), \((f(i),\exp_{f(i)}x)\in D_{x}^{[u,v]}\). Finally, the set \(\{f(i):i\leq z\}\) has internal cardinality \(z\), so \(|\{f(i):i\leq z\}|={\mathfrak{c}}^{+}\) and \(|D_{x}^{[u,v]}|={\mathfrak{c}}^{+}\). \(\Box\) **Theorem 3.2**: _If \(x\in{}^{*}\!{\mathbb{N}}\), \({\cal P}\in\overline{P}\) and \(u,v\in{\cal E}_{\cal P}\), then \(x\) has either finitely many or \({\mathfrak{c}}^{+}\)-many divisors from \(\bigcup_{u\preceq_{\cal P}w\preceq_{\cal P}v}\mu({\cal P}^{w})\)._ **Proof.** Assume that \(x\) has infinitely many divisors from \(\bigcup_{u\preceq_{\cal P}w\preceq_{\cal P}v}\mu({\cal P}^{w})\) and let \({\cal F}=v(x)\). Choose any \(z\in{}^{*}\!{\mathbb{N}}\setminus{\mathbb{N}}\). By Lemma 3.1 there are \(y\in\mu({\cal F})\) and internal one-to-one function \(f:\{1,2,\ldots,z\}\to D_{y}^{[u,v]}\). Lemma 1.6 provides some \(t\in{}^{*}\!{\mathbb{N}}\setminus{\mathbb{N}}\) such that \(v(y,z)=v(x,t)\). For \(A\subseteq{\mathbb{N}}\) let \[B_{A}:=\{(m,k)\in{\mathbb{N}}^{2}:(\exists f:\{1,2,\ldots,k\}\to A\cap P^{exp}) (f\mbox{ is one-to-one}\wedge(\forall i\leq k)f(i)\parallel m)\}.\] Now \((y,z)\in{}^{*}\!B_{A}\) implies \((x,t)\in{}^{*}\!B_{A}\) for all \(A\in{\cal P}^{u}\cap{\cal P}^{v}\cap{\cal C}\). Hence the family \[\{\{f:\{1,2,\ldots,t\}\to{}^{*}\!(A\cap P^{exp})|f\mbox{ is one-to-one}\wedge( \forall i\leq t)f(i)\ ^{*}\!\parallel x\}:A\in{\cal P}^{u}\cap{\cal P}^{v}\cap{\cal C}\}\] has the f.i.p. If \(f:\{1,2,\ldots,t\}\to{}^{*}\!P^{exp}\) is in its intersection, then \(f(i)\) for \(i\leq t\) are \({\mathfrak{c}}^{+}\)-many divisors of \(x\) from \(\bigcup_{u\preceq_{\cal P}w\preceq_{\cal P}v}\mu({\cal P}^{w})\). \(\Box\) The most important corollary of the result above is obtained for \(u=v\): every \(x\) has either finitely many or \({\mathfrak{c}}^{+}\)-many divisors from \(\mu({\cal P}^{u})\). This will simplify considerably the definition of pattern of \(x\). **Definition 3.3**: _Let \({\cal A}\) be the set of all functions \(\alpha:{\cal B}\to{\mathbb{N}}_{\infty}\). Elements \(\alpha\in{\cal A}\) are called patterns._ _If \({\cal P}^{u}=\{{\cal P}^{k}\}\) is a singleton (in particular, for \(k\in{\mathbb{N}}\)), we identify \({\cal P}^{u}\) with \({\cal P}^{k}\) and write \(\alpha({\cal P}^{k})\) instead of \(\alpha({\cal P}^{u})\). In particular, \(\overline{P^{k}}\) is regarded as a subset of \({\cal B}\)._ We will also write \(\alpha=\{({\cal P}^{u_{i}}_{i},n_{i}):i\in I\}\) (meaning that \(\alpha({\cal P}^{u_{i}}_{i})=n_{i}),\) omitting some (or all) of the pairs \(({\cal P}^{u},m)\) for which \(m=0\). 
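For instance (the particular choice of \(\mathcal{P},\mathcal{Q}\in\overline{P}\setminus P\) with \(\mathcal{Q}\neq\mathcal{P}\) is purely illustrative, and we anticipate Definition 3.8 below), the shorthand \[\alpha=\{(\mathcal{P},2),(\mathcal{P}^{2},1),(\mathcal{Q},1)\}\] denotes the pattern with \(\alpha(\mathcal{P})=2\), \(\alpha(\mathcal{P}^{2})=1\), \(\alpha(\mathcal{Q})=1\) and value \(0\) at every other basic class; it will turn out to be the pattern \(\alpha_{x}\) of any element of the form \(x=p_{1}p_{2}p_{3}^{2}q\) with distinct \(p_{1},p_{2},p_{3}\in\mu(\mathcal{P})\) and \(q\in\mu(\mathcal{Q})\), whose exact prime-power divisors are \(p_{1}\), \(p_{2}\), \(p_{3}^{2}\) and \(q\).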
**Definition 3.4**: _We will say that \(\alpha\in{\cal A}\) is \({\cal U}\)-closed if, whenever \({\cal Q}^{u}\in{\cal B}\) and \(n\in{\mathbb{N}}\), \(\sum_{{\cal P}^{u}\in\overline{A}}\alpha({\cal P}^{u})\geq n\) for every \(A\in{\cal Q}^{u}\cap{\cal U}\) implies \(\sum_{w\succeq_{\cal Q}u}\alpha({\cal Q}^{w})\geq n\)._ _The family of all \({\cal U}\)-closed patterns will be denoted by \({\cal A}_{cl}\)._ Intuitively, \({\cal U}\)-closedness of a pattern \(\alpha\) means that, if \(\sum_{w\succeq_{\cal Q}u}\alpha({\cal Q}^{w})\) is finite, then there is a neighborhood \(\overline{A}\) of \({\cal Q}^{u}\) in which there are no basic classes "appearing" in \(\alpha\) other than higher powers of \({\cal Q}\). Some special cases should help to illuminate the concept of \({\cal U}\)-closedness. **Example 3.5**: _Let \(\alpha\in{\cal A}_{cl}\)._ _(a) Let \(S\subseteq\overline{P}\), \({\cal Q}\in{\rm cl}_{\cal U}S\) and \(k\in\mathbb{N}\). Recall from Example 2.10 that \({\cal Q}^{k}\) is in the closure of \(\{{\cal P}^{k}:{\cal P}\in S\}\). If \(\sum_{{\cal P}\in S}\sum_{u\succeq_{{\cal P}}k}\alpha({\cal P}^{u})\geq n\), then \(\sum_{u\succeq_{{\cal Q}}k}\alpha({\cal Q}^{u})\geq n\). In particular, \(\sum_{{\cal P}\in S}\sum_{u\succeq_{{\cal P}}k}\alpha({\cal P}^{u})=\infty\) implies that \(\sum_{u\succeq_{{\cal Q}}k}\alpha({\cal Q}^{u})=\infty\)._ _(b) Let \(n\in\mathbb{N}\). If \(\langle v_{\xi}:\xi<\gamma\rangle\) is a \(\prec_{{\cal P}}\)-increasing sequence in \({\cal E}_{\cal P}\) and \(u=\sup_{\xi<\gamma}v_{\xi}\) (see Lemma 2.7(b)), then \({\cal P}^{u}\in{\rm cl}_{\cal U}(\{{\cal P}^{v_{\xi}}:\xi<\gamma\})\), so \(\sum_{w\succeq_{{\cal P}}v_{\xi}}\alpha({\cal P}^{w})\geq n\) for all \(\xi<\gamma\) implies \(\sum_{w\succeq_{{\cal P}}u}\alpha({\cal P}^{w})\geq n\). Consequently, \(\sum_{w\succeq_{{\cal P}}v_{\xi}}\alpha({\cal P}^{w})=\infty\) for \(\xi<\gamma\) implies \(\sum_{w\succeq_{{\cal P}}u}\alpha({\cal P}^{w})=\infty\)._ _(c) If \(p_{n}\in{}^{*}\!P\) (for \(n\in\mathbb{N}\)) are distinct, \({\cal P}_{n}=v(p_{n})\), \({\cal F}_{n}={\cal P}_{1}{\cal P}_{2}\ldots{\cal P}_{n}\) and \([{\cal G}]=\lim_{n\to\omega}{\cal F}_{n}\), then \({\cal G}\) is divisible by any prime ultrafilter in \({\rm cl}(\{{\cal P}_{n}:n\in\mathbb{N}\})\). Thus it may happen that there is a prime ultrafilter \({\cal P}\) such that none of the \({\cal F}_{n}\)'s is divisible by \({\cal P}\), but their limit is._ To every \(\alpha\in{\cal A}\) and every prime \({\cal P}\) we can adjoin a sequence \(\alpha\upharpoonright{\cal P}:=\langle\alpha({\cal P}^{u}):u\in{\cal E}_{\cal P}\rangle\). Clearly, fixing \(\alpha\upharpoonright{\cal P}\) for every \({\cal P}\in\overline{P}\) determines \(\alpha\) completely. **Definition 3.6**: _Let \((L,\leq)\) be a linear order and let \(a=\langle a_{m}:m\in L\rangle\), \(b=\langle b_{m}:m\in L\rangle\) be two sequences in \(\mathbb{N}_{\infty}\). We say that \(a\) dominates \(b\) if, for every \(l\in L\):_ \[\sum_{m\geq l}a_{m}\geq\sum_{m\geq l}b_{m}. \tag{4}\] _For \(\alpha,\beta\in{\cal A}\) we define: \(\alpha\preceq\beta\) if \(\beta\upharpoonright{\cal P}\) dominates \(\alpha\upharpoonright{\cal P}\) for every \({\cal P}\in\overline{P}\). If \(\alpha\preceq\beta\) and \(\beta\preceq\alpha\), we write \(\alpha\approx\beta\)._ **Lemma 3.7**: _Let \(\alpha,\beta\in{\cal A}\) be \({\cal U}\)-closed and let \({\cal P}\in\overline{P}\).
The following conditions are equivalent:_ _(i) \(\beta\upharpoonright{\cal P}\) dominates \(\alpha\upharpoonright{\cal P}\);_ _(ii) there is a one-to-one function \(g_{{\cal P}}:\bigcup_{u\in{\cal E}_{\cal P}}(\{u\}\times\alpha({\cal P}^{u}))\to\bigcup_{v\in{\cal E}_{\cal P}}(\{v\}\times\beta({\cal P}^{v}))\) such that \(v\succeq_{{\cal P}}u\) whenever \(g_{{\cal P}}(u,i)=(v,j)\);_ _(iii) there is a function \(f_{{\cal P}}:\bigcup_{u\in{\cal E}_{\cal P}}(\{u\}\times\alpha({\cal P}^{u}))\to{\cal E}_{\cal P}\) such that \(f_{{\cal P}}(u,i)\succeq_{{\cal P}}u\) for every \((u,i)\in\bigcup_{u\in{\cal E}_{\cal P}}(\{u\}\times\alpha({\cal P}^{u}))\) and \(|f_{{\cal P}}^{-1}[\{v\}]|\leq\beta({\cal P}^{v})\) for every \(v\in{\cal E}_{\cal P}\)._ **Proof.** (i)\(\Rightarrow\)(ii) Assume that \(\beta\upharpoonright{\cal P}\) dominates \(\alpha\upharpoonright{\cal P}\). We consider two cases. \(1^{\circ}\) If \(\sum_{v\in{\cal E}_{\cal P}}\beta({\cal P}^{v})=n\in\mathbb{N}\), then we enumerate \(\bigcup_{v\in{\cal E}_{\cal P}}(\{v\}\times\beta({\cal P}^{v}))=\{(v_{j},l_{j}):j<n\}\) and \(\bigcup_{u\in{\cal E}_{\cal P}}(\{u\}\times\alpha({\cal P}^{u}))=\{(u_{j},i_{j}):j<k\}\) in the descending order of first coordinates. Clearly, (4) implies that \(k\leq n\) and \(u_{j}\preceq_{{\cal P}}v_{j}\) for \(j<k\), so \(g_{{\cal P}}(u_{j},i_{j})=(v_{j},l_{j})\) defines a function as desired. \(2^{\circ}\) Let \(m\in{\cal E}_{\cal P}\) be the maximal element such that \(\sum_{v\succeq_{{\cal P}}m}\beta({\cal P}^{v})=\mathfrak{c}^{+}\). (By Theorem 3.2, this sum, if infinite, must be equal to \(\mathfrak{c}^{+}\); the maximal such \(m\) exists by Example 3.5(b).) If \(m\) has an immediate successor \(w\) in \(({\cal E}_{\cal P},\prec_{{\cal P}})\), then we can define \(g(u,i)\in\bigcup_{v\succeq_{{\cal P}}w}(\{v\}\times\beta({\cal P}^{v}))\) for \((u,i)\in\bigcup_{u\succeq_{{\cal P}}w}(\{u\}\times\alpha({\cal P}^{u}))\) in the same way as we defined \(g_{{\cal P}}\) in \(1^{\circ}\). Otherwise, fix a descending sequence \(\langle v_{\xi}:\xi<\gamma\rangle\) in \(({\cal E}_{\cal P},\prec_{{\cal P}})\) such that \(\inf_{\xi<\gamma}v_{\xi}=m\) (constructing it by recursion and using Lemma 2.7(b) at limit stages). As in \(1^{\circ}\), by recursion on \(\xi\) we define a one-to-one function \(g\) mapping each \((u,i)\in\bigcup_{u\succeq_{\mathcal{P}}v_{\xi}}(\{u\}\times\alpha(\mathcal{P}^{u}))\) to some \(g(u,i)\in\bigcup_{v\succeq_{\mathcal{P}}v_{\xi}}(\{v\}\times\beta(\mathcal{P}^{v}))\). Thus, \(|\{g(u,i):u\succ_{\mathcal{P}}m\wedge i<\alpha(\mathcal{P}^{u})\}|\leq\aleph_{0}\). Now enumerate the remaining pairs: \(\bigcup_{v\succeq_{\mathcal{P}}m}(\{v\}\times\beta(\mathcal{P}^{v}))\setminus\{g(u,i):u\succ_{\mathcal{P}}m\wedge i<\alpha(\mathcal{P}^{u})\}=\{(v_{\zeta},i_{\zeta}):\zeta<\mathfrak{c}^{+}\}\). Let \(h:\bigcup_{u\preceq_{\mathcal{P}}m}(\{u\}\times\alpha(\mathcal{P}^{u}))\to\mathfrak{c}^{+}\) be any one-to-one function. Denoting by \(\pi_{1}\) the first projection, a desired function can be defined as \[g_{\mathcal{P}}(u,j)=\left\{\begin{array}{ll}g(u,j)&\mbox{if }u\succ_{\mathcal{P}}m,\\ (v_{h(u,j)},i_{h(u,j)})&\mbox{otherwise.}\end{array}\right.\] (ii)\(\Rightarrow\)(iii) is obvious. (iii)\(\Rightarrow\)(i) Let \(f_{\mathcal{P}}\) be a function as in (iii).
For any \(u\in\mathcal{E}_{\mathcal{P}}\) the set \(\bigcup_{w\succeq_{\mathcal{P}}u}(\{w\}\times\alpha(\mathcal{P}^{w}))\) is contained in \(\bigcup_{w\succeq_{\mathcal{P}}u}f_{\mathcal{P}}^{-1}[\{w\}]\), so its cardinality \(\sum_{w\succeq_{\mathcal{P}}u}\alpha(\mathcal{P}^{w})\) is at most \(|\bigcup_{w\succeq_{\mathcal{P}}u}f_{\mathcal{P}}^{-1}[\{w\}]|\leq\sum_{w\succeq_{\mathcal{P}}u}\beta(\mathcal{P}^{w})\). \(\Box\) **Definition 3.8**: _For any \(x=\prod_{n\leq z}p_{n}^{h(n)}\in{}^{*}\mathbb{N}\) as in Proposition 1.4, define \(\alpha_{x}\in\mathcal{A}\) as follows. For each basic \(\mathcal{P}^{u}\in\mathcal{B}\), let \(\alpha_{x}(\mathcal{P}^{u}):=|D_{x}^{[u,u]}|\) (writing \(\infty\) instead of \(\mathfrak{c}^{+}\))._ **Theorem 3.9**: _For every \(x\in{}^{*}\mathbb{N}\), the pattern \(\alpha_{x}\) is \(\mathcal{U}\)-closed._ **Proof.** Assume that \(\mathcal{Q}^{u}\in\mathcal{B}\), \(n\in\mathbb{N}\) and \(\sum_{\mathcal{P}^{u}\in\overline{A}}\alpha_{x}(\mathcal{P}^{u})\geq n\) for every \(A\in\mathcal{Q}^{u}\cap\mathcal{U}\). Denote \(B_{A}=\{(q_{1},q_{2},\ldots,q_{n})\in{}^{*}\!(A\cap P^{exp})^{n}:(\forall i\neq j)q_{i}\neq q_{j}\wedge(\forall i)q_{i}\ {}^{*}\|\ x\}\). We prove that the family \(F:=\{B_{A}:A\in\mathcal{Q}^{u}\cap\mathcal{U}\}\) has the finite intersection property. This family is closed for finite intersections, so we need only show that each \(B_{A}\) is nonempty. Hence let \(A\in\mathcal{Q}^{u}\cap\mathcal{U}\); \(\sum_{\mathcal{P}^{u}\in\overline{A}}\alpha_{x}(\mathcal{P}^{u})\geq n\) implies that there are some \(\mathcal{P}^{v_{i}}_{i}\in\overline{A}\) and \(z_{i}\in\mu(\mathcal{P}^{v_{i}}_{i})\) for \(1\leq i\leq n\) such that \(z_{i}\ {}^{*}\|\ x\). Thus \(B_{A}\neq\emptyset\) and \(F\) has the finite intersection property, so by \(\mathfrak{c}^{+}\)-saturation we get distinct \(q_{1},q_{2},\ldots,q_{n}\in\bigcup_{w\succeq_{\mathcal{Q}}u}\mu(\mathcal{Q}^{w})\) such that \(q_{i}\ {}^{*}\|\ x\), which means that \(\sum_{w\succeq_{\mathcal{Q}}u}\alpha_{x}(\mathcal{Q}^{w})\geq n\). Thus \(\alpha_{x}\) is \(\mathcal{U}\)-closed. \(\Box\) We will show in Corollary 4.9 that (a sort of) a converse of Theorem 3.9 also holds: every \(\mathcal{U}\)-closed pattern is \(\approx\)-equivalent to one of the form \(\alpha_{x}\) for some \(x\). **Lemma 3.10**: _If \(x\ {}^{*}\!\mid y\), then \(\alpha_{x}\preceq\alpha_{y}\)._ **Proof.** Let \(\mathcal{P}\in\overline{P}\). According to Definition 3.8, to every \((u,i)\in\bigcup_{u\in\mathcal{E}_{\mathcal{P}}}(\{u\}\times\alpha_{x}(\mathcal{P}^{u}))\) corresponds some \((p_{u,i},k_{u,i})\in u\), such that \(p_{u,i}^{k_{u,i}}\ {}^{*}\|\ x\) and \(p_{u,i}\)'s are all distinct. Let \(f_{\mathcal{P}}(u,i):=[(p_{u,i},\exp_{p_{u,i}}y)]_{\approx_{\mathcal{P}}}\); clearly \(f_{\mathcal{P}}(u,i)\succeq_{\mathcal{P}}u\) and \(|f_{\mathcal{P}}^{-1}[\{v\}]|\leq\alpha_{y}(\mathcal{P}^{v})\) for every \(v\in\mathcal{E}_{\mathcal{P}}\). By Lemma 3.7, the function \(f_{\mathcal{P}}\) witnesses that \(\alpha_{x}\upharpoonright\mathcal{P}\) is dominated by \(\alpha_{y}\upharpoonright\mathcal{P}\). \(\Box\) **Theorem 3.11**: _For any \(\mathcal{F}\in\beta\mathbb{N}\) and any two \(x,y\in\mu(\mathcal{F})\) holds \(\alpha_{x}=\alpha_{y}\)._ **Proof.** It suffices to prove that, for every \({\cal P}^{u}\in{\cal B}\), \(\alpha_{y}({\cal P}^{u})\geq n\) implies \(\alpha_{x}({\cal P}^{u})\geq n\).
For \(A\in{\cal P}^{u}\cap{\cal C}\) let \[X_{A}:=\{m\in{\mathbb{N}}:(\exists q_{1},\ldots,q_{n}\in A\cap P^{exp})((\forall i \neq j)q_{i}\neq q_{j}\wedge(\forall i)q_{i}\parallel m)\}.\] Then \(y\in{}^{*}X_{A}\) implies \(x\in{}^{*}X_{A}\). Thus, the family \[F:=\{\{(q_{1},q_{2},\ldots,q_{n})\in{}^{*}\!(A\!\cap\!P^{exp})^{n}:(\forall i \neq j)q_{i}\neq q_{j}\wedge(\forall i)q_{i}\ ^{*}\!\parallel x\}:A\in{\cal P}^{u}\cap{\cal C}\}\] has the f.i.p., so \(x\) has distinct exact divisors \(q_{1},q_{2},\ldots,q_{n}\in\mu({\cal P}^{u})\). \(\Box\) Theorem 3.11 allows the following definition. **Definition 3.12**: _For \({\cal F}\in\beta{\mathbb{N}}\) and any \(x\in\mu({\cal F})\) define \(\alpha_{\cal F}=\alpha_{x}\)._ We can now restate what we proved in Lemma 3.10 as follows. **Corollary 3.13**: _(a) If \({\cal F}\,\widetilde{\,\mid}\,{\cal G}\), then \(\alpha_{\cal F}\preceq\alpha_{\cal G}\)._ _(b) If \({\cal F}=_{\sim}{\cal G}\), then \(\alpha_{\cal F}\approx\alpha_{\cal G}\)._ The converse implications are false. To see this, recall that by Proposition 1.1 there is \({\cal P}\in\overline{P}\) for which there are \(\,\widetilde{\,\mid}\,\)-incomparable ultrafilters \({\cal F},{\cal G}\in\overline{L_{2}}\) such that \(\alpha_{\cal F}=\alpha_{\cal G}=\{({\cal P},2)\}\). The point is that a pattern \(\alpha_{\cal F}\) determines all the basic ultrafilters that divide \({\cal F}\) and their multiplicity, but it does not determine its \(=_{\sim}\)-equivalence class. **Example 3.14**: _(a) It was already mentioned that, for \(q\in P\), \(q^{max}=q^{\omega}\). Note also that, for any \({\cal F}\in\beta{\mathbb{N}}\), \(\alpha_{\cal F}(q^{n})\leq 1\) for \(n\in\omega+1\), and equality holds for at most one \(n\) (because \({\cal F}=q^{m}\cdot q^{n}\cdot{\cal G}\) would actually mean that \(q^{m+n}\,\widetilde{\,\mid}\,{\cal F}\)). In particular, \(\alpha_{\cal F}(q^{\omega})=1\) is equivalent to \(q^{n}\,\widetilde{\,\mid}\,{\cal F}\) for all \(n\in{\mathbb{N}}\)._ _(b) Recall that \(MAX\) is the \(\,\widetilde{\,\mid}\,\)-greatest class. By [10], Lemma 4.6, \({\cal F}\in MAX\) if and only if \(m\,\widetilde{\,\mid}\,{\cal F}\) for all \(m\in{\mathbb{N}}\). Let us draw this conclusion from Theorem 3.9. Take any \({\cal P}\in\overline{P}\setminus P\) and \(A\in{\cal U}\cap{\cal P}^{max}\). Since \(P^{exp}\in{\cal P}^{max}\), \(A\cap P^{exp}\) is infinite. \(\{p\in P:(\exists n\in{\mathbb{N}})p^{n}\in A\}\) is also infinite because \({\cal P}\notin P\). Since \(\alpha_{\cal F}(p^{\omega})=1\) whenever \(p^{n}\,\widetilde{\,\mid}\,{\cal F}\) for all \(n\in{\mathbb{N}}\), this means that \(\sum_{p^{\omega}\in\overline{A}}\alpha_{\cal F}(p^{\omega})=\infty\). By \({\cal U}\)-closedness we have \(\alpha_{\cal F}({\cal P}^{max})=\infty\)._ _Thus, \(\alpha_{MAX}\) is in the \(\approx\)-equivalence class of \(\beta:=\{(p^{\omega},1):p\in P\}\), and any pattern \(\alpha_{\cal F}\) is in this class if and only if it contains \(\beta\)._ _(c) \(NMAX\) is the \(\,\widetilde{\,\mid}\,\)-greatest class among \({\mathbb{N}}\)-free ultrafilters (those not divisible by any \(n\in{\mathbb{N}}\)), see Section 5 of [11]. Thus, \(\alpha_{NMAX}\) is the \(\approx\)-equivalence class of \(\{({\cal P}^{max},\infty):{\cal P}\in\overline{P}\setminus P\}\)._ ## 4 \(F_{\alpha}\)-sets In this section we describe those sets from \({\cal F}\cap{\cal U}\) that are determined by basic divisors of \({\cal F}\). 
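For instance, if \(\mathcal{P}\in\overline{P}\setminus P\) and \(\alpha_{\mathcal{F}}(\mathcal{P})\geq 1\) (the choice of a single nonprincipal \(\mathcal{P}\) is only illustrative), then for every \(A\in\mathcal{P}\upharpoonright P\) the set \(A\!\uparrow\) belongs to \(\mathcal{F}\): any \(x\in\mu(\mathcal{F})\) has an exact divisor \(p\in\mu(\mathcal{P})\), so \[p\in{}^{*}\!A\ \wedge\ p\ {}^{*}\!\mid x\quad\Rightarrow\quad x\in{}^{*}\!(A\!\uparrow).\] Sets of this kind, and finite products of them, are the building blocks of the filters \(F_{\alpha}\) introduced below.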
Recall that, by Lemma 2.9, the sets of the form \(A^{h}\) constitute a basis for the \({\cal U}\)-topology, so it will suffice to consider only such sets in the following definition. **Definition 4.1**: _Let \(\alpha\in{\cal A}\), \({\cal P}\in\overline{P}\) and \(A\in{\cal P}\upharpoonright P\)._ _For \(u\in{\cal E}_{\cal P}\), an \((A,{\cal P}^{u})\)-set is every set of the form \(A^{h}=\bigcup_{n\in{\mathbb{N}}}{A_{n}}^{n}\uparrow\) (where \(h:A\to{\mathbb{N}}\) and \(A_{n}=h^{-1}[\{n\}]\)) such that for some/every \((p,x)\in u\) holds \(p^{x}\in{}^{*}\!A^{h}\)._ _An \((A,{\cal P}^{w})\)-set for some \(w\succeq_{\cal P}u\) which is not an \((A,{\cal P}^{v})\)-set for any \(v\prec_{\cal P}u\) will be called an \((A,{\cal P}^{\succeq u})\)-set._ _An \((\alpha,A,{\cal P})\)-set is any finite product of \((A,{\cal P}^{u})\)-sets for various \(u\in{\cal E}_{\cal P}\), such that for any fixed \(u\), if \(\sum_{w\succeq_{\cal P}u}\alpha({\cal P}^{w})=n\in{\mathbb{N}}\), then there are at most \(n\)\((A,{\cal P}^{\succeq u})\)-sets in the product._ _An \(\alpha\)-set is any finite product \(C_{1}C_{2}\ldots C_{k}\) of \((\alpha,A_{i},{\cal P}_{i})\)-sets \(C_{i}\), with \(A_{i}\in{\cal P}_{i}\), \({\cal P}_{i}\neq{\cal P}_{j}\) and \(A_{i}\cap A_{j}=\emptyset\) for \(i\neq j\)._ _Finally, \(F_{\alpha}\) is the intersection of \({\cal U}\) with the filter generated by \(\alpha\)-sets._ **Example 4.2**: _(a) First let \(\alpha=\{({\cal P}^{2},1)\}\). If \(A\in{\cal P}\upharpoonright P\) and \(A^{h}=\bigcup_{n\in{\mathbb{N}}}{A_{n}}^{n}\uparrow\) is an \((A,{\cal P}^{\geq 2})\)-set, then in order for \(p^{2}\in{}^{*}\!A^{h}\) and \(p\notin{}^{*}\!A^{h}\) to hold (for any \(p\in\mu({\cal P})\)), it is necessary that \(A_{2}\in{\cal P}\). Every such set \(A^{h}\) is contained in \({A_{2}}^{2}\!\uparrow\), so \(F_{\alpha}\) is generated by sets of the form \(A^{2}\!\uparrow\) for \(A\in{\cal P}\)._ _(b) In general, let us call a pattern \(\alpha\) finite if \(\alpha({\cal P}^{u})\) is finite for all \({\cal P}^{u}\in{\cal B}\) and nonzero for only finitely many \({\cal P}^{u}\). Finite patterns are exactly what was considered in [9]. 
For example, if \(\alpha=\{({\cal P},2),({\cal P}^{2},1),({\cal Q},1)\}\), \(F_{\alpha}\) is generated by the family of sets of the form \((A\!\uparrow\cdot\!A\!\uparrow\cdot\!A^{2}\!\uparrow)\cap B\!\uparrow=(A^{(2)}A^{2}B)\!\uparrow\), where_ \[A^{(2)}A^{2}B=\{a_{1}a_{2}a_{3}^{2}b:a_{1},a_{2},a_{3}\in A\wedge b\in B\}\] _for some disjoint \(A\in{\cal P}\upharpoonright P\), \(B\in{\cal Q}\upharpoonright P\)._ _(c) If \(\alpha=\{({\cal P},\infty)\}\), \(F_{\alpha}\) is generated by the sets \(A^{(n)}\!\uparrow\) for all \(A\in{\cal P}\upharpoonright P\) and all \(n\in{\mathbb{N}}\)._ _(d) If \(\alpha=\{({\cal P}_{i},1):i\in I\}\) and \({\cal P}_{i}\neq{\cal P}_{j}\) for \(i\neq j\), \(F_{\alpha}\) is generated by the sets \((A_{i_{1}}A_{i_{2}}\ldots A_{i_{n}})\!\uparrow\), where \(\{i_{1},i_{2},\ldots,i_{n}\}\subseteq I\) and \(A_{i_{1}}\in{\cal P}_{i_{1}}\upharpoonright P,\ldots,A_{i_{n}}\in{\cal P}_{i_{n}}\upharpoonright P\) are disjoint._ _(e) If \(\alpha=\{({\cal P}^{n},1):n\in{\mathbb{N}}\}\) for some \({\cal P}\in\overline{P}\setminus P\), \(F_{\alpha}\) is generated by the sets \((A^{k_{1}}A^{k_{2}}\ldots A^{k_{m}})\!\uparrow\) for some \(A\in{\cal P}\upharpoonright P\), some \(m\in{\mathbb{N}}\) and \(k_{1},k_{2},\ldots,k_{m}\in{\mathbb{N}}\)._ _(f) If \(\alpha=\{({\cal P}^{u},1)\}\) for some \(u\in{\cal E}_{\cal P}\setminus\omega\), \(F_{\alpha}\) is generated by \(A^{h}\) for \((A,{\cal P}^{u})\)-sets \(A^{h}=\bigcup_{n\in{\mathbb{N}}}{A_{n}}^{n}\!\uparrow\) and \(A=\bigcup_{n\in{\mathbb{N}}}{A_{n}}\in{\cal P}\), such that \(p^{x}\in{}^{*}\!A^{h}\) whenever \((p,x)\in u\). Note that, if \(u=\sup_{\beta<\gamma}u_{\beta}\) for some \(\prec_{\cal P}\)-increasing sequence \(\langle u_{\beta}:\beta<\gamma\rangle\) in \({\cal E}_{\cal P}\) then, using Proposition 1.3, we conclude that every \(B\in{\cal P}^{u}\cap{\cal U}\) is also in \({\cal P}^{u_{\beta}}\cap{\cal U}\) for some \(\beta<\gamma\). Hence every \((A,{\cal P}^{u})\)-set is also an \((A,{\cal P}^{u_{\beta}})\)-set for some \(\beta<\gamma\)._ _(g) Finally, if \(\alpha=\{({\cal P}^{u},2)\}\) for some \(u\in{\cal E}_{\cal P}\setminus\omega\), \(F_{\alpha}\) is generated by sets of the form \(C_{1}C_{2}\) for some \((A,{\cal P}^{u})\)-sets \(C_{1}\) and \(C_{2}\). Note that, by Transfer, for any \((p,x),(q,y)\in u\), \(p^{x}\in{}^{*}\!C_{1}\) and \(q^{y}\in{}^{*}\!C_{2}\) imply \(p^{x}q^{y}\in{}^{*}\!(C_{1}C_{2})={}^{*}\!C_{1}{}^{*}\!C_{2}\). (\(p^{x}q^{y}\) is a typical element of any \(\mu({\cal F})\) such that \(\alpha_{\cal F}=\{({\cal P}^{u},2)\}\).)_ **Theorem 4.3**: _For every \({\cal F}\in\beta{\mathbb{N}}\), \(F_{\alpha_{\cal F}}\subseteq{\cal F}\cap{\cal U}\)._ **Proof.** Take any \(x\in\mu({\cal F})\); we show that all \(\alpha_{x}\)-sets are in \({\cal F}\cap{\cal U}\). Let \(D=C_{1}C_{2}\ldots C_{k}\) be an \(\alpha_{x}\)-set; since clearly \(D\in{\cal U}\), we need to prove that \(D\in{\cal F}\). Each \(C_{i}\) is an \((\alpha_{x},A_{i},{\cal P}_{i})\)-set for some \({\cal P}_{i}\) and some \(A_{i}\in{\cal P}_{i}\upharpoonright P\), and it is a finite product in which there are at most \(\sum_{w\succeq_{{\cal P}_{i}}u}\alpha_{x}({\cal P}_{i}^{w})\) \((A_{i},{\cal P}_{i}^{\succeq u})\)-sets for any \(u\in{\cal E}_{{\cal P}_{i}}\). So let \(D=\prod_{j=1}^{m}B_{j}\) be the representation of \(D\) as the product of such sets.
Since all \(B_{i}\) are \(\mid\)-upward closed, \((\forall n\in{\mathbb{N}})(n\in D\Leftrightarrow(\exists b_{1}\in B_{1})\ldots(\exists b_{m}\in B_{m})b_{1}\ldots b_{m}\mid n)\) so, by Transfer, an element \(n\in{}^{*}\!{\mathbb{N}}\) belongs to \({}^{*}\!D\) if and only if \[(\exists b_{1}\in{}^{*}\!B_{1})\ldots(\exists b_{m}\in{}^{*}\!B_{m})b_{1}\ldots b_{m}\ ^{*}\mid n.\] By the definitions of \(\alpha_{x}\) and \(F_{\alpha_{x}}\), this formula is true for \(n=x\), implying that \(x\in{}^{*}\!D\) and so \(D\in{\cal F}\). \(\Box\) As we already noted before, \({\cal F}\cap{\cal U}\) is not generated by \(F_{\alpha_{\cal F}}\): by Proposition 1.1, there are \({\cal P}\in\overline{P}\) such that for \(\alpha=\{({\cal P},2)\}\) there are \(=_{\sim}\)-nonequivalent ultrafilters containing \(F_{\alpha}\). However, the next result shows that \(F_{\alpha_{\cal F}}\) determines the list of basic divisors of \({\cal F}\) up to \(\approx\)-equivalence. **Theorem 4.4**: _For patterns \(\alpha,\beta\in{\cal A}_{cl}\), the following conditions are equivalent:_ _(i) \(\alpha\preceq\beta\);_ _(ii) \(F_{\alpha}\subseteq F_{\beta}\)._ **Proof.** (i)\(\Rightarrow\)(ii) Assume \(\alpha\preceq\beta\) and let us prove that every \((\alpha,A,{\cal P})\)-set \(C\) is also a \((\beta,A,{\cal P})\)-set. Every such \(C\) is a finite product of some \((A,{\cal P}^{u_{j}})\)-sets of the form \(A^{h_{j}}\) for some \(h_{j}:A\to{\mathbb{N}}\). By Lemma 3.7, \(\alpha\preceq\beta\) implies that there is a function \(f_{\cal P}\) adjoining to each such \(u_{j}\) some \(v_{j}\succeq_{\cal P}u_{j}\), so that \(A^{h_{j}}\) is also an \((A,{\cal P}^{v_{j}})\)-set. For every \(v\in{\cal E}_{\cal P}\) there are at most \(\sum_{w\succeq_{\cal P}v}\beta({\cal P}^{w})\) \((A,{\cal P}^{\succeq v})\)-sets in the factorization of \(C\), and hence \(C\) is a \((\beta,A,{\cal P})\)-set. (ii)\(\Rightarrow\)(i) Assume the opposite, that \(\sum_{w\succeq_{\cal P}u}\alpha({\cal P}^{w})>s=\sum_{w\succeq_{\cal P}u}\beta({\cal P}^{w})\) for some \({\cal P}\in\overline{P}\) and some \(u\in{\cal E}_{\cal P}\). By \({\cal U}\)-closedness of \(\beta\) and using Lemma 2.9, we find \(A\subseteq P\) and \(h:A\to{\mathbb{N}}\) such that \(A^{h}\in{\cal P}^{u}\) and \(\sum_{{\cal Q}^{v}\in\overline{A^{h}}}\beta({\cal Q}^{v})=s\). Consider the \(\alpha\)-set \(C=(A^{h})^{(s+1)}\). By (ii) there are \(\beta\)-sets \(D_{1},\ldots,D_{m}\) such that \(D_{1}\cap\ldots\cap D_{m}\subseteq C\). Each \(D_{i}\) is a product containing \(s_{i}\leq s\) \((B,{\cal P}^{\succeq u})\)-sets, and without loss of generality we may assume that \(B\subseteq A\). Let us fix distinct \(p_{1},p_{2},\ldots,p_{s}\in\mu({\cal P})\) and \(x_{1},x_{2},\ldots,x_{s}\) such that \((p_{i},x_{i})\in u\). \({}^{*}\!D_{i}\) contains an element \(d_{i}\) of the form \(p_{1}^{x_{1}}p_{2}^{x_{2}}\ldots p_{s_{i}}^{x_{s_{i}}}y_{i}\), such that, for every prime factor \(q\) of \(y_{i}\), we have \(q\neq p_{j}\) (for \(j=1,2,\ldots,s\)) and \(q^{\exp_{q}y_{i}}\notin{}^{*}\!A^{h}\). If we denote \(d^{\prime}={\rm l.c.m.}\{d_{1},d_{2},\ldots,d_{m}\}\), then \(d^{\prime}=p_{1}^{x_{1}}p_{2}^{x_{2}}\ldots p_{s^{\prime}}^{x_{s^{\prime}}}y^{\prime}\), and again \(s^{\prime}\leq s\) and for every prime factor \(q\) of \(y^{\prime}\) we have \(q\neq p_{j}\) (for \(j=1,2,\ldots,s\)) and \(q^{\exp_{q}y^{\prime}}\notin{}^{*}\!A^{h}\). But \(d^{\prime}\) belongs to \(D_{1}\cap\ldots\cap D_{m}\), and it cannot belong to \(C\), which contains only elements with more than \(s\) factors from \({}^{*}\!A^{h}\), a contradiction.
\(\Box\) The following example, a sequel to Example 3.5, shows that the condition of \({\cal U}\)-closedness in the theorem above is necessary. **Example 4.5**: _(a) Let \({\cal P}\in\overline{P}\setminus P\). Consider the patterns \(\alpha=\{({\cal Q},1):{\cal Q}\in\overline{P}\setminus P\}\) and \(\beta=\alpha\setminus\{({\cal P},1)\}\cup\{({\cal P},2)\}\). \(\alpha\) and \(\beta\) are clearly not \(\approx\)-equivalent._ _On the other hand, for any \((\beta,A,{\cal P})\)-set \(C\) there are \({\cal Q}\in\overline{A}\setminus\{{\cal P}\}\) and disjoint sets \(A_{1}\in{\cal P}\), \(A_{2}\in{\cal Q}\) such that \(A_{1}\cup A_{2}=A\), so that \(A_{1}A_{2}\subseteq A^{(2)}\). Thus \(C\) is also an \(\alpha\)-set, so \(F_{\alpha}\) and \(F_{\beta}\) generate the same filter. However, note that \(\alpha\) is not \({\cal U}\)-closed because in every \({\cal U}\)-neighborhood of \({\cal P}\) there are (infinitely many) primes \({\cal Q}\neq{\cal P}\)._ _(b) Likewise, \(\alpha=\{({\cal P}^{n},1):n\in\mathbb{N}\}\) and \(\beta=\{({\cal P}^{\omega},\infty)\}\) are not \(\approx\)-equivalent. Still, \(F_{\alpha}\) and \(F_{\beta}\) generate the same filter: since there are no \((A,{\cal P}^{\omega})\)-sets which are not \((A,{\cal P}^{n})\)-sets for some \(n\in\omega\), all the \(\beta\)-sets are also \(\alpha\)-sets. Again, the explanation is that \(\alpha\) is not \({\cal U}\)-closed._ **Lemma 4.6**: _If \({\cal F},{\cal G}\in\beta\mathbb{N}\setminus\mathbb{N}\) and \({\cal P}\in\overline{P}\setminus P\) are such that \({\cal G}\in{\cal P}^{u}\) for some \(u\in{\cal E}_{\cal P}\), then \(\alpha_{{\cal F}\cdot{\cal G}}({\cal P}^{u})=\alpha_{{\cal F}}({\cal P}^{u})+1\) and \(\alpha_{{\cal F}\cdot{\cal G}}({\cal Q}^{v})=\alpha_{{\cal F}}({\cal Q}^{v})\) for all \({\cal Q}^{v}\in{\cal B}\setminus\{{\cal P}^{u}\}\)._ **Proof.** Let \((x,p)\in\mu({\cal F})\times\mu({\cal P})\) be a tensor pair. Then, for any \(p^{a}\in\mu({\cal G})\), by Proposition 1.5\((x,p^{a})\) is also a tensor pair, so \(x<p^{a}\) and \(xp^{a}\in\mu({\cal F}\cdot{\cal G})\). By definition, \(\alpha_{{\cal F}\cdot{\cal G}}({\cal P}^{u})=\alpha_{xp^{a}}({\cal P}^{u})= \alpha_{x}({\cal P}^{u})+1=\alpha_{{\cal F}}({\cal P}^{u})+1\) and \(\alpha_{{\cal F}\cdot{\cal G}}({\cal Q}^{v})=\alpha_{{\cal F}}({\cal Q}^{v})\) for all \({\cal Q}^{v}\neq{\cal P}^{u}\). \(\Box\) **Theorem 4.7**: _Let \(\beta\in{\cal A}_{cl}\) and \({\cal F}\in\beta\mathbb{N}\)._ _(a) If \(\alpha_{{\cal F}}\preceq\beta\), then there is \({\cal G}\in\beta\mathbb{N}\) such that \(\alpha_{{\cal G}}\approx\beta\) and \({\cal F}\,\widetilde{\,\mid\,}{\cal G}\)._ _(b) If \(\beta\preceq\alpha_{{\cal F}}\), then there is \({\cal G}\in\beta\mathbb{N}\) such that \(\alpha_{{\cal G}}\approx\beta\) and \({\cal G}\,\widetilde{\,\mid\,}{\cal F}\)._ **Proof.** (a) We obtain the desired ultrafilter as a limit of a \(\,\widetilde{\,\mid\,}\)-increasing sequence \(\langle{\cal G}_{\gamma}:\gamma<\epsilon\rangle\). By recursion we construct this sequence, along with the sequence of respective patterns \(\alpha_{\delta}=\alpha_{{\cal G}_{\delta}}\). We start with \({\cal G}_{0}={\cal F}\) and \(\alpha_{0}=\alpha_{{\cal F}}\). Assume that \(\alpha_{\delta}\) and \({\cal G}_{\delta}\) have been constructed for \(\delta<\gamma\) so that \(\alpha_{\delta}\preceq\beta\) and \({\cal F}\,\widetilde{\,\mid\,}{\cal G}_{\delta}\). First we consider the successor case \(\gamma=\delta+1\). If \(\alpha_{\delta}\approx\beta\), put \(\epsilon=\gamma\) and we are done. 
Otherwise, let \({\cal P}\) be such that \(\alpha_{\delta}\upharpoonright{\cal P}\) does not dominate \(\beta\upharpoonright{\cal P}\). We consider two cases. \(1^{\circ}\) If there is \(u\in{\cal E}_{\cal P}\) such that \(\sum_{w\succeq_{{\cal P}}v}\alpha_{\delta}({\cal P}^{w})<\sum_{w\succeq_{{\cal P}}v}\beta({\cal P}^{w})\) for all \(v\preceq_{{\cal P}}u\) such that \(\sum_{w\succeq_{{\cal P}}v}\beta({\cal P}^{w})<\infty\), and in particular \(\sum_{w\succeq_{{\cal P}}u}\alpha_{\delta}({\cal P}^{w})<\infty\), we put \({\cal G}_{\delta+1}:={\cal G}_{\delta}\cdot{\cal H}\) for some \({\cal H}\in{\cal P}^{u}\). By Lemma 4.6 we have \(\alpha_{\delta}\prec\alpha_{\delta+1}\preceq\beta\). \(2^{\circ}\) Otherwise, there are \(u,v\in{\cal E}_{\cal P}\) such that \(v\prec_{{\cal P}}u\), \(\sum_{w\succeq_{{\cal P}}v}\alpha_{\delta}({\cal P}^{w})=\sum_{w\succeq_{{\cal P}}v}\beta({\cal P}^{w})<\infty\) and \(\sum_{w\succeq_{{\cal P}}u}\alpha_{\delta}({\cal P}^{w})<\sum_{w\succeq_{{\cal P}}u}\beta({\cal P}^{w})\). Define \(v_{0}:=\sup\{v\prec_{{\cal P}}u:\sum_{w\succeq_{{\cal P}}v}\alpha_{\delta}({\cal P}^{w})=\sum_{w\succeq_{{\cal P}}v}\beta({\cal P}^{w})\}\); by \({\cal U}\)-closedness we have \(\sum_{w\succeq_{{\cal P}}v_{0}}\alpha_{\delta}({\cal P}^{w})=\sum_{w\succeq_{{\cal P}}v_{0}}\beta({\cal P}^{w})\) so \(v_{0}\prec_{{\cal P}}u\). Thus we obtain \(\sum_{w\succeq_{{\cal P}}v}\alpha_{\delta}({\cal P}^{w})<\sum_{w\succeq_{{\cal P}}v}\beta({\cal P}^{w})\) whenever \(v_{0}\prec_{{\cal P}}v\preceq_{{\cal P}}u\). Now take \(x\in\mu({\cal G}_{\delta})\) and \(q\in\mu({\cal P}^{v_{0}})\) such that \(q\,{}^{*}\!\mid x\), choose \(q^{\prime}\in\mu({\cal P}^{u})\) such that \(q\,{}^{*}\!\mid q^{\prime}\) and let \(y=\frac{q^{\prime}}{q}x\). Then \({\cal G}_{\delta+1}:=v(y)\) is such that \({\cal G}_{\delta}\,\widetilde{\,\mid\,}{\cal G}_{\delta+1}\), \(\alpha_{\delta+1}({\cal P}^{u})=\alpha_{\delta}({\cal P}^{u})+1\) and \(\alpha_{\delta+1}({\cal P}^{v_{0}})=\alpha_{\delta}({\cal P}^{v_{0}})-1\), so again we have \(\alpha_{\delta}\prec\alpha_{\delta+1}\preceq\beta\). Finally, if \(\gamma\) is a limit ordinal, let \([{\cal G}_{\gamma}]:=\lim_{\delta\to\gamma}{\cal G}_{\delta}\) and \(\alpha_{\gamma}=\alpha_{{\cal G}_{\gamma}}\). Let us show that (the \(\approx\)-equivalence class of) \(\alpha_{\gamma}\) is the supremum of the sequence \(\langle\alpha_{\delta}:\delta<\gamma\rangle\) in \(({\cal A}_{cl},\prec)\). Assume the opposite: that there is \(\alpha^{\prime}\in{\cal A}_{cl}\) such that \(\alpha_{\delta}\preceq\alpha^{\prime}\) for all \(\delta<\gamma\), but \(\alpha_{\gamma}\not\preceq\alpha^{\prime}\). By Theorem 4.4, there is some \(A\in F_{\alpha_{\gamma}}\setminus F_{\alpha^{\prime}}\). By Proposition 1.3 there is \(\delta<\gamma\) such that \(A\in F_{\alpha_{\delta}}\), which implies \(A\in F_{\alpha^{\prime}}\), a contradiction. In particular, we get \(\alpha_{\gamma}\preceq\beta\). This concludes the construction. (b) is proved in a similar way, with the successor case requiring only the analogue of the construction from \(2^{\circ}\). \(\Box\) It may seem that, if \(\mathcal{F}\,\widetilde{\mid}\,\mathcal{H}\) and \(\beta\in\mathcal{A}_{cl}\) is such that \(\alpha_{\mathcal{F}}\preceq\beta\preceq\alpha_{\mathcal{H}}\), then there is \(\mathcal{G}\in\beta\mathbb{N}\) such that \(\alpha_{\mathcal{G}}\approx\beta\), \(\mathcal{F}\,\widetilde{\mid}\,\mathcal{G}\) and \(\mathcal{G}\,\widetilde{\mid}\,\mathcal{H}\). However, this is false, as the next example shows.
**Example 4.8**: _Let \(\mathcal{P}\in\overline{P}\setminus P\) be arbitrary, and let \(p,q\in\mu(\mathcal{P})\) be such that \((p,q)\) is a tensor pair. Denote \(\mathcal{F}=\mathcal{P}^{2}\cdot\mathcal{P}\) and \(\mathcal{H}=\mathcal{P}^{3}\cdot\mathcal{P}^{5}\). Since \((p^{2},q)\) and \((p^{3},q^{5})\) are also tensor pairs (by Proposition 1.5), it follows that \(v(p^{2}q)=\mathcal{F}\) and \(v(p^{3}q^{5})=\mathcal{H}\)._ _We have \(\alpha_{\mathcal{F}}=\{(\mathcal{P},1),(\mathcal{P}^{2},1)\}\) and \(\alpha_{\mathcal{H}}=\{(\mathcal{P}^{3},1),(\mathcal{P}^{5},1)\}\). Now, if \(\beta=\{(\mathcal{P},1),(\mathcal{P}^{4},1)\}\), then clearly \(\alpha_{\mathcal{F}}\preceq\beta\preceq\alpha_{\mathcal{H}}\), but there can be no ultrafilter \(\mathcal{G}\) such that \(\alpha_{\mathcal{G}}\approx\beta\) (which is in this case equivalent to \(\alpha_{\mathcal{G}}=\beta\)), \(\mathcal{F}\,\widetilde{\mid}\,\mathcal{G}\) and \(\mathcal{G}\,\widetilde{\mid}\,\mathcal{H}\). Namely, for such an ultrafilter by Proposition 1.2 there would be some \(y\in\mu(\mathcal{G})\) such that \(p^{2}q\,^{*}\mid y\), and the only possibility is \(y=p^{4}q\). In turn, there would be some \(z\in\mu(\mathcal{H})\) such that \(p^{4}q\,^{*}\mid z\), so \(z=p^{5}q^{3}\). However, \(v(p^{5}q^{3})\neq v(p^{3}q^{5})=\mathcal{H}\) because for \(A=\{a^{3}b^{5}:a,b\in P\wedge a<b\}\) we have \(p^{3}q^{5}\in{}^{*}\!A\) and \(p^{5}q^{3}\notin{}^{*}\!A\)._ As a special case of the previous theorem, we have the following. **Corollary 4.9**: _For every \(\mathcal{U}\)-closed pattern \(\beta\) there is an ultrafilter \(\mathcal{G}\) such that \(\alpha_{\mathcal{G}}\approx\beta\)._ One may also be tempted to think that, if \(\beta\) is a \(\mathcal{U}\)-closed pattern which is not finite (as defined in Example 4.2(b)), then there are in fact \(2^{\mathfrak{c}}\) ultrafilters \(\mathcal{G}\) as described in Corollary 4.9, because of the possibility of choosing, at limit stages of the construction, different ultrafilters \(\mathcal{W}\) on \(\gamma\) and getting different \(\mathcal{G}_{\gamma}=\lim_{\delta\to\mathcal{W}}\mathcal{G}_{\delta}\). However, we saw in Example 2.2 that there are a basic ultrafilter \(\mathcal{P}^{u}\) and \(p\in\mu(\mathcal{P})\) for which \(u_{p}\) is a singleton, say \(u_{p}=\{x\}\). It follows that for \(\beta=\{(\mathcal{P}^{u},1)\}\) there can be only one ultrafilter \(\mathcal{F}=v(p^{x})\) such that \(\alpha_{\mathcal{F}}\approx\beta\). By Theorem 4.7 this ultrafilter must then be divisible by all \(\mathcal{G}\in\mathcal{P}^{\omega}\). ## 5 Closing remarks and open questions Although we tried to keep the notation as simple as possible, it was not always feasible; we hope that this did not discourage the readers. To help the intuition, we included in this paper a large number of examples and counterexamples. A few more remarks are due. First, it may seem that some of the proofs could have been simpler, using \(\mathfrak{c}^{+}\)-saturation. In some cases, of course, this may be true. However, sometimes this is prevented by the fact that some sets (such as \(u_{p}\), unless they are singletons) are not internal, and hence many other sets derived from them are not internal either. Second, why did we assume throughout the text that \(|^{*}\!\mathbb{N}|=\mathfrak{c}^{+}\)? This condition implies that, for every \(x\in{}^{*}\!\mathbb{N}\setminus\!\mathbb{N}\), the cardinality of \(\{y\in{}^{*}\!\mathbb{N}:y<x\}\) is \(\mathfrak{c}^{+}\), and in turn many other internal sets have the same cardinality.
This allowed us to conclude that there is only one possible infinite value of the number of divisors of a fixed ultrafilter from a given \(\mathcal{E}_{\mathcal{P}}\)-class, which we denoted by \(\infty\). With more than one such value things could get significantly more complicated. Here are a few questions that remain unanswered. **Question 5.1**: _(a) Is it possible, for a given \(\mathcal{P}\in\overline{P}\setminus P\), to describe precisely the order \((\mathcal{E}_{\mathcal{P}},\prec_{\mathcal{P}})\)?_ _(b) In particular, is \((\mathcal{E}_{\mathcal{P}},\preceq_{\mathcal{P}})\) isomorphic to \((\mathcal{E}_{\mathcal{Q}},\preceq_{\mathcal{Q}})\) for all nonprincipal \(\mathcal{P}\) and \(\mathcal{Q}\)?_ **Question 5.2**: _Is it possible to improve Theorem 4.7 (or at least Corollary 4.9) to get an ultrafilter \(\mathcal{G}\) such that \(\alpha_{\mathcal{G}}=\beta\)?_ Funding: The author gratefully acknowledges financial support of the Science Fund of the Republic of Serbia (call PROMIS, project CLOUDS, grant no. 6062228) and of the Ministry of Science, Technological Development and Innovation of the Republic of Serbia (grant no. 451-03-47/2023-01/200125).